11445817
https://en.wikipedia.org/wiki/ROM%20cartridge
ROM cartridge
A game cartridge, usually referred to in context simply as a cartridge, cart, or card, is a replaceable part designed to be connected to a consumer electronics device such as a home computer, video game console or, to a lesser extent, electronic musical instruments. A special type of cartridge named ROM cartridge is a memory card containing ROM. ROM cartridges can be used to load and run software such as video games or other application programs. The cartridge slot could also be used for hardware additions, RAM expansions or speech synthesis for example. Some cartridges had battery-backed static random-access memory, allowing a user to save data such as game progress or scores between uses. ROM cartridges allowed the user to rapidly load and access programs and data without using a floppy drive, which was an expensive peripheral during the home computer era, and without using slow, sequential, and often unreliable Compact Cassette tape. An advantage for the manufacturer was the relative security of the software in cartridge form, which was difficult for end users to replicate. However, cartridges were expensive to manufacture compared to floppy disks or CD-ROMs. As disk drives became more common and software expanded beyond the practical limits of ROM size, cartridge slots disappeared from later game consoles and personal computers. Cartridges are still used today with handheld game consoles such as the Nintendo DS, Nintendo 3DS, PlayStation Vita, and the tablet-like hybrid console Nintendo Switch, although sometimes referred to as game cards. Its widespread usage for video gaming has led the ROM cartridge to be often colloquially called a game cartridge. History ROM cartridges were popularized by early home computers which featured a special bus port for the insertion of cartridges containing software in ROM. In most cases the designs were fairly crude, with the entire address and data buses exposed by the port and attached via an edge connector; the cartridge was memory mapped directly into the system's address space such that the CPU could execute the program in place without having to first copy it into expensive RAM. The Texas Instruments TI-59 family of programmable scientific calculators used interchangeable ROM cartridges that could be installed in a slot at the back of the calculator. The calculator came with a module that provides several standard mathematical functions including solution of simultaneous equations. Other modules were specialized for financial calculations, or other subject areas, and even a "games" module. Modules were not user-programmable. The Hewlett-Packard HP-41C had expansion slots which could hold ROM memory as well as I/O expansion ports. Notable computers using cartridges in addition to magnetic media were the Commodore VIC-20 and 64, MSX standard, the Atari 8-bit family (400/800/XL/XE), the Texas Instruments TI-99/4A (where they were called Solid State Command Modules and were not directly mapped to the system bus) and the IBM PCjr (where the cartridge was mapped into BIOS space). Some arcade system boards, such as Capcom's CP System and SNK's Neo Geo, also used ROM cartridges. A precursor to modern game cartridges of second generation video consoles was introduced with the first generation video game console Magnavox Odyssey in 1972, using jumper cards to turn on and off certain electronics inside the console. 
A modern take on game cartridges was invented by Wallace Kirschner, Lawrence Haskel and Jerry Lawson of Alpex Computer Corporation, first unveiled as part of the Fairchild Channel F home console in 1976. The cartridge approach gained more popularity with the Atari 2600 released the following year. From the late 1970s to mid-1990s, the majority of home video game systems were cartridge-based. As compact disc technology came to be used widely for data storage, most hardware companies moved from cartridges to CD-based game systems. Nintendo remained the lone hold-out, using cartridges for their Nintendo 64 system; the company did not transition to optical media until 2001's GameCube. SNK still released games on the cartridge-based Neo Geo until 2004, with the final official release being Samurai Shodown V Special. Nintendo's handheld consoles, meanwhile, continued to use cartridges due to their faster loading times and minimal equipment for data reading being beneficial for playing video games in short, several-minute intervals. In 1976, 310,000 home video game cartridges were sold in the United States. Between 1983 and 2013, a total of software cartridges had been sold for Nintendo consoles. Design ROM cartridges can not only carry software, but additional hardware expansions as well. Examples include the Super FX coprocessor chip in some Super NES game paks, the SVP chip in the Sega Genesis version of Virtua Racing, and voice and chess modules in the Magnavox Odyssey². Micro Machines 2 on the Genesis/Mega Drive used a custom "J-Cart" cartridge design by Codemasters which incorporated two additional gamepad ports. This allowed players to have up to four gamepads connected to the console without the need for an additional multi-controller adapter. The ROM cartridge slot principle continues in various mobile devices, thanks to the development of high density low-cost flash memory. For example, a GPS navigation device might allow user updates of maps by inserting a flash memory chip into an expansion slot. An E-book reader can store the text of several thousand books on a flash chip. Personal computers may allow the user to boot and install an operating system off a USB flash drive instead of CD ROM or floppy disks. Digital cameras with flash drive slots allow users to rapidly exchange cards when full, and allow rapid transfer of pictures to a computer or printer. Advantages and disadvantages Storing software on ROM cartridges has a number of advantages over other methods of storage like floppy disks and optical media. As the ROM cartridge is memory mapped into the system's normal address space, software stored in the ROM can be read like normal memory; since the system does not have to transfer data from slower media, it allows for nearly instant load time and code execution. Software run directly from ROM typically uses less RAM, leaving memory free for other processes. While the standard size of optical media dictates a minimum size for devices which can read disks, ROM cartridges can be manufactured in different sizes, allowing for smaller devices like handheld game systems. ROM cartridges can be damaged, but they are generally more robust and resistant to damage than optical media; accumulation of dirt and dust on the cartridge contacts can cause problems, but cleaning the contacts with an isopropyl alcohol solution typically resolves the problems without risk of corrosion. ROM cartridges typically have less capacity than other media. 
The PCjr-compatible version of Lotus 1-2-3 comes on two cartridges and a floppy disk. ROM cartridges are typically more expensive to manufacture than discs, and the storage space available on a cartridge is less than that of an optical disc like a DVD-ROM or CD-ROM. Techniques such as bank switching were employed to use cartridges with a capacity higher than the amount of memory directly addressable by the processor (a simplified illustration of this technique appears at the end of this article). As video games became more complex (and the size of their code grew), software manufacturers began sacrificing the quick load time of ROM cartridges for the greater capacity and lower cost of optical media. Another source of pressure in this direction was that optical media could be manufactured in much smaller batches than cartridges; releasing a cartridge video game inevitably came with the risk of producing thousands of unsold cartridges.
Electronic musical instruments usage
Besides their prominent usage in video game consoles, ROM cartridges have also been used in a small number of electronic musical instruments, particularly electronic keyboards. Yamaha made several models with such features, among them its DX synthesizers of the 1980s, such as the DX1, DX5 and DX7, and its PSR keyboard lineup of the mid-1990s, namely the PSR-320, PSR-420, PSR-520, PSR-620, PSR-330, PSR-530 and the PSR-6000. These keyboards used specialized cards known as Music Cartridges, ROM cartridges simply containing MIDI data to be played on the keyboard as MIDI sequence or song data. This technology, however, quickly became obsolete and extremely rare after the advent of floppy disk drives in later models. Casio was also known to use similar cartridges, known as ROM Packs, in the 1980s, before Yamaha's Music Cartridges were introduced; examples include several models in the Casiotone line of portable electronic keyboards.
Cartridge-based video game consoles
Amstrad: Amstrad GX4000
Atari: Atari 2600, Atari 5200, Atari 7800, Atari XEGS, Atari Lynx, Atari Jaguar
Bandai: WonderSwan, WonderSwan Color/SwanCrystal
Coleco: ColecoVision
Fairchild Camera and Instrument: Fairchild Channel F
Fisher-Price: Pixter, Smart Cycle
Interton: Interton Video 2000
LeapFrog: Leapster, LeapPad, LeapTV
Magnavox/Philips: Magnavox Odyssey, Magnavox Odyssey 2/Philips Videopac G7000
Mattel: Intellivision
Milton Bradley: Vectrex, Microvision
NEC: TurboGrafx-16/PC Engine, TurboExpress
Nikko Europe: digiBLAST
Nintendo: NES/Famicom (with its clones like the Terminator), SNES/Super Famicom, Nintendo 64, Game Boy, Game Boy Color, Game Boy Advance, Virtual Boy, Pokémon Mini, Nintendo DS, Nintendo 2DS, Nintendo 3DS, Nintendo Switch
Nokia: N-Gage/N-Gage QD
Sega: SG-1000 Mark I/SG-1000 Mark II, Sega Master System/Sega Mark III, Sega Genesis/Mega Drive, Sega Game Gear, Sega Pico, Sega 32X, Genesis Nomad, Advanced Pico Beena
SNK: Neo Geo, Neo Geo Pocket, Neo Geo Pocket Color, Neo Geo X
Sony: PS Vita/PlayStation TV
See also
ROM image Dongle Port expander RAM pack Currah
References
External links
Solid-state computer storage media Computer connectors Video game distribution
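The bank switching mentioned above can be illustrated with a short simulation. The following Python sketch is hypothetical and does not model any particular console's mapper: it represents a cartridge whose total ROM is larger than the CPU-visible window, with a bank-select register choosing which slice of the ROM the window exposes. All names and sizes are invented for illustration.

```python
# Minimal, illustrative model of ROM bank switching (not any specific console's mapper).
# A 16 KiB CPU window is mapped onto a larger cartridge ROM; writing a bank number to a
# "bank select" register changes which 16 KiB slice the window exposes.

WINDOW_SIZE = 16 * 1024          # bytes visible to the CPU at one time
ROM_SIZE = 128 * 1024            # total cartridge ROM, larger than the window


class BankedCartridge:
    """Toy cartridge: 128 KiB of ROM exposed through a 16 KiB window."""

    def __init__(self, rom: bytes):
        assert len(rom) == ROM_SIZE
        self.rom = rom
        self.bank = 0                            # currently selected bank

    def select_bank(self, bank: int) -> None:
        """Emulate a write to the cartridge's bank-select register."""
        self.bank = bank % (ROM_SIZE // WINDOW_SIZE)

    def read(self, address: int) -> int:
        """Read a byte at an address inside the CPU-visible window."""
        offset = self.bank * WINDOW_SIZE + (address % WINDOW_SIZE)
        return self.rom[offset]


if __name__ == "__main__":
    # Fill each 16 KiB bank with its own bank number so the effect is visible.
    rom = bytes(b for b in range(ROM_SIZE // WINDOW_SIZE) for _ in range(WINDOW_SIZE))
    cart = BankedCartridge(rom)
    print(cart.read(0x0000))     # 0: bank 0 is selected at reset
    cart.select_bank(5)
    print(cart.read(0x0000))     # 5: the same CPU address now reads from bank 5
```

Writing a bank number to the select register and then reading the same CPU address returns data from a different part of the ROM, which is how games larger than the addressable window were stored on real cartridges.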
7161512
https://en.wikipedia.org/wiki/List%20of%20cryptographic%20file%20systems
List of cryptographic file systems
This is a list of filesystems with support for filesystem-level encryption. Not to be confused with full-disk encryption.
General-purpose filesystems with encryption
AdvFS on Digital Tru64 UNIX
Novell Storage Services on Novell NetWare and Linux
NTFS with Encrypting File System (EFS) for Microsoft Windows
ZFS since Pool Version 30
Ext4, added in Linux kernel 4.1 in June 2015
F2FS, added in Linux 4.2
APFS, macOS High Sierra (10.13) and later
Cryptographic filesystems
FUSE-based file systems
Integrated into the Linux kernel
eCryptfs
Rubberhose filesystem (discontinued)
StegFS (discontinued)
Integrated into other UNIXes
PEFS (Private Encrypted File System) on FreeBSD
geli on FreeBSD
EFS (Encrypted File System) on AIX
See also Comparison of disk encryption software
References Computing-related lists Disk encryption File systems
25450620
https://en.wikipedia.org/wiki/1938%20Alabama%20Crimson%20Tide%20football%20team
1938 Alabama Crimson Tide football team
The 1938 Alabama Crimson Tide football team (variously "Alabama", "UA" or "Bama") represented the University of Alabama in the 1938 college football season. It was the Crimson Tide's 45th overall and 6th season as a member of the Southeastern Conference (SEC). The team was led by head coach Frank Thomas, in his eighth year, and played their home games at Denny Stadium in Tuscaloosa and Legion Field in Birmingham, Alabama. They finished the season with a record of seven wins, one loss and one tie (7–1–1 overall, 4–1–1 in the SEC). The Crimson Tide opened the season with a 19–7 victory in an intersectional contest against USC at Los Angeles. They then followed up the win with consecutive shutouts, home victories over non-conference opponents Howard and NC State on homecoming. However, Alabama then was shut out 13–0 by Tennessee, their first loss against the Volunteers since 1932. The Crimson Tide then rebounded with victories against Sewanee, Kentucky and Tulane. After a 14–14 tie against Georgia Tech, Alabama defeated Vanderbilt in their season finale. With a final record of 7–1–1, Alabama was ranked No. 13 in the final AP Poll of the season. Additionally, after the season the Associated Press recognized Alabama as having the best record (40–4–3) and highest winning percentage (.909) of any major college team for the five-year period between 1934 and 1938. Statistically, the defense was one of the most dominant in school history and still holds numerous defense records. Schedule On December 5, 1937, Frank Thomas announced the 1938 schedule. The intersectional game against USC was announced in August 1937 and was the first between the two football powers. The remaining schedule included road games at Kentucky and Georgia Tech with the remaining three games split evenly between Denny Stadium and Legion Field. Game summaries USC Source: In August 1937, university officials announced Alabama would open the 1938 season in Los Angeles against the University of Southern California (USC). Looking for "revenge" after their January loss in the Rose Bowl, their first loss on the West Coast, the Crimson Tide defeated the Trojans 19–7 at the Los Angeles Memorial Coliseum. After a scoreless first quarter, Alabama scored two touchdowns in the second quarter to take a 13–0 halftime lead. The scores came on a pair of Herschel Mosley touchdown passes, the first on a seven-yard pass to Billy Slemons and the second on an 18-yard pass to Gene Blackwell. The Trojans responded after the first Alabama touchdown with their deepest drive into Crimson Tide territory of the game. On the drive, Robert Peoples connected with Grenny Lansdell for a 36-yard gain to the Alabama 22. However, the Alabama defense held, and USC failed to score after they turned the ball over on downs at the Alabama 13-yard line. After they held their 13–0 lead through the third quarter, Hal Hughes intercepted an Oliver Day pass and returned it 25-yards for an Alabama touchdown to make the score 19–0 after Vic Bradford missed his second extra point of the game. Later in the fourth, the Trojans scored their only points of the game. The one-yard Day touchdown run was set up after Al Krueger recovered Charley Boswell fumbled punt at the Alabama one-yard line. The victory was their first all-time against USC. Over 6,000 fans greeted the team at the Alabama Great Southern Railroad station in downtown Tuscaloosa upon their arrival the following Tuesday to celebrate their victory. 
Howard Source: A week after their intersectional victory over USC to open the season, Alabama hosted Howard (now Samford University) in their home opener. In the game, the Crimson Tide outgained the Bulldogs in rushing yards 354 to 8 in their 34–0 shutout at Denny Stadium. Alabama scored their first touchdown on a 15-yard Billy Slemons run to take a 7–0 first quarter lead. In the second quarter, touchdowns were scored by George Zivich on a 43-yard run and by Alvin Davis on a 56-yard run, extending the Alabama lead to 20–0 at halftime. The Crimson Tide then closed the game with a pair of second half touchdowns for the 34–0 victory. Davis scored in the third on a two-yard run and Charlie Holm scored in the fourth on a three-yard run. Davis starred for Alabama in the game with his 153 yards rushing on 15 attempts with a pair of touchdowns. The victory improved Alabama's all-time record against Howard to 16–0–1. NC State Source: In their third and final non-conference game of the season, Alabama hosted North Carolina State University (NC State) in their annual homecoming contest. In the game, the Crimson Tide's two second-quarter touchdowns were enough to defeat the Wolfpack in a 14–0 shutout at Denny Stadium. After they were held without a first down in the opening quarter, Alabama scored the only points of the game with their two second-quarter touchdowns. The first came on a 28-yard Herschel Mosley pass to Erin Warren and the second on a seven-yard Mosley touchdown run. The Alabama defense dominated the Wolfpack offense and allowed negative rushing yardage (minus four) and zero yards passing. On offense, Mosley starred for the Crimson Tide with his 123 rushing yards on 15 attempts and one passing and one rushing touchdown. The victory was their first all-time against NC State. Tennessee Source: In Birmingham, Alabama was upset by rival Tennessee 13–0 at Legion Field. Leonard Coffman scored both of the Volunteers' touchdowns on one-yard runs in the first and third quarters. George Cafego also starred for Tennessee with his 120 rushing yards on 17 attempts that included separate runs of 48 and 33 yards. The loss was Alabama's first against Tennessee since the 1932 season, and brought Alabama's all-time record against Tennessee to 13–6–2. Sewanee Source: A week after their loss to Tennessee, Alabama defeated the Sewanee Tigers 32–0 at Denny Stadium. After a scoreless first quarter, Alabama took a 7–0 lead in the second after Vic Bradford scored on a one-yard quarterback sneak. Later in the quarter, a 51-yard Alvin Davis touchdown run was called back due to a holding penalty, and the Crimson Tide led 7–0 at the half. After Dallas Wicke scored on a one-yard run in the third, Alabama scored 19 fourth quarter points for the 32–0 win. In the fourth, Charley Boswell had a pair of rushing touchdowns and threw a third to Erin Warren in the win. The victory improved Alabama's all-time record against Sewanee to 17–10–3, in what was their last all-time meeting, as Sewanee withdrew from the SEC following the 1940 season and de-emphasized athletics. Kentucky Source: Entering their contest against Kentucky, Alabama appeared in the rankings at No. 18 in the weekly AP Poll. In the game, the Crimson Tide defeated the Wildcats 26–6 on homecoming at McLean Stadium. Alabama opened the game with a pair of touchdowns to take a 14–0 lead in the first quarter. Charlie Holm scored first on a one-yard run and Vic Bradford scored the second on a 31-yard touchdown reception from Herschel Mosley. 
Kentucky responded in the second with their only points on a 71-yard Dave Zoeller touchdown run to cut the Alabama lead to 14–6 at the half. The Crimson Tide then scored on a pair of Mosley touchdown passes in the second half. The first came on a six-yard pass to Bradford in the third and the second on a nine-yard pass to Erin Warren in the fourth. The victory improved Alabama's all-time record against Kentucky 17–1. Tulane Source: After their victory over Kentucky, the Crimson Tide moved up three positions to the No. 15 spot in the weekly poll. In the game, the Crimson Tide defeated the Tulane Green Wave 3–0 after Vic Bradford converted a game-winning, 17-yard field goal late in the fourth quarter. The victory improved Alabama's all-time record against Tulane to 12–3–1. Georgia Tech Source: After their close victory over Tulane, the Crimson Tide dropped one position to the No. 16 spot in the weekly poll. In their game against Georgia Tech Alabama fell behind 14–0 after the first quarter, but a pair of second half touchdowns gave the Crimson Tide a 14–14 tie against the Yellow Jackets at Grant Field. Georgia Tech took an early 14–0 lead after W. C. Gibson threw a 16-yard touchdown pass to George Smith and W. H. Ector scored on a two-yard run. Still down 14–0 as they entered the third quarter, Alabama scored their first points of the game on a three-yard Alvin Davis touchdown run to cap a 57-yard drive. The Crimson Tide then tied the game in the fourth when they executed a hook and lateral play, with Davis crossing the endzone line for a 66-yard touchdown. Alabama was then in position to attempt a game-winning field goal from the Jackets' 15; however, time expired before they could get a play off which resulted in the 14–14 tie. The tie brought Alabama's all-time record against Georgia Tech to 11–10–3. Vanderbilt Source: In their season finale against the Vanderbilt Commodores, Alabama won 7–0 at Legion Field on Thanksgiving Day. The only scoring drive began in the third and ended early in the fourth with a two-yard Vic Bradford touchdown run. Bradford's extra point was then blocked, but George Zivich recovered it and took it in for the point to give Alabama the 7–0 lead. The victory improved Alabama's all-time record against Vanderbilt to 11–9. After the season After all of the regular season games were completed, the final AP Poll was released in early December. In the final poll, Alabama held the No. 13 position. Alabama was also recognized by the Associated Press for having the best record (40–4–3) and highest winning percentage (.909) of any major, college team for the five-year period between 1934 and 1938. Statistically, the 1938 defense was one of the best in school history. 
The 1938 squad still holds numerous defensive records, including:
Fewest total yards allowed in a season, with 701
Fewest total yards allowed per game, with an average of 77.9
Fewest total yards allowed per play, with an average of 1.2
Fewest first downs allowed in a season, with 26
Fewest rushing yards allowed in a season, with 305
Fewest rushing yards allowed per game, with an average of 33.9
Fewest rushing yards allowed per play, with an average of 0.95
Fewest passing attempts allowed per game, with an average of 9.8
Fewest passing completions allowed per game, with an average of 3.4
Fewest passing yards allowed in a season, with 291
Fewest passing yards allowed per game, with an average of 32.7
NFL Draft
Several players who were varsity lettermen from the 1938 squad were drafted into the National Football League (NFL) between the 1939 and 1941 drafts. These players included the following:
Personnel
Varsity letter winners
Coaching staff
References General Specific Alabama Alabama Crimson Tide football seasons Alabama Crimson Tide football
28273766
https://en.wikipedia.org/wiki/Ngrep
Ngrep
ngrep (network grep) is a network packet analyzer written by Jordan Ritter. It has a command-line interface and relies upon the pcap library and the GNU regex library. ngrep supports Berkeley Packet Filter (BPF) logic to select network sources, destinations, or protocols, and also allows matching patterns or regular expressions in the data payload of packets using GNU grep syntax, showing packet data in a human-friendly way. ngrep is an open source application, and the source code is available to download from the ngrep site on SourceForge. It can be compiled and ported to multiple platforms; it works on many UNIX-like operating systems (Linux, Solaris, illumos, BSD, AIX) and also on Microsoft Windows.
Functionality
ngrep is similar to tcpdump, but it has the ability to look for a regular expression in the payload of a packet and show the matching packets on a screen or console. It allows users to see all unencrypted traffic being passed over the network, by putting the network interface into promiscuous mode. ngrep, with an appropriate BPF filter, can be used to debug plain-text protocol interactions such as HTTP, SMTP, FTP, and DNS, or to search for a specific string or pattern using regular expression syntax. ngrep can also be used to capture traffic on the wire and store pcap dump files, or to read files generated by other sniffer applications such as tcpdump or Wireshark. ngrep has various options or command line arguments; the ngrep man page on UNIX-like operating systems shows a list of available options.
Using ngrep
In these examples, it is assumed that eth0 is the network interface in use.
Capture network traffic incoming/outgoing to/from the eth0 interface and show parameters following HTTP (TCP/80) GET or POST methods:
ngrep -l -q -d eth0 -i "^GET |^POST " tcp and port 80
Capture network traffic incoming/outgoing to/from the eth0 interface and show the HTTP (TCP/80) User-Agent string:
ngrep -l -q -d eth0 -i "User-Agent: " tcp and port 80
Capture network traffic incoming/outgoing to/from the eth0 interface and show the DNS (UDP/53) queries and responses:
ngrep -l -q -d eth0 -i "" udp and port 53
Security
Capturing raw network traffic from an interface requires special or superuser privileges on some platforms, especially on Unix-like systems. ngrep's default behavior is to drop privileges on those platforms, running under a specific unprivileged user. Like tcpdump, it is also possible to use ngrep for the specific purpose of intercepting and displaying the communications of another user or computer, or an entire network. A privileged user running ngrep on a server or workstation connected to a device configured with port mirroring on a switch, router, or gateway, or connected to any other device used for network traffic capture on a LAN, MAN, or WAN, can watch all unencrypted information such as login IDs, passwords, or the URLs and content of websites being viewed on that network. 
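As a rough illustration of what ngrep does conceptually (a BPF pre-filter to narrow the capture, followed by a regular-expression match against each packet's payload), the following Python sketch uses the third-party scapy library. It is not ngrep's implementation; it assumes scapy is installed, that the interface is named eth0, and that it is run with capture privileges.

```python
# Illustrative Python/scapy sketch of ngrep's idea: apply a BPF filter to select
# packets, then run a regular expression over each packet's raw payload.
# Requires the third-party scapy package and capture privileges (e.g. root).
import re
from scapy.all import sniff, Raw   # scapy provides the capture loop and packet classes

PATTERN = re.compile(rb"^(GET|POST) ")   # match HTTP request lines, as in the examples above

def show_match(packet):
    # Only packets that carry a raw payload can be matched against the pattern.
    if packet.haslayer(Raw) and PATTERN.search(bytes(packet[Raw].load)):
        print(packet.summary())
        print(bytes(packet[Raw].load)[:120])

# The BPF filter narrows the capture to TCP port 80 before any regex work is done.
sniff(iface="eth0", filter="tcp and port 80", prn=show_match, store=False)
```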
Supported protocols IPv4 and IPv6, Internet Protocol version 4 and version 6 TCP, Transmission Control Protocol UDP, User Datagram Protocol ICMPv4 and ICMPv6, Internet Control Message Protocol version 4 and version 6 IGMP, Internet Group Management Protocol Ethernet, IEEE 802.3 PPP, Point to Point Protocol SLIP, Serial Line Internet Protocol FDDI, Fiber Data Distribution Protocol Token Ring, IEEE 802.5 See also Comparison of packet analyzers snoop, a command line packet analyzer included with Solaris and illumos dsniff, a packet sniffer and set of traffic analysis tools netsniff-ng, a free Linux networking toolkit etherape, a network mapping tool that relies on sniffing traffic tcptrace, a tool for analyzing the logs produced by tcpdump Microsoft Network Monitor, a packet analyzer xplico, a network forensics analysis tool References External links Official site for ngrep Ngrep - Linux man page Network analyzers Unix network-related software Windows security software MacOS security software Free software programmed in C Free network management software Software using the BSD license
38954406
https://en.wikipedia.org/wiki/2011%20QF99
2011 QF99
2011 QF99 is a minor planet from the outer Solar System and the first known Uranus trojan to be discovered. It measures approximately in diameter, assuming an albedo of 0.05. It was first observed on 29 August 2011 during a deep survey of trans-Neptunian objects conducted with the Canada–France–Hawaii Telescope, but its identification as a Uranian trojan was not announced until 2013. 2011 QF99 temporarily orbits near Uranus's leading Lagrangian point, L4. It will continue to librate around L4 for at least 70,000 years and will remain a Uranus co-orbital for up to three million years; it is thus a temporary Uranus trojan—a centaur captured some time ago. Uranus trojans are generally expected to be unstable, and none of them are thought to be of primordial origin. A simulation led to the conclusion that at any given time, 0.4% of the centaurs in the scattered population within 34 AU would be Uranus co-orbitals, of which 64% (0.256% of all centaurs) would be in horseshoe orbits, 10% (0.04%) would be quasi-satellites, and 26% (0.104%) would be trojans, evenly split between the L4 and L5 groups. A second Uranian trojan was announced in 2017. References External links Uranus trojans Minor planet object articles (unnumbered)
24149353
https://en.wikipedia.org/wiki/United%20States%20Army%20Recruiting%20Command
United States Army Recruiting Command
The United States Army Recruiting Command (USAREC) is responsible for manning both the United States Army and the Army Reserve. Recruiting operations are conducted throughout the United States, U.S. territories, and at U.S. military facilities in Europe, Asia, and the Middle East. This process includes the recruiting, medical and psychological examination, induction, and administrative processing of potential service personnel. USAREC is a major subordinate command under the United States Army Training and Doctrine Command (TRADOC), and is commanded by a Major General and assisted by a Deputy Commanding General (Brigadier General) and a Command Sergeant Major. The Command employs nearly 15,000 military and civilian personnel, the majority being Soldiers who are screened and selected to serve on recruiting duty for three to four years. Upon completing their recruiting assignment, these Soldiers can either return to their primary military occupational specialty (MOS) or volunteer to remain in the recruiting career field; those who remain in the recruiting career field are considered cadre recruiters and comprise the majority of the enlisted leadership of the command, providing experience, training, and continuity to the recruiting force.
History
Recruiting for the U.S. Army began in 1775 with the raising and training of the Continentals to fight in the American Revolutionary War. The Command traces its organizational history to 1822, when Major General Jacob Jennings Brown, commanding general of the Army, initiated the General Recruiting Service. For much of the rest of the 19th century recruitment was left to the regimental recruiting parties, usually recruiting in their regional areas as was the practice in Europe. Up to the commencement of the American Civil War two types of forces existed in the United States that performed their own recruiting: those for the Regular Army, and those for the state Militia. Due to a severe shortage of troops after the first year of the war, conscription was introduced by both the Union and the Confederacy to enable the continuation of operations on a thousand-mile front. Conscription was first introduced in the Confederacy by President Jefferson Davis on the recommendation of General Robert E. Lee on 16 April 1862. The United States Congress enacted by comfortable majorities the Enrollment Act of 1863 on 3 March after two weeks of debate. As a result, approximately 2,670,000 men were conscripted for federal and militia service by the Northern states. The realization that volunteers could never again be depended on for service was clear in the post-war analysis, but the dependence on them prevailed until the commencement of World War I, when President Woodrow Wilson, arguing for America's exclusion from the European war, believed that sufficient volunteers would be found to meet the nation's military needs. However, European experiences with industrial warfare prevailed, and two years later Congress passed the Selective Service Act of 1917. There were two primary reasons for President Wilson approving conscription: he recognized the efficiency and equity of the draft over the difficult-to-manage system of inducting and training volunteers, and, by opting for conscription, he saw the possibility of blocking one of his leading political critics and opponents, former President Theodore Roosevelt, from raising a volunteer force to lead in France. 
The Act was, however, very selective, in that "the draft 'selected' those men the Army wanted and society could best spare: 90 percent of the draftees were unmarried, and 70 percent were farm hands or manual hands." Conscription was again used to quickly grow the nation's small peacetime Army in 1940 into a wartime Army of more than 8.3 million personnel. However, there was society-wide support for conscription during World War II, in part due to the efforts of the National Emergency Committee (NEC) of the Military Training Corps Association led by Grenville Clark, who became known as the "Father of Selective Service." The Congress, faced with an imminent need to mobilize, still took three months of debate before finally passing the Selective Training and Service Act (STASA) of 1940 on 16 September 1940. Nearly 50 million men registered and 10 million were inducted into the armed forces under the Act. Although the STASA was extended after the war, it ended on 31 March 1947, and the Army had to turn to recruiting volunteers again, requiring an estimated 30,000 volunteers a month but seeing only 12,000 enlisting. With the Cold War looming, the Congress authorized the Selective Service Act of 1948 to enable President Harry S. Truman to provide for 21 months of active Federal service, with all men from ages 18 to 26 required to register. This Act was extended due to the start of the Korean War, and was replaced by the Universal Military Training and Service Act of 1951, which revised the earlier Act. The new Act extended the president's authority to induct citizens for four years, granted him the authority to recall reservists, lowered the draft age to 18, lengthened the term of service to two years, and cancelled deferments for married men without children. After the end of the Korean War, the draft remained in force but became increasingly unpopular, although it continued to encourage volunteers and selected the bare minimum of annual recruits. Repeatedly renewed by overall majorities in Congress in 1955, 1959, and 1963, its final extension in 1967 was also passed by a majority of Congress, but only after a year of hearings and public debate. The U.S. Army became an all-volunteer force again in 1973. During the years of the Vietnam War, from 1965 to 1973, there were 1,728,254 inductions through selective service. Public support for the draft, which had remained high even after the Korean War, fell to a low in the early 1970s: draftees, who constituted only 16 percent of the armed forces but 88 percent of infantry soldiers in Vietnam, accounted for over 50 percent of combat deaths in 1969, a peak year for casualties. Little wonder that the draft became the focus of anti-Vietnam activism. With these political consequences in mind, in 1969 President Nixon appointed a commission, led by former Secretary of Defense Thomas Gates, "to develop a comprehensive plan for eliminating conscription and moving toward an All Volunteer Armed Force." However, even before this commission submitted its report, on 13 May 1969 President Nixon informed the Congress, in his Special Message to Congress on Reforming the Military Draft, that he intended to institute a reform that would see draftees replaced with volunteers. 
In February 1970, the Gates Commission released its favorable AVF report, which stated that "We unanimously believe that the nation’s interests will be better served by an all-volunteer force, supported by an effective stand-by draft, than by a mixed force of volunteers and conscripts; that steps should be taken promptly to move in this direction." Facilitating the transition to an all-volunteer force, the Army created District Recruiting Commands (DRC) through the continental United States to direct the efforts of its recruiters among the civilian population. The DRC's became "Battalions" in 1983. Organizational structure USAREC consists of a division headquarters, five enlisted recruiting brigades, one medical and chaplain recruiting brigade, one recruiting support brigade, and one training brigade. USAREC Headquarters USAREC headquarters is located at Fort Knox, Kentucky, and provides the strategic command and support to the Army's recruiting force. More than 400 officers, enlisted members and civilian employees work in one of the command's eight directorates and 14 staff sections, conducting administration, personnel, resource management, safety, market research and analysis, and public relations operations in support of the recruiting mission and the Soldiers and civilians working to achieve it. The headquarters complex and the personnel working there are managed by a Headquarters Company commanded by a Captain and assisted by a First Sergeant. Enlisted Recruiting Brigades Recruiting Brigades Five enlisted recruiting brigades make up the bulk of USAREC's recruiting force, and are responsible for the achievement of nearly all of the Army and Army Reserve's yearly recruitment missions. Each brigade is commanded by a Colonel and assisted by a Command Sergeant Major, a Headquarters Company, and support staff. Each brigade commands between seven and eight recruiting battalions and is responsible for the operational command and control of all Army recruiting operations within one of five regional geographic areas. Recruiting Battalions, Companies, and Stations Forty-four enlisted recruiting battalions are responsible for the tactical, or day-to-day, command and control of 261 Army recruiting companies conducting recruiting operations within their specific geographic areas. Each battalion is commanded by a Lieutenant Colonel who is assisted by a Command Sergeant Major and a support staff, and provide the local-level command, planning, and guidance to six to eight recruiting companies and approximately 250 recruiters within their area of operations. Each company is commanded by a Captain who is assisted by a First Sergeant, and they lead approximately 30 to 45 recruiters located at one of 1,400 local recruiting stations spread throughout the cities and towns within the battalion's geographic area of operations. Each Army Recruiting Station is commanded by a Station Commander, a successful cadre recruiter in the rank of Staff Sergeant or Sergeant First Class selected to lead an office of three to 15 recruiters in conducting the actual mission of recruiting qualified people into the Army. 
The recruiters in these recruiting stations represent the best of the Army to the American public; for many Americans an Army recruiter might be their first exposure to anyone currently in the military, so the Soldiers selected as recruiters are thoroughly screened both before and throughout their assignment for anything that could prevent them from properly representing the Army in public and successfully completing their mission. Enlisted Recruiting Structure The five enlisted recruiting brigades and their respective battalions are: U.S. Army 1st Recruiting Brigade, located at Fort George G. Meade, Maryland. This brigade consists of eight enlisted recruiting battalions and covers the Northeastern United States, as well as U.S. military bases in Europe, North Africa, and the Middle East. U.S. Army Albany Recruiting Battalion, Watervliet, New York. U.S. Army Baltimore Recruiting Battalion, Fort George G. Meade, Maryland U.S. Army New England Recruiting Battalion, Portsmouth, New Hampshire U.S. Army Harrisburg Recruiting Battalion, New Cumberland Army Depot, Pennsylvania U.S. Army New York City Recruiting Battalion, Fort Hamilton, New York U.S. Army Mid-Atlantic Recruiting Battalion, Lakehurst Naval Air Station, New Jersey U.S. Army Syracuse Recruiting Battalion, Syracuse, New York U.S. Army Richmond Recruiting Battalion, Richmond, Virginia U.S. Army 2d Recruiting Brigade, located at Redstone Arsenal, Alabama. This brigade consists of eight enlisted recruiting battalions and covers the Southeastern United States, Puerto Rico, and the U.S. Virgin Islands. U.S. Army Atlanta Recruiting Battalion, Atlanta, Georgia U.S. Army Columbia Recruiting Battalion, Fort Jackson, South Carolina U.S. Army Jacksonville Recruiting Battalion, Jacksonville, Florida U.S. Army Miami Recruiting Battalion, Miami, Florida U.S. Army Montgomery Recruiting Battalion, Maxwell Air Force Base, Alabama U.S. Army Raleigh Recruiting Battalion, Raleigh, North Carolina U.S. Army Tampa Recruiting Battalion, Tampa, Florida U.S. Army Baton Rouge Recruiting Battalion, Baton Rouge, Louisiana U.S. Army 3d Recruiting Brigade, located at Fort Knox, Kentucky. This brigade consists of eight enlisted recruiting battalions and covers part of the Midwestern United States U.S. Army Chicago Recruiting Battalion, Naval Station Great Lakes, Illinois U.S. Army Cleveland Recruiting Battalion, Cleveland, Ohio U.S. Army Columbus Recruiting Battalion, Columbus, Ohio U.S. Army Indianapolis Recruiting Battalion, Indianapolis, Indiana U.S. Army Great Lakes Recruiting Battalion, Lansing, Michigan U.S. Army Milwaukee Recruiting Battalion, Milwaukee, Wisconsin U.S. Army Minneapolis Recruiting Battalion, Fort Snelling, Minnesota U.S. Army Nashville Recruiting Battalion, Nashville, Tennessee U.S Army 5th Recruiting Brigade, located at Fort Sam Houston, Texas. This brigade consists of seven enlisted recruiting battalions and covers the Southwestern United States and parts of the Midwestern and Western United States not covered by the 3d Recruiting Brigade and 6th Recruiting Brigade, respectively. U.S. Army Dallas Recruiting Battalion, Irving, Texas U.S. Army Denver Recruiting Battalion, Denver, Colorado U.S. Army Houston Recruiting Battalion, Houston, Texas U.S. Army Kansas City Recruiting Battalion, Kansas City, Missouri U.S. Army Oklahoma City Recruiting Battalion, Oklahoma City, Oklahoma U.S. Army San Antonio Recruiting Battalion, Fort Sam Houston, Texas U.S. Army Phoenix Recruiting Battalion, Phoenix, Arizona U.S. 
Army 6th Recruiting Brigade, located at North Las Vegas, Nevada. This brigade consists of seven enlisted recruiting battalions and covers the Western United States, along with Alaska, Hawaii, Guam, American Samoa, Northern Mariana Islands, and U.S. military bases in Japan and South Korea. U.S. Army Seattle Recruiting Battalion, Seattle, Washington U.S. Army Portland Recruiting Battalion, Portland, Oregon U.S. Army Los Angeles Recruiting Battalion, Encino, California U.S. Army Northern California Recruiting Battalion, Sacramento, California U.S. Army Central California Recruiting Battalion, Naval Air Station Lemoore, California U.S. Army Southern California Recruiting Battalion, Mission Viejo, California U.S. Army Salt Lake City Recruiting Battalion, Salt Lake City, Utah Medical Recruiting Brigade The U.S. Army Medical Recruiting Brigade, is located at Fort Knox, Kentucky, and is tasked with recruiting medical professionals and chaplains for direct commission into the Regular Army and Army Reserve as Army Medical Department or Army Chaplain Corps officers along with providing operational oversight for the Army's special operations forces recruiting efforts. The brigade is commanded by a Colonel and assisted by a Command Sergeant Major, a Headquarters Company, and support staff that provide operational command and control to five medical recruiting battalions, the Special Operations Recruiting Battalion, and a chaplain recruiting branch covering the entire United States and Europe. U.S. Army 1st Medical Recruiting Battalion, Fort Meade, Maryland U.S. Army 2d Medical Recruiting Battalion, Redstone Arsenal, Alabama U.S. Army 3d Medical Recruiting Battalion, Fort Knox, Kentucky U.S. Army 5th Medical Recruiting Battalion, Fort Sam Houston, Texas U.S. Army 6th Medical Recruiting Battalion, Las Vegas, Nevada U.S. Army Special Operations Recruiting Battalion (Airborne), Fort Bragg, North Carolina U.S. Army Chaplain Recruiting Branch, Fort Knox, Kentucky Marketing and Engagement Brigade The U.S. Army Marketing and Engagement Brigade, located at Fort Knox, Kentucky, serves as USAREC's recruiting support brigade and is tasked with providing both direct and indirect support to the enlisted and medical recruiting brigades through demonstrations, displays, and engagement with the American public in order to show the skills and benefits of Army service and enhance the Army's recruiting and retention missions. The brigade is commanded by a Colonel and assisted by a Command Sergeant Major, a Headquarters Company, and support staff that provide operational command and control to three specialized support battalions: The U.S. Army Mission Support Battalion, located at Fort Knox, Kentucky, provides recruiting support through the management of display vehicle and exhibit support to the Army's recruiting efforts, as well as specialized teams that interact with specific focus groups. The battalion manages a fleet of interactive exhibit trailers and static displays that can be set up at campuses, fairs, or other special events to allow people to interact with Army Soldiers and their equipment. In addition, the battalion manages three special outreach teams that are also stationed at Fort Knox: U.S. Army Warrior Fitness Team, which competes in regional and national physical fitness or athletic competitions such as the CrossFit Games or Strongman competitions, as well as attend health and fitness events in order to demonstrate the benefits of Army service in achieving and maintaining a healthy and active lifestyle. 
U.S. Army ESports Team, which competes in a variety of regional and national online esports and gaming tournaments. Team members also livestream themselves playing games, competing in tournaments, and interacting with viewers on the Team's Twitch channel U.S. Army Musical Outreach Team, which consists of talented Army musicians that perform at high schools and special events. The U.S. Army Parachute Team, also known as the "Golden Knights," located at Fort Bragg, North Carolina, is the Army's aerial demonstration team and one of only three United States Department of Defense-sanctioned aerial demonstration teams. Since its activation in 1959, the Team has conducted demonstrations of precision freefall and parachuting operations in all 50 U.S. States and 48 countries, earning it the title of the "Army's Goodwill Ambassadors to the World". The Team also competes in national and international skydiving and parachuting competitions, winning over 2,000 gold medals and setting nearly 350 world records. The U.S. Army Marksmanship Unit, located at Fort Benning, Georgia, is the Army's premier small arms marksmanship unit. Unit members are considered to be some of the best marksmen in the world and regularly compete at the Olympic Games and in national and international shooting competitions; since its activation in 1956, Unit members have won 25 Olympic medals and over 65 individual and team championships. In addition, the Unit provides small arms research and development expertise to the Army, conducts marksmanship training to units throughout the United States Department of Defense as well as foreign nations, and performs exhibition shooting events around the nation. Recruiting and Retention College The U.S. Army Recruiting and Retention College, located at Fort Knox, Kentucky, serves as USAREC's training brigade and is responsible for the training and education of all Army and Army Reserve recruiters, career counselors, and recruiting leaders. (The Army National Guard manages its own recruiting and retention program and trains its personnel at the Strength Maintenance Training Center at Camp Robinson, Arkansas.) The college is part of the Army University system, and reports to both the USAREC commanding general for operational command and control and to the U.S. Army Training and Doctrine Command for management and certification of its educational programs. The college is commanded by a Colonel who serves as the institution's Commandant and is assisted by a civilian Dean, a Command Sergeant Major (who also serves as Commandant of the college's Noncommissioned Officer Academy), a Headquarters Company, and support staff that manages and supports approximately 120 Soldiers and civilians, the majority of whom are senior cadre recruiters and career counselors that have been selected to serve for two to three years as instructors based on exceptional past performance in multiple recruiting or retention positions. Approximately 6,500 Soldiers and civilians each year are trained at the college, attending one of the 16 courses offered covering recruiting, career counseling, recruiting leadership, unit command, senior executive leadership, and staff positions. Students selected for duty as field recruiters or career counselors must first graduate from their respective certification course taught at the college in order to serve in those roles in the Army and be authorized to wear the Army Recruiter Badge or Army Career Counselor Badge as a permanent award on their uniform. 
In addition to training the recruiting and retention force, the college provides training to new instructors and training managers for both USAREC and the Army, writes and publishes Army recruiting regulations and doctrine, creates and manages career progression plans for Army recruiters and career counselors, conducts recruiter training missions with foreign nations to share knowledge and best practices, and coordinates with civilian higher learning institutions and accrediting agencies for the awarding of civilian college credits and certifications to the college's graduates. Courses taught at the college are regularly evaluated by the American Council on Education for awarding of civilian college credits towards undergraduate degrees, and the college received an initial six-year accreditation from the Council on Occupational Education in March 2021 which provided the college with a civilian accreditation of its training and educational standards and placed it as one of only 45 COE-accredited federal training and education institutions. Command Current key command personnel of the Command include: Commanding General – Major General Kevin Vereen Deputy Commanding General – Brigadier General John Cushing Deputy Commanding General - Brigadier General Daphne Davis Command Sergeant Major – Command Sgt. Maj. John W. Foley Past commanders Major General Kevin Vereen 2020–Present Major General Frank M. Muth 2018 – 2020 Major General Jeffrey J. Snow 2015 – 2018 Major General Allen W. Batschelet 2013 – 2015 Major General David L. Mann 2011 – 2013 Major General Donald M. Campbell Jr. 2009 – 2011 Major General Thomas P. Bostick 2005 – 2009 Major General Michael D. Rochelle 2002 – 2005 Major General Dennis D. Cavin 2000 – 2002 Major General Evan R. Gaddis 1998 – 2002 Major General Mark R. Hamilton 1997 – 1998 Major General Alfonso E. Lenhardt 1996 – 1997 Major General Kenneth W. Simpson 1993 – 1996 Major General Jack C. Wheeler 1989 – 1993 Major General Thomas P. Carney 1987 -1989 Major General Allen K. Ono 1985 – 1987 Major General Jack O. Bradshaw 1983 – 1985 Major General Howard G. Crowell Jr. 1981 – 1983 Major General Maxwell R. Thurman 1979 – 1981 Major General William L. Mundie 1978 – 1979 Major General Eugene P. Forrester 1975 – 1978 Major General William B. Fulton 1974 – 1975 Major General John Q. Henion 1971 – 1974 Brigadier General Carter W. Clarke Jr. 1971 Major General Donald H. McGovern 1968 – 1971 Brigadier General Frank L. Gunn 1966 – 1968 Brigadier General Leonidas Gavalas 1964 – 1966 See also Initiatives U.S. Army Esports Comparable organizations Marine Corps Recruiting Command (U.S. Marine Corps) United States Navy Recruiting Command Air Education and Training Command (U.S. Air Force) Notes and references Sources van Creveld, Martin, The Transformation of War, The Free Press, New York, 1991 Vandergriff, Donald, Manning the Future Legions of the United States, Praeger Security International, London, 2008 Gates, Thomas, S., The Report of the President's Commission on an All-Volunteer Armed Force, US Government Printing Office, Washington, DC, 1970 External links Brigades and Battalions of the Recruiting Command Recruiting Military recruitment
23974096
https://en.wikipedia.org/wiki/Helith
Helith
Helith Network (or just "Helith") is a hacker collective active since 1999 and a globally spread community. It is suspected that Helith is affiliated with specialists in the field of malware and network security.
Name
The name comes from ancient German, in which the word Helith means "heroes". It was chosen because the group believes that nobody cares for those who are poor or who did not have the same chances as formally educated hackers. It was also chosen to point out that the members simply do what they are prepared to do, even if it conflicts with laws or civil restrictions such as beliefs or ethics. The origin of the name may be traced to the fact that "Helith" was founded in Germany, and thus an ancient German word was chosen for the group's name. Some of the founding members of Helith shared this belief and talked during a Chaos Communication Congress in 1998-1999 in Germany, at an improvised round-table conference, about what needed to be done to reach this goal. At the conference Rembrandt was chosen to be the public link for Helith, thus making him the poster child and a target for federal authorities. It is not known who is a member of "Helith", how many members exist, or what they do in detail, because very little information is made public and Rembrandt himself is a very problematic character, much like Theo de Raadt.
History
Helith was founded in 1998-1999 in the Berlin area as a place for its members to share information without judging anybody about how they make their living or for whom they work. Members share computer hardware and work on various projects such as John the Ripper (partly SSE2 code, porting to OpenBSD), Metasploit, Medusa, Hydra and Nmap. Over time, the members of "Helith" released several security advisories affecting even OpenBSD, widely regarded as one of the most secure open-source operating systems, as well as the PF firewall, OpenSSH, NetBSD and vendors such as Netgear or Nortel. On July 30, 2007, Washington Post reporter Brian Krebs wrote an article partly about "Helith" cracking the Deutsche Bank internal network. The global links of Helith reach at least from Germany, where it was founded, to Russia, Romania, Colombia, several African countries and the USA.
Members
Helith Network membership varied but included at various times: benkei, ConCode, Cyneox, Rembrandt, Rott_En, noptrix, Skyout, Zarathu. Many other members may be active but are not disclosed. The list was created during research with Google and by visiting the Helith website.
External links Washington Post article Current Helith Website ExploitDB posted Advisory of Helith about PF NetBSD security Advisory about a Bug in PF ExploitDB posted Advisory of Helith about a common WiMax router Hacker groups
25983932
https://en.wikipedia.org/wiki/David%20R.%20Heise
David R. Heise
David Reuben Jerome Heise (born March 15, 1937, died September 28, 2021) was a social psychologist who originated the idea that affectual processes control interpersonal behavior. He contributed to both quantitative and qualitative methodology in sociology. He retired from undergraduate teaching in 2002, but continued research and graduate student consulting as Rudy Professor of Sociology Emeritus at Indiana University. He is most well known for his work on affect control theory. Heise died on September 28, 2021, after a brief illness. Education and career Heise was born in Evanston, Illinois. He attended Illinois Institute of Technology from 1954 to 1956, and then transferred to the University of Missouri School of Journalism where he received a B.J. degree in 1958. Additionally he received an A.B. in Mathematics and the Physical Sciences from the University of Missouri in 1959. Heise joined the Laboratories for Applied Sciences at the University of Chicago, on the non-public top floor of the Museum of Science and Industry (Chicago), as a technical writer. His first publication was a by-lined full page report in the Chicago Sun-Times concerning a 1960 high-temperature physics conference held by the Laboratories. After beginning graduate courses related to communications studies in 1961, Heise's interests generalized to social psychology (sociology), and he became a fellow in a National Institute of Mental Health (NIMH) training program directed by University of Chicago sociologist Fred Strodtbeck. While a graduate student in the University of Chicago Sociology Department (the Chicago School), he studied with Elihu Katz, James A. Davis, Peter Blau, Edward Shils, Otis Dudley Duncan, Peter Rossi, and Leo Goodman. He received his M.A. in 1962 and his Ph.D. in 1964. From 1963 to 1969, Heise served as instructor, post-doctoral fellow, and assistant professor at the University of Wisconsin–Madison. He worked for two years as an associate professor at Queens College, City University of New York, where his colleague Patricia Kendall linked him to her husband Paul Lazarsfeld. From 1971 to 1981 he was professor of sociology at the University of North Carolina at Chapel Hill, where he directed an NIMH training program in sociological methodology, and began his signature research on affect control theory. After joining the sociology department at Indiana University in 1981, he directed another NIMH training program in methodology from 1988 to 1993, and was awarded a James H. Rudy Endowed Professorship in 1990. Impression Formation and Affect Control Heise works extensively with Charles E. Osgood's semantic differential for measuring affective associations of words (connotative meanings). His dissertation included semantic differential measurements for 1,000 frequent English words, and he and his students compiled four more lexicons in the United States since then, each containing affective measurements for 1,250 or more English words. Heise used the affective measurements to begin quantitative studies of impression formation while at the University of Wisconsin. The impression formation research seeks empirically based equations for predicting how various kinds of events influence individuals' feelings about people, behaviors, and settings. This research employs structural equation modeling and path analysis (statistics), as discussed by Heise in his book Causal Analysis. 
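As a concrete illustration of the quantities this line of research works with, the sketch below represents connotative meanings as three-number EPA (evaluation, potency, activity) profiles of the kind produced by semantic differential ratings, and computes the deflection between enduring ("fundamental") sentiments and transient impressions, the quantity that affect control theory (discussed below) treats as what interaction works to keep small. The numeric values and names are invented placeholders, not figures from Heise's published lexicons or equations.

```python
# Illustrative sketch: EPA profiles and deflection as used in affect control theory.
# All numeric values are invented placeholders, not measurements from Heise's lexicons.

# Fundamental sentiments: enduring evaluation-potency-activity ratings for concepts.
FUNDAMENTALS = {
    "mother": (2.7, 1.9, 0.6),    # hypothetical EPA profile
    "scold":  (-1.2, 0.9, 1.1),
    "child":  (1.8, -0.9, 1.9),
}

def deflection(fundamental, transient):
    """Sum of squared differences between fundamental and transient EPA profiles."""
    return sum((f - t) ** 2 for f, t in zip(fundamental, transient))

# A transient impression of "mother" after some event; in the full theory this is
# produced by empirically estimated impression-formation equations (placeholders here).
transient_mother = (1.1, 1.5, 1.0)

print(deflection(FUNDAMENTALS["mother"], transient_mother))
# Affect control theory predicts that actors choose behaviors keeping such deflections small.
```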
In 1978 Heise reported research results regarding affective meanings and impression formation in Computer-Assisted Analysis of Social Action. An account of similar work over the prior century was provided in his 2010 book Surveying Cultures. At the University of North Carolina, Heise began work on affect control theory, a cybernetic approach to impression management through interpersonal action. An oral presentation of the theory was given in 1972, an article on the theory was published in 1977, and a book, Understanding Events, appeared in 1979. NIMH research funding during the late 1970s supported data collection for a variety of graduate student projects related to affect control theory. The results of these projects were reported in a book, Analyzing Social Interaction, edited by Heise and his student Lynn Smith-Lovin. After Heise moved to Indiana University, he and his students, and others, continued work on impression formation and on affect control theory. Heise expanded methods for measuring affective meanings to computer-assisted personal interviewing. He prepared programs in the Java programming language to collect data over the Internet, and to publish his computer simulation system for obtaining and examining predictions from affect control theory on the World Wide Web. Heise summarized the multiple lines of research on affect control theory, delineated the theory's mathematical model, and provided a readable introduction to the theory in his 2007 book, Expressive Order. He has discussed how affect control theory's computational model of emotional facial expressions can facilitate the creation of emoting machines (affective computing). Affect control theory (ACT) has been acclaimed both by sociologists and psychologists. Thomas Fararo discussed the theory as follows. "Heise employs a control system model. The basic idea is that momentary affective states are under the control of more enduring sentiments. His 'fundamental sentiments' are instantiations of operative ideals, whereas his momentary affective meaning states are the results of read inputs. These are compared with the fundamental sentiments from moment to moment in the affect control process. Behavior is the control of affect via the feedback loop. Undoubtedly this is the best developed empirically applicable cybernetic model in the history of theoretical sociology." In an essay on the sociology of emotions, T. David Kemper wrote, "Indubitably, Heise has the most methodologically rigorous program of all sociologists, with the added attraction of its mathematical precision. ... Using the cultural meanings of its constituent terms, and combinations of terms, as the raw materials, ACT is, if nothing else, a simulation program par excellence. It can formulate both emotional outcomes of situations and situational outcomes of emotions in a manner that is more efficient than any other presently available in either sociology or psychology." Psychologists Gerald Clore and Jesse Pappas discussed affect control theory as follows. "Ideas about settings, identities, actions, and emotions are part of the fabric of sociology and social psychology. Innumerable theories offer explanations for how subsets of these elements are related in particular contexts, but in lieu of such a piecemeal approach, Heise offers a general explanation for the entire set of relationships. His account, moreover, is formalized in equations and implemented in a computer program capable of making numerical predictions about ongoing human interactions. 
This is an astounding achievement. By comparison, the rest of us work on modest problems with blunt instruments." Event Structure Analysis At Indiana University Heise began a second program of research on interpersonal action, this one emphasizing rationality rather than affect. Building on production systems in cognitive science, especially as applied by other sociologists, Heise developed a framework called Event Structure Analysis for analyzing reiterative social processes. The approach posits that later events are linked logically to earlier events in networks of prerequisite implications, and that narratives about incidents implicitly communicate this underlying logical structure. The analytic problem is to draw the implicit logical structure out of the narrative into an explicit model characterizing the incident and similar happenings. Heise proposed having culture experts accomplish this by judging which events in a given narrative were prerequisites for others. However, judging logical priority for all pairs of events in a lengthy narrative would be overwhelming, so Heise created a computer program to elicit answers, to process prior answers logically in order to minimize queries, and to draw a graphical representation of the logical network that implicitly underlies the analyzed narrative. The computer program has been applied by sociological ethnographers, social historians, and organization researchers. Event structure analysis is an important part of the growing field of social sequence analysis. Macrosociology contributions With Gerhard Lenski and John Wardwell, Heise published a quantitative analysis of sociocultural evolution. With student Alex, Heise developed the concept of macroaction for analyzing organizational processes. Student Steven Lerner and Heise demonstrated that international interactions have a substantial affective basis. An empirical basis for analyzing semantic networks in order to identify major social institutions and their constituent roles was developed in a book by Heise and Neil MacKinnon. The approach was applied with multi-method analyses to identify the major social institutions of contemporary American society in Heise's book Cultural Meanings and Social Institutions: Social Organization Through Language. Honors and offices Heise was a Guggenheim Fellow in 1977, and a research fellow of the Japan Society for the Promotion of Science in 1990. He served as editor of Sociological Methodology from 1974 to 1976, and as editor of Sociological Methods & Research from 1980 to 1983. Heise was chair of the Microcomputing Section of the American Sociological Association (ASA) in 1990 and 1991, and he was chair of the ASA's Mathematical Sociology Section in 2003 and 2004. He received the Microcomputing Section's Award for Outstanding Contributions to Computing in 1995. The ASA's Social Psychology Section gave him its Cooley-Mead Award in 1998, the Sociology of Emotions Section gave him its Lifetime Achievement Award in 2002, and the Mathematical Sociology Section gave him both its James S. Coleman Distinguished Career Award and its Harrison White Outstanding Book Award in 2010. The International Academy for Intercultural Research presented him with its Lifetime Achievement Award in 2013. In 2017 the ASA's Section on Methodology recognized Heise's contributions to sociological methodology with its Paul F. Lazarsfeld Award. Notes References Heise, David (1997). Interact On-Line (Java applet).
External links Copies of the original webpages from Dave Heise's Indiana University page, maintained by Jesse Hoey Affect Control Theory website Event Structure Analysis website new ACT site American sociologists American psychologists Missouri School of Journalism alumni People from Evanston, Illinois 1937 births 2021 deaths
2193470
https://en.wikipedia.org/wiki/Ben%20Daglish
Ben Daglish
Ben Daglish (31 July 1966 – 1 October 2018) was an English composer and musician. Born in London, he moved with his parents to Sheffield when he was one year old. He was known for creating many soundtracks for home computer games during the 1980s, including The Last Ninja, Trap, Krakout, and Deflektor. Daglish teamed up with fellow C64 musician and prolific programmer Tony Crowther, forming W.E.M.U.S.I.C., which stood for "We Make Use of Sound in Computers". Daglish had attended the same school as Crowther. Daglish mostly worked freelance but was employed by Gremlin Graphics for a couple of years. Biography Daglish lived in Derbyshire where he composed, played and performed in a number of UK bands, including Loscoe State Opera. He also regularly performed with violinist Mark Knight and the band SID80s at retro computer game events such as Back in Time Live and Retrovision. He had also performed with Commodore 64 revival band Press Play On Tape together with Rob Hubbard. He was a fan of the late Ronnie Hazlehurst, a prolific composer for television. He died of complications from lung cancer on 1 October 2018. Compositions Amstrad CPC Dark Fusion (1988 – Gremlin Graphics Software) Deflektor (1987 – Vortex Software) H.A.T.E. – Hostile All Terrain Encounter (1989 – Vortex Software) Mask (1987 – Gremlin Graphics Software) Mask II (1988 – Gremlin Graphics Software) Masters of the Universe (Les Maitres De L'Univers) (1987 – Gremlin Graphics Software) North Star (1988 – Gremlin Graphics Software) Skate Crazy (1988 – Gremlin Graphics Software) Supercars (1990 – Gremlin Graphics Software) Switch Blade (1990 – Gremlin Graphics Software) Terramex Cosmic Relief : Prof. Renegade to the Rescue (1988 – Grandslam) The Real Stunt Experts (1989 – Alternative Software) Thing Bounces Back (1987 – Gremlin Graphics Software) Atari ST 3D Galax (1987) Action Fighter (1986) Artura (1988) Axel's Magic Hammer (1989) Blasteroids (1989) Butcher Hill (1989) California Games (1989) Captain America - Defies the Doom Tube (1988) Chase H.Q. (1989) Chubby Gristle (1988) Continental Circus (1989) Cosmic Relief (1987) Dark Fusion (1988) Deflektor (1988) Dynamite Düx (1988) FoFT - Federation of Free Traders (1989) Footballer of the Year 2 (1989) Gary Lineker's Hot Shots Greg Norman's Ultimate Golf (1990) H.A.T.E.
– Hostile All Terrain Encounter (1989) Hot Rod (1990) John Lowe's Ultimate Darts (1989) Kingmaker (1993) Legends of Valour (1993) Lotus Esprit Turbo Challenge (1990) Masters of the Universe (1988) Mickey Mouse: The Computer Game (1988) Monty Python's Flying Circus (1990) Motor Massacre (1988) Motörhead (1992) North Star (1988) Pac-Mania (1989) Passing Shot (1988) Prison (1989) Rick Dangerous (1989) Rick Dangerous 2 (1990) Road Raider (1988) Saint & Greavsie (1989) Skidz (1990) Super Cars (1989) Super Scramble Simulator (1989) Switchblade (1989) Terramex (1987) The Flintstones (1988) The Munsters (1988) The Running Man (1989) Thunderbirds (1989) Wizard Warz (1987) Xybots (1989) Commodore 64 720° Ark Pandora Alternative World Games Artura Auf Wiedersehen Monty (with Rob Hubbard) Avenger Biggles Blasteroids Blood Brothers Blood Valley Bobby Bearing Bulldog Bombo Challenge of the Gobots Chubby Gristle Cobra (arrangement of the unused movie theme "Skyline" by Sylvester Levay) Dark Fusion Death Wish 3 (1987) Defenders of the Earth Deflektor Dogfight 2187 Firelord (1986) Footballer of the Year Footballer of the Year 2 Future Knight Future Knight II Gary Lineker's Hot Shot Gary Lineker's Super Skills Gauntlet and Gauntlet II Greg Norman's Ultimate Golf Hades Nebula Harvey Headbanger He-Man and the Masters of the Universe Heroes of the Lance Jack the Nipper Jack the Nipper II Kettle Killer-Ring Krakout L.O.C.O. Mask III – Venom Strikes Back Mickey Mouse Mountie Mick's Death Ride Munsters Northstar Olli and Lissa Pac-Mania Percy the Potty Pigeon Potty Pidgeon (Death tune only) Pub Games Re-Bounder Real Stunt Experts Return of the Mutant Camels Skate Crazy SkateRock Super Cars Supersports Switchblade TechnoCop They Stole a Million Thing Bounces Back Terramex The Flintstones The Last Ninja (with Anthony Lees) Trap Vikings Way of the Tiger William Wobbler Wizard Warz Zarjaz Source: The High Voltage SID Collection Commodore Amiga Artura (1989) Chubby Gristle (1988) Deflektor (1988) Federation of Free Traders (1989) Pac-Mania (1988, re-arrangement of arcade game tunes) Switchblade (1989) Corporation (1990) Super Cars (1990) Sinclair ZX Spectrum Artura (1989) Auf Wiedersehen Monty (1987) Avenger (1986) Blasteroids (1987) Blood Brothers (1988) Blood Valley (1987) Butcher Hill (1989) Challenge of the Gobots (1987) Chubby Gristle (1988) Dark Fusion (1988) Death Wish 3 (1987) Deflektor (1988) The Flintstones (1988) Footballer of the Year (1987) Future Knight (1987) Gary Lineker's Hot Shots (1988) Gary Lineker's Super Skills (1988) Gauntlet 2 (1988) H.A.T.E. – Hostile All Terrain Encounter (1989) Jack the Nipper 2: in Coconut Capers (1987) Krakout (1987) Mask 1, Mask 2 (1988) MASK III: Venom Strikes Back (1988) Masters of the Universe (1987) Mickey Mouse (1988) Moley Christmas (1987) Motor Massacre (1989) Mountie Mick's Death Ride North Star (1988) Pacmania (1988) The Real Stunt Experts Skate Crazy (1988) Super Scramble Simulator (1989) Super Sports Switchblade (1991) Techno Cop (1988) Terramex (1988) Thing Bounces Back (1987) Trap (128k) (1985) Wizard Wars References External links Homepage Artist profile at OverClocked ReMix C64Audio.com Publisher and record label for Daglish's Commodore 64 music Remix64's Interview with Ben Daglish English podcast interview from retrokompott.de Profile at MobyGames 1966 births 2018 deaths Musicians from London Commodore 64 music English electronic musicians Video game composers Deaths from lung cancer
591892
https://en.wikipedia.org/wiki//dev/zero
/dev/zero
/dev/zero is a special file in Unix-like operating systems that provides as many null characters (ASCII NUL, 0x00) as are read from it. One of the typical uses is to provide a character stream for initializing data storage. Function Read operations from /dev/zero return as many null characters (0x00) as requested in the read operation. Unlike /dev/null, /dev/zero may be used as a source, not only as a sink for data. All write operations to /dev/zero succeed with no other effects; however, /dev/null is more commonly used for this purpose. When /dev/zero is memory-mapped, e.g., with mmap, to the virtual address space, it is equivalent to using anonymous memory; i.e. memory not connected to any file. History /dev/zero was introduced in 1988 by SunOS 4.0 in order to allow a mappable BSS segment for shared libraries using anonymous memory. HP-UX 8.x introduced the MAP_ANONYMOUS flag for mmap(), which maps anonymous memory directly without a need to open /dev/zero. Since the late 1990s, MAP_ANONYMOUS or MAP_ANON has been supported by most UNIX versions, removing the original purpose of /dev/zero. Examples The dd Unix utility program reads octet streams from a source to a destination, possibly performing data conversions in the process. Destroying existing data on a file system partition by overwriting it with zeros: dd if=/dev/zero of=/dev/<destination partition> Creating a 1 MiB file, called foobar, filled with null characters: dd if=/dev/zero of=foobar count=1024 bs=1024 Note: The block size value can be given in SI (decimal) values, e.g. in GB, MB, etc. To create a 1 GB file one would simply type: dd if=/dev/zero of=foobar count=1 bs=1GB Note: Instead of creating a real file with only zero bytes, many file systems also support the creation of sparse files, which return zeros upon reading but use less actual space. See also Unix philosophy Standard streams References Unix file system technology Device file
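To illustrate the memory-mapping use described in the Function and History sections above, the following is a minimal sketch in C (an illustrative example, not taken from the article's sources): it maps /dev/zero to obtain a page of zero-filled, private (copy-on-write) memory, the historical idiom that MAP_ANONYMOUS later made unnecessary.

/* Sketch: anonymous memory obtained by mapping /dev/zero (illustrative only). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 4096;                                /* one page */
    int fd = open("/dev/zero", O_RDWR);
    if (fd == -1) { perror("open"); return 1; }

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);                                        /* the mapping survives the close */

    printf("first byte: %d\n", p[0]);                 /* prints 0: the region starts zero-filled */
    strcpy(p, "hello");                               /* writes go to the private copy, not to any file */
    printf("contents: %s\n", p);

    munmap(p, len);
    return 0;
}

On systems that support it, the open() call and the file descriptor can be dropped entirely by passing MAP_ANONYMOUS (or MAP_ANON) and a file descriptor of -1 to mmap, as described in the History section.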
64325573
https://en.wikipedia.org/wiki/Sapna%20Cheryan
Sapna Cheryan
Sapna Cheryan (born 1978) is an American social psychologist. She is a full professor of social psychology in the Department of Psychology at the University of Washington. Early life and education Cheryan was born to financial aid administrator mother Leela Cheryan and research professor father Munir Cheryan in Chicago, Illinois. Growing up, she became interested in topics revolving around race, gender, and equality. She earned her Bachelor of Arts degree in Psychology and American Studies from Northwestern University before enrolling at Stanford University for her PhD. As a graduate student, she began to notice that the atmosphere of working or learning environments could directly influence one's choice to join a field. This led her to develop her thesis, titled Strategies of belonging: defending threatened identities. After graduating from Stanford, Cheryan married Giri Shivaram in 2008. Giri Shivaram is an interventional radiologist at Seattle Children’s Hospital. Career Upon earning her PhD, Cheryan immediately joined the faculty of Psychology at the University of Washington (UW) with a specific focus on gendered stereotypes and prejudices. She co-founded UW's Debunking Stereotypes Workshop with students Amanda Tose, Marissa Vichayapai, and Lauren Hudson to encourage more women to join science, technology, engineering, and mathematics (STEM) fields. Cheryan also led a research project that used statistics from the National Science Foundation (NSF) to show that negative stereotypes of computer scientists could result in fewer women joining the field. As a result of her research, she received the NSF's 2009 Junior Faculty Career Award for "outstanding research, excellent education and the integration of education and research within the context of the mission of their organizations." She also earned the 2011 American Association of University Women Named Honoree for her efforts to achieve equity for women in science-based fields. During the 2012–13 academic year, Cheryan conducted three studies in New York as a Russell Sage Foundation fellow. The research project focused on the effects anti-American stereotypes had on immigrant groups in America. She also studied the negative effects that geeky male nerd stereotypes portrayed in the media had on women joining STEM fields. Cheryan and her research colleagues conducted two studies on the female undergraduate populations attending UW and Stanford University, first asking them to describe computer science majors and then asking them to read a fabricated newspaper article. At the conclusion of the study, Cheryan concluded that women were more likely than men to be influenced by negative stereotypes surrounding STEM fields. The following year, Cheryan was invited to the White House by then-President Barack Obama after it was decided to create a “computer science classroom design prize” in her honor. In 2015, Cheryan continued her research into stereotypes by returning to Stanford to conduct another study, this time focusing on males' perceived masculinity. She used falsified feedback to suggest to her male participants, who were squeezing a handheld device, that their grip strength was average or weaker than that of their female counterparts. She followed up her experiment by asking health and body-related questions, during which she noticed men often exaggerated their height to seem more masculine.
Upon realizing this, she conducted a second male-focused group study in which students answered a masculinity test with multiple-choice questions about consumer preferences and personal attributes. Those who scored lower on the test (although all scores were randomly assigned) tended to overcompensate by choosing more stereotypically masculine consumer products as compensation for their time. She also led a female-focused study where she asked undergraduate students to interact with male and female actors who pretended to be computer science majors. Half of the participants interacted with actors who fit the nerdy, geeky computer scientist stereotype and claimed to enjoy solitary hobbies, while the others interacted with actors dressed and acting like "typical college students." The results of the study found that women were influenced more by the stereotypes surrounding computer science than by the gender of the actors. During the 2016–17 year, Cheryan continued to conduct various studies on how stereotypes directly divert young girls from pursuing a career in STEM. Cheryan and her colleagues found that the culture of STEM and a lack of encouragement for women to focus on math and science were the main causes of the gender gap in STEM fields. She also led a study titled Gay Asian Americans Are Seen as More American Than Asian Americans Who Are Presumed Straight, which found that Americans perceived homosexual Asian Americans to be more likely to speak fluent English than those whose sexual identity was not specified. Cheryan also received a visiting fellowship position in communications at the Center for Advanced Study in the Behavioral Sciences at Stanford University during the academic year. As a result of her research on gender, STEM, and female stereotypes, Cheryan was approached by Mattel in the spring of 2018 to advise on their latest Barbie dolls. She was appointed to their 12-person Barbie Global Advisory Council in order to "help inform and refine Barbie brand initiatives." During the summer of 2019, Cheryan was promoted from associate professor to full professor of social psychology in the Department of Psychology. Notes References External links Living people 1978 births University of Washington faculty Stanford University alumni Northwestern University alumni American social psychologists 21st-century psychologists 21st-century American women scientists Place of birth missing (living people) American women academics
505218
https://en.wikipedia.org/wiki/Register%20machine
Register machine
In mathematical logic and theoretical computer science a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All the models are Turing equivalent. Overview The register machine gets its name from its use of one or more "registers". In contrast to the tape and head used by a Turing machine, the model uses multiple, uniquely addressed registers, each of which holds a single non-negative integer. There are at least four sub-classes found in the literature, here listed from the most primitive to the most computer-like: Counter machine – the most primitive and reduced theoretical model of computer hardware. Lacks indirect addressing. Instructions are in the finite state machine in the manner of the Harvard architecture. Pointer machine – a blend of counter machine and RAM models. Less common and more abstract than either model. Instructions are in the finite state machine in the manner of the Harvard architecture. Random-access machine (RAM) – a counter machine with indirect addressing and, usually, an augmented instruction set. Instructions are in the finite state machine in the manner of the Harvard architecture. Random-access stored-program machine model (RASP) – a RAM with instructions in its registers, analogous to the Universal Turing machine; thus it is an example of the von Neumann architecture. But unlike a computer, the model is idealized with effectively infinite registers (and, if used, effectively infinite special registers such as an accumulator). Unlike a computer or even RISC, the instruction set is much reduced in number. Any properly defined register machine model is Turing equivalent. Computational speed is very dependent on the model specifics. In practical computer science, a similar concept known as a virtual machine is sometimes used to minimise dependencies on underlying machine architectures. Such machines are also used for teaching. The term "register machine" is sometimes used to refer to a virtual machine in textbooks. Formal definition A register machine consists of: Labeled, discrete registers of unbounded extent (capacity): a finite (or infinite, in some models) set of registers, each of which holds a single non-negative integer (0, 1, 2, ...) of unbounded size. The registers may do their own arithmetic, or there may be one or more special registers that do the arithmetic, e.g. an "accumulator" and/or "address register". See also Random-access machine. Tally counters or marks: discrete, indistinguishable objects or marks of only one sort suitable for the model. In the most-reduced counter machine model, for each arithmetic operation only one object/mark is either added to or removed from its location/tape. In some counter machine models (e.g. Melzak (1961), Minsky (1961)) and most RAM and RASP models more than one object/mark can be added or removed in one operation with "addition" and usually "subtraction"; sometimes with "multiplication" and/or "division". Some models have control operations such as "copy" (variously: "move", "load", "store") that move "clumps" of objects/marks from register to register in one action. A (very) limited set of instructions: the instructions tend to divide into two classes: arithmetic and control. The instructions are drawn from the two classes to form "instruction-sets", such that an instruction set must allow the model to be Turing equivalent (it must be able to compute any partial recursive function).
Arithmetic: arithmetic instructions may operate on all registers or on just a special register (e.g. accumulator). They are usually chosen from the following sets (but exceptions abound): Counter machine: { Increment (r), Decrement (r), Clear-to-zero (r) } Reduced RAM, RASP: { Increment (r), Decrement (r), Clear-to-zero (r), Load-immediate-constant k, Add (r1,r2), proper-Subtract (r1,r2), Increment accumulator, Decrement accumulator, Clear accumulator, Add to accumulator contents of register r, proper-Subtract from accumulator contents of register r } Augmented RAM, RASP: All of the reduced instructions plus: { Multiply, Divide, various Boolean bit-wise (left-shift, bit test, etc.) } Control: Counter machine models: optional { Copy (r1,r2) } RAM and RASP models: most have { Copy (r1,r2) }, or { Load Accumulator from r, Store accumulator into r, Load Accumulator with immediate constant } All models: at least one conditional "jump" (branch, goto) following test of a register, e.g. { Jump-if-zero, Jump-if-not-zero (i.e. Jump-if-positive), Jump-if-equal, Jump-if-not-equal } All models optional: { unconditional program jump (goto) } Register-addressing method: Counter machine: no indirect addressing, immediate operands possible in highly atomized models RAM and RASP: indirect addressing available, immediate operands typical Input-output: optional in all models State register: A special Instruction Register "IR", finite and separate from the registers above, stores the current instruction to be executed and its address in the TABLE of instructions; this register and its TABLE are located in the finite state machine. The IR is off-limits to all models. In the case of the RAM and RASP, for purposes of determining the "address" of a register, the model can select either (i) in the case of direct addressing—the address specified by the TABLE and temporarily located in the IR or (ii) in the case of indirect addressing—the contents of the register specified by the IR's instruction. The IR is not the "program counter" (PC) of the RASP (or conventional computer). The PC is just another register similar to an accumulator, but dedicated to holding the number of the RASP's current register-based instruction. Thus a RASP has two "instruction/program" registers—(i) the IR (finite state machine's Instruction Register), and (ii) a PC (Program Counter) for the program located in the registers. (As well as a register dedicated to "the PC", a RASP may dedicate another register to "the Program-Instruction Register", going by any number of names such as "PIR", "IR", "PR", etc.) List of labeled instructions, usually in sequential order: A finite list of instructions. In the case of the counter machine, random-access machine (RAM) and pointer machine, the instruction store is in the "TABLE" of the finite state machine; thus these models are examples of the Harvard architecture. In the case of the RASP the program store is in the registers; thus this is an example of the von Neumann architecture. See also Random-access machine and Random-access stored-program machine. Usually, like computer programs, the instructions are listed in sequential order; unless a jump is successful the default sequence continues in numerical order. An exception to this is the abacus counter machine model (Lambek (1961), Minsky (1961))—every instruction has at least one "next" instruction identifier "z", and the conditional branch has two. Observe also that the abacus model combines two instructions, JZ then DEC: e.g.
{ INC ( r, z ), JZDEC ( r, ztrue, zfalse ) }. See McCarthy Formalism for more about the conditional expression "IF r=0 THEN ztrue ELSE zfalse" (cf McCarthy (1960)). Historical development of the register machine model Two trends appeared in the early 1950s—the first to characterize the computer as a Turing machine, the second to define computer-like models—models with sequential instruction sequences and conditional jumps—with the power of a Turing machine, i.e. a so-called Turing equivalence. The need for this work arose in the context of two "hard" problems: the unsolvable word problem posed by Emil Post—his problem of "tag"—and the very "hard" problem among Hilbert's problems—the tenth, concerning Diophantine equations. Researchers were questing for Turing-equivalent models that were less "logical" in nature and more "arithmetic" (cf Melzak (1961) p. 281, Shepherdson–Sturgis (1963) p. 218). The first trend—toward characterizing computers—seems to have originated with Hans Hermes (1954), Rózsa Péter (1958), and Heinz Kaphengst (1959), the second trend with Hao Wang (1954, 1957) and, as noted above, furthered by Zdzislaw Alexander Melzak (1961), Joachim Lambek (1961) and Marvin Minsky (1961, 1967). The last five names are listed explicitly in that order by Yuri Matiyasevich. He follows up with: "Register machines [some authors use "register machine" synonymous with "counter-machine"] are particularly suitable for constructing Diophantine equations. Like Turing machines, they have very primitive instructions and, in addition, they deal with numbers" (Yuri Matiyasevich (1993), Hilbert's Tenth Problem, commentary to Chapter 5 of the book, at http://logic.pdmi.ras.ru/yumat/H10Pbook/commch_5htm.) It appears that Lambek, Melzak, Minsky and Shepherdson and Sturgis independently anticipated the same idea at the same time. See the note on precedence below. The history begins with Wang's model. Wang's (1954, 1957) model: Post–Turing machine Wang's work followed from Emil Post's (1936) paper and led Wang to his definition of his Wang B-machine—a two-symbol Post–Turing machine computation model with only four atomic instructions: { LEFT, RIGHT, PRINT, JUMP_if_marked_to_instruction_z } To these four both Wang (1954, 1957) and then C.Y. Lee (1961) added another instruction from the Post set { ERASE }, and then Post's unconditional jump { JUMP_to_instruction_z } (or, to make things easier, the conditional jump JUMP_IF_blank_to_instruction_z, or both). Lee named this a "W-machine" model: { LEFT, RIGHT, PRINT, ERASE, JUMP_if_marked, [maybe JUMP or JUMP_IF_blank] } Wang expressed hope that his model would be "a rapprochement" (p. 63) between the theory of Turing machines and the practical world of the computer. Wang's work was highly influential. We find him referenced by Minsky (1961) and (1967), Melzak (1961), Shepherdson and Sturgis (1963). Indeed, Shepherdson and Sturgis (1963) remark that: "...we have tried to carry a step further the 'rapprochement' between the practical and theoretical aspects of computation suggested by Wang" (p. 218). Martin Davis eventually evolved this model into the (2-symbol) Post–Turing machine. Difficulties with the Wang/Post–Turing model: Except there was a problem: the Wang model (the six instructions of the 7-instruction Post–Turing machine) was still a single-tape Turing-like device, however nice its sequential program instruction-flow might be.
Both Melzak (1961) and Shepherdson and Sturgis (1963) observed this (in the context of certain proofs and investigations): "...a Turing machine has a certain opacity... a Turing machine is slow in (hypothetical) operation and, usually, complicated. This makes it rather hard to design it, and even harder to investigate such matters as time or storage optimization or a comparison between efficiency of two algorithms." (Melzak (1961) p. 281) "...although not difficult ... proofs are complicated and tedious to follow for two reasons: (1) A Turing machine has only one head so that one is obliged to break down the computation into very small steps of operations on a single digit. (2) It has only one tape so that one has to go to some trouble to find the number one wishes to work on and keep it separate from other numbers" (Shepherdson and Sturgis (1963) p. 218). Indeed, as examples at Turing machine examples, Post–Turing machine and partial function show, the work can be "complicated". Minsky, Melzak-Lambek and Shepherdson–Sturgis models "cut the tape" into many So why not 'cut the tape' so each is infinitely long (to accommodate any size integer) but left-ended, and call these three tapes "Post–Turing (i.e. Wang-like) tapes"? The individual heads will move left (for decrement) and right (for increment). In one sense the heads indicate "the tops of the stack" of concatenated marks. Or in Minsky (1961) and Hopcroft and Ullman (1979, p. 171ff) the tape is always blank except for a mark at the left end—at no time does a head ever print or erase. We just have to be careful to write our instructions so that a test-for-zero and jump occurs before we decrement; otherwise our machine will "fall off the end" or "bump against the end"—we will have an instance of a partial function. Before a decrement our machine must always ask the question: "Is the tape/counter empty? If so then I can't decrement, otherwise I can." Minsky (1961) and Shepherdson–Sturgis (1963) prove that only a few tapes—as few as one—still allow the machine to be Turing equivalent IF the data on the tape is represented as a Gödel number (or some other uniquely encodable-decodable number); this number will evolve as the computation proceeds. In the one-tape version with Gödel number encoding the counter machine must be able to (i) multiply the Gödel number by a constant (numbers "2" or "3"), and (ii) divide by a constant (numbers "2" or "3") and jump if the remainder is zero. Minsky (1967) shows that the need for this bizarre instruction set can be relaxed to { INC (r), JZDEC (r, z) } and the convenience instructions { CLR (r), J (r) } if two tapes are available. A simple Gödelization is still required, however. A similar result appears in Elgot–Robinson (1964) with respect to their RASP model. Melzak's (1961) model is different: clumps of pebbles go into and out of holes Melzak's (1961) model is significantly different. He took his own model, flipped the tapes vertically, and called them "holes in the ground" to be filled with "pebble counters". Unlike Minsky's "increment" and "decrement", Melzak allowed for proper subtraction of any count of pebbles and "adds" of any count of pebbles. He defines indirect addressing for his model (p. 288) and provides two examples of its use (p. 89); his "proof" (p. 290-292) that his model is Turing equivalent is so sketchy that the reader cannot tell whether or not he intended the indirect addressing to be a requirement for the proof.
The legacy of Melzak's model is Lambek's simplification and the reappearance of his mnemonic conventions in Cook and Reckhow 1973. Lambek (1961) atomizes Melzak's model into the Minsky (1961) model: INC and DEC-with-test Lambek (1961) took Melzak's ternary model and atomized it down to the two unary instructions—X+, X- if possible else jump—exactly the same two that Minsky (1961) had come up with. Like the Minsky (1961) model, the Lambek model executes its instructions in a default-sequential manner—both X+ and X- carry the identifier of the next instruction, and X- also carries the jump-to instruction if the zero-test is successful. Elgot–Robinson (1964) and the problem of the RASP without indirect addressing A RASP or random-access stored-program machine begins as a counter machine with its "program of instruction" placed in its "registers". Analogous to, but independent of, the finite state machine's "Instruction Register", at least one of the registers (nicknamed the "program counter" (PC)) and one or more "temporary" registers maintain a record of, and operate on, the current instruction's number. The finite state machine's TABLE of instructions is responsible for (i) fetching the current program instruction from the proper register, (ii) parsing the program instruction, (iii) fetching operands specified by the program instruction, and (iv) executing the program instruction. Except there is a problem: if it is based on the counter machine chassis, this computer-like von Neumann machine will not be Turing equivalent. It cannot compute everything that is computable. Intrinsically the model is bounded by the size of its (very-) finite state machine's instructions. The counter machine based RASP can compute any primitive recursive function (e.g. multiplication) but not all mu recursive functions (e.g. the Ackermann function). Elgot–Robinson investigate the possibility of allowing their RASP model to "self modify" its program instructions. The idea was an old one, proposed by Burks-Goldstine-von Neumann (1946-7), and sometimes called "the computed goto." Melzak (1961) specifically mentions the "computed goto" by name but instead provides his model with indirect addressing. Computed goto: A RASP program of instructions that modifies the "goto address" in a conditional- or unconditional-jump program instruction. But this does not solve the problem (unless one resorts to Gödel numbers). What is necessary is a method to fetch the address of a program instruction that lies (far) "beyond/above" the upper bound of the finite state machine's instruction register and TABLE. Example: A counter machine equipped with only four unbounded registers can e.g. multiply any two numbers ( m, n ) together to yield p—and thus compute a primitive recursive function—no matter how large the numbers m and n; moreover, fewer than 20 instructions are required to do this! e.g. { 1: CLR ( p ), 2: JZ ( m, done ), 3: outer_loop: JZ ( n, done ), 4: CPY ( m, temp ), 5: inner_loop: JZ ( temp, end_inner ), 6: DEC ( temp ), 7: INC ( p ), 8: J ( inner_loop ), 9: end_inner: DEC ( n ), 10: J ( outer_loop ), 11: done: HALT } (the inner loop adds a copy of m to p, so m itself is preserved for the next pass of the outer loop). However, with only 4 registers, this machine is not nearly big enough to build a RASP that can execute the multiply algorithm as a program. No matter how big we build our finite state machine there will always be a program (including its parameters) which is larger. So by definition the bounded program machine that does not use unbounded encoding tricks such as Gödel numbers cannot be universal.
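The multiplication example can be made concrete with a short simulation. The following is a minimal sketch in C (an illustrative interpretation, not taken from the papers cited in this article) of a counter machine with the instructions CLR, INC, DEC, CPY, J and JZ, running the four-register multiply program; the numeric jump targets correspond to the step numbers in the program above, and the instruction encoding is an assumption of this example.

/* Sketch of a counter machine running the 4-register multiply program above
   (illustrative only). */
#include <stdio.h>

enum op { CLR, INC, DEC, CPY, J, JZ, HALT };
struct ins { enum op op; int a, b; };        /* a, b: register index or jump target */

enum { M, N, P, TEMP };                      /* the four registers */

int main(void) {
    unsigned long r[4] = { 0 };
    r[M] = 6; r[N] = 7;                      /* compute p = m * n = 42 */

    struct ins prog[] = {
        /*  1 */ { CLR, P },      /*  2 */ { JZ, M, 11 },
        /*  3 */ { JZ, N, 11 },   /*  4 */ { CPY, M, TEMP },
        /*  5 */ { JZ, TEMP, 9 }, /*  6 */ { DEC, TEMP },
        /*  7 */ { INC, P },      /*  8 */ { J, 5 },
        /*  9 */ { DEC, N },      /* 10 */ { J, 3 },
        /* 11 */ { HALT }
    };

    int pc = 1;                              /* steps are numbered from 1, as in the text */
    for (;;) {
        struct ins i = prog[pc - 1];
        switch (i.op) {
        case CLR:  r[i.a] = 0;        pc++; break;
        case INC:  r[i.a]++;          pc++; break;
        case DEC:  r[i.a]--;          pc++; break;  /* in this program DEC is always reached with a non-zero register */
        case CPY:  r[i.b] = r[i.a];   pc++; break;
        case J:    pc = i.a;                break;
        case JZ:   pc = (r[i.a] == 0) ? i.b : pc + 1; break;
        case HALT: printf("p = %lu\n", r[P]); return 0;
        }
    }
}

Run as written, the simulation prints p = 42; changing the initial values of r[M] and r[N] multiplies any other pair of non-negative integers.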
Minsky (1967) hints at the issue in his investigation of counter machines (he calls them "program computer models") equipped with the instructions { CLR (r), INC (r), and RPT ("a" times the instructions m to n) }. He doesn't tell us how to fix the problem, but he does observe that: "... the program computer has to have some way to keep track of how many RPT's remain to be done, and this might exhaust any particular amount of storage allowed in the finite part of the computer. RPT operations require infinite registers of their own, in general, and they must be treated differently from the other kinds of operations we have considered." (p. 214) But Elgot and Robinson solve the problem: They augment their P0 RASP with an indexed set of instructions—a somewhat more complicated (but more flexible) form of indirect addressing. Their P'0 model addresses the registers by adding the contents of the "base" register (specified in the instruction) to the "index" specified explicitly in the instruction (or vice versa, swapping "base" and "index"). Thus the indexing P'0 instructions have one more parameter than the non-indexing P0 instructions: Example: INC ( rbase, index ) ; effective address will be [rbase] + index, where the natural number "index" is derived from the finite-state machine instruction itself. Hartmanis (1971) By 1971 Hartmanis had simplified the indexing to indirection for use in his RASP model. Indirect addressing: A pointer-register supplies the finite state machine with the address of the target register required for the instruction. Said another way: The contents of the pointer-register are the address of the "target" register to be used by the instruction. If the pointer-register is unbounded, the RAM, and a suitable RASP built on its chassis, will be Turing equivalent. The target register can serve either as a source or destination register, as specified by the instruction. Note that the finite state machine does not have to explicitly specify this target register's address. It just says to the rest of the machine: Get me the contents of the register pointed to by my pointer-register and then do xyz with it. It must specify explicitly by name, via its instruction, this pointer-register (e.g. "N", or "72" or "PC", etc.) but it doesn't have to know what number the pointer-register actually contains (perhaps 279,431). Cook and Reckhow (1973) describe the RAM Cook and Reckhow (1973) cite Hartmanis (1971) and simplify his model to what they call a random-access machine (RAM—i.e. a machine with indirection and the Harvard architecture). In a sense we are back to Melzak (1961) but with a much simpler model than Melzak's. Precedence Minsky was working at the MIT Lincoln Laboratory and published his work there; his paper was received for publishing in the Annals of Mathematics on August 15, 1960, but not published until November 1961. Receipt thus occurred a full year before the work of Melzak and Lambek was received and published (received, respectively, May and June 15, 1961, and published side-by-side in September 1961). The facts that (i) both were Canadians and published in the Canadian Mathematical Bulletin, (ii) neither would have had reference to Minsky's work because it was not yet published in a peer-reviewed journal, and (iii) Melzak references Wang, and Lambek references Melzak, lead one to hypothesize that their work occurred simultaneously and independently. Almost exactly the same thing happened to Shepherdson and Sturgis.
Their paper was received in December 1961—just a few months after Melzak and Lambek's work was received. Again, they had little (at most one month) or no benefit of reviewing the work of Minsky. They were careful to observe in footnotes that papers by Ershov, Kaphengst and Peter had "recently appeared" (p. 219). These were published much earlier but appeared in the German language in German journals, so issues of accessibility presented themselves. The final paper of Shepherdson and Sturgis did not appear in a peer-reviewed journal until 1963. And as they fairly and honestly note in their Appendix A, the "systems" of Kaphengst (1959), Ershov (1958) and Peter (1958) are all so similar to the results obtained later as to be essentially indistinguishable from a set of the following operations: produce 0, i.e. 0 → n; increment a number, i.e. n+1 → n ("i.e. of performing the operations which generate the natural numbers" (p. 246)); copy a number, i.e. n → m; and an operation to "change the course of a computation", either comparing two numbers or decrementing until 0. Indeed, Shepherdson and Sturgis conclude, "The various minimal systems are very similar" (p. 246). By order of publication date, the works of Kaphengst (1959), Ershov (1958) and Peter (1958) were first. See also Counter machine Counter-machine model Pointer machine Random-access machine Random-access stored-program machine Turing machine Universal Turing machine Turing machine gallery Turing machine examples Wang B-machine Post–Turing machine - description plus examples Algorithm Algorithm characterizations Halting problem Busy beaver Stack machine WDR paper computer Bibliography Background texts: The following bibliography of source papers includes a number of texts to be used as background. The mathematics that led to the flurry of papers about abstract machines in the 1950s and 1960s can be found in van Heijenoort (1967)—an assemblage of original papers spanning the 50 years from Frege (1879) to Gödel (1931). Davis (ed.) The Undecidable (1965) carries the torch onward beginning with Gödel (1931) through Gödel's (1964) postscriptum (p. 71); the original papers of Alan Turing (1936-7) and Emil Post (1936) are included in The Undecidable. The mathematics of Church, Rosser and Kleene that appear as reprints of original papers in The Undecidable is carried further in Kleene (1952), a mandatory text for anyone pursuing a deeper understanding of the mathematics behind the machines. Both Kleene (1952) and Davis (1958) are referenced by a number of the papers. For a good treatment of the counter machine see Minsky (1967) Chapter 11 "Models similar to Digital Computers"—he calls the counter machine a "program computer". A recent overview is found in van Emde Boas (1990). A recent treatment of the Minsky (1961)/Lambek (1961) model can be found in Boolos-Burgess-Jeffrey (2002); they reincarnate Lambek's "abacus model" to demonstrate equivalence of Turing machines and partial recursive functions, and they provide a graduate-level introduction to both abstract machine models (counter- and Turing-) and the mathematics of recursion theory. Beginning with the first edition, Boolos-Burgess (1970), this model appeared with virtually the same treatment. The papers: The papers begin with Wang (1957) and his dramatic simplification of the Turing machine. Turing (1936), Kleene (1952), Davis (1958) and in particular Post (1936) are cited in Wang (1957); in turn, Wang is referenced by Melzak (1961), Minsky (1961) and Shepherdson–Sturgis (1961-3) as they independently reduce the Turing tapes to "counters".
Melzak (1961) provides his pebble-in-holes counter machine model with indirection but doesn't carry the treatment further. The work of Elgot–Robinson (1964) define the RASP—the computer-like random-access stored-program machines—and appear to be the first to investigate the failure of the bounded counter machine to calculate the mu-recursive functions. This failure—except with the draconian use of Gödel numbers in the manner of Minsky (1961))—leads to their definition of "indexed" instructions (i.e. indirect addressing) for their RASP model. Elgot–Robinson (1964) and more so Hartmanis (1971) investigate RASPs with self-modifying programs. Hartmanis (1971) specifies an instruction set with indirection, citing lecture notes of Cook (1970). For use in investigations of computational complexity Cook and his graduate student Reckhow (1973) provide the definition of a RAM (their model and mnemonic convention are similar to Melzak's, but offer him no reference in the paper). The pointer machines are an offshoot of Knuth (1968, 1973) and independently Schönhage (1980). For the most part the papers contain mathematics beyond the undergraduate level—in particular the primitive recursive functions and mu recursive functions presented elegantly in Kleene (1952) and less in depth, but useful nonetheless, in Boolos-Burgess-Jeffrey (2002). All texts and papers excepting the four starred have been witnessed. These four are written in German and appear as references in Shepherdson–Sturgis (1963) and Elgot–Robinson (1964); Shepherdson–Sturgis (1963) offer a brief discussion of their results in Shepherdson–Sturgis' Appendix A. The terminology of at least one paper (Kaphengst (1959) seems to hark back to the Burke-Goldstine-von Neumann (1946-7) analysis of computer architecture. References Notes Sources George Boolos, John P. Burgess, Richard Jeffrey (2002), Computability and Logic: Fourth Edition, Cambridge University Press, Cambridge, England. The original Boolos-Jeffrey text has been extensively revised by Burgess: more advanced than an introductory textbook. "Abacus machine" model is extensively developed in Chapter 5 Abacus Computability; it is one of three models extensively treated and compared—the Turing machine (still in Boolos' original 4-tuple form) and recursion the other two. Arthur Burks, Herman Goldstine, John von Neumann (1946), "Preliminary discussion of the logical design of an electronic computing instrument", reprinted pp. 92ff in Gordon Bell and Allen Newell (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. . Stephen A. Cook and Robert A. Reckhow (1972), Time-bounded random access machines, Journal of Computer Systems Science 7 (1973), 354–375. Martin Davis (1958), Computability & Unsolvability, McGraw-Hill Book Company, Inc. New York. Calvin Elgot and Abraham Robinson (1964), "Random-Access Stored-Program Machines, an Approach to Programming Languages", Journal of the Association for Computing Machinery, Vol. 11, No. 4 (October, 1964), pp. 365–399. J. Hartmanis (1971), "Computational Complexity of Random Access Stored Program Machines," Mathematical Systems Theory 5, 3 (1971) pp. 232–245. John Hopcroft, Jeffrey Ullman (1979). Introduction to Automata Theory, Languages and Computation, 1st ed., Reading Mass: Addison-Wesley. . A difficult book centered around the issues of machine-interpretation of "languages", NP-Completeness, etc. Stephen Kleene (1952), Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam, Netherlands. . 
Donald Knuth (1968), The Art of Computer Programming, Second Edition 1973, Addison-Wesley, Reading, Massachusetts. Cf pages 462-463 where he defines "a new kind of abstract machine or 'automaton' which deals with linked structures." The manuscript was received by the journal on June 15, 1961. In his Appendix II, Lambek proposes a "formal definition of 'program'. He references Melzak (1961) and Kleene (1952) Introduction to Metamathematics. The manuscript was received by the journal on May 15, 1961. Melzak offers no references but acknowledges "the benefit of conversations with Drs. R. Hamming, D. McIlroy and V. Vyssots of the Bell telephone Laborators and with Dr. H. Wang of Oxford University." In particular see chapter 11: Models Similar to Digital Computers and chapter 14: Very Simple Bases for Computability. In the former chapter he defines "Program machines" and in the later chapter he discusses "Universal Program machines with Two Registers" and "...with one register", etc. John C. Shepherdson and H. E. Sturgis (1961) received December 1961 "Computability of Recursive Functions", Journal of the Association of Computing Machinery (JACM) 10:217-255, 1963. An extremely valuable reference paper. In their Appendix A the authors cite 4 others with reference to "Minimality of Instructions Used in 4.1: Comparison with Similar Systems". Kaphengst, Heinz, "Eine Abstrakte programmgesteuerte Rechenmaschine", Zeitschrift fur mathematische Logik und Grundlagen der Mathematik 5 (1959), 366-379. Ershov, A. P. "On operator algorithms", (Russian) Dok. Akad. Nauk 122 (1958), 967-970. English translation, Automat. Express 1 (1959), 20-23. Péter, Rózsa "Graphschemata und rekursive Funktionen", Dialectica 12 (1958), 373. Hermes, Hans "Die Universalität programmgesteuerter Rechenmaschinen". Math.-Phys. Semesterberichte (Göttingen) 4 (1954), 42-53. Arnold Schönhage (1980), Storage Modification Machines, Society for Industrial and Applied Mathematics, SIAM J. Comput. Vol. 9, No. 3, August 1980. Wherein Schōnhage shows the equivalence of his SMM with the "successor RAM" (Random Access Machine), etc. resp. Storage Modification Machines, in Theoretical Computer Science (1979), pp. 36–37 Peter van Emde Boas, "Machine Models and Simulations" pp. 3–66, in: Jan van Leeuwen, ed. Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity, The MIT PRESS/Elsevier, 1990. (volume A). QA 76.H279 1990. van Emde Boas' treatment of SMMs appears on pp. 32–35. This treatment clarifies Schōnhage 1980—it closely follows but expands slightly the Schōnhage treatment. Both references may be needed for effective understanding. Hao Wang (1957), "A Variant to Turing's Theory of Computing Machines", JACM (Journal of the Association for Computing Machinery) 4; 63–92. Presented at the meeting of the Association, June 23–25, 1954. External links Igblan - Minsky Register Machines Models of computation
23154946
https://en.wikipedia.org/wiki/Green%20Dam%20Youth%20Escort
Green Dam Youth Escort
Green Dam Youth Escort is content-control software for Windows developed in the People's Republic of China (PRC) which, under a directive from the Ministry of Industry and Information Technology (MIIT) that was to take effect on 1 July 2009, was to be pre-installed on, or have its setup files shipped on an accompanying compact disc with, all new personal computers sold in mainland China, including those imported from abroad. Subsequently, this was changed to be voluntary. End-users, however, are not under a mandate to run the software. As of 30 June 2009, the mandatory pre-installation of the Green Dam software on new computers was delayed to an undetermined date. However, Asian brands such as Sony, Acer, Asus, BenQ and Lenovo were shipping the software as originally ordered. On 14 August 2009, Li Yizhong, minister of industry and information technology, announced that computer manufacturers and retailers were no longer obliged to ship the software with new computers for home or business use, but that schools, internet cafes and other public-use computers would still be required to run the software. Devoid of state funding since 2009, the business behind the software was on the verge of collapsing by July 2010. According to Beijing Times, the project team under Beijing Dazheng, one of the two companies responsible for development and support of the software, has been disbanded and its office shut down; the team under Zhengzhou Jinhui, the other company, is also in a difficult situation and is likely to suffer the same fate at any time. The 20 million users of the software will lose technical support and customer service should the project cease operation. Functions Designed to work with Microsoft Windows operating systems, the software was developed by Zhengzhou Jinhui Computer System Engineering Ltd. (郑州金惠计算机系统工程有限公司 – Jinhui) with input from Beijing Dazheng Human Language Technology Academy Ltd. (北京大正语言知识处理科技有限公司 - Dazheng). The software, commissioned by the Ministry of Industry and Information Technology through an open tender worth 41.7 million yuan in May 2008, is at least officially aimed at restricting online pornography; however, it may be used for electronic censorship and surveillance in addition to its stated purpose. Green Dam Youth Escort automatically downloads the latest updates of a list of prohibited sites from an online database, and also collects private user data. Bryan Zhang, the founder of Jinhui, said that users would not be permitted to see the list, but would have the option of unblocking sites and uninstalling the software. Additional search terms can also be blocked at the owner's discretion. Scope A notice issued by the Ministry of Industry and Information Technology on 19 May stated that, as of 1 July 2009, manufacturers must ship machines to be sold in China with the software preloaded—either pre-installed or enclosed on a compact disc, and that manufacturers are required to report the number of machines shipped with the software to the government. A separate notice on the ministry's website required schools to install the software on every computer in their purview by the end of May. The ministry shortlisted products from two suppliers, Jinhui and Dazheng. According to the directive, the aim is to "build a healthy and harmonious online environment that does not poison young people's minds".
Qin Gang, spokesman for the foreign ministry, said the software would filter out pornography or violence: "The purpose of this is to effectively manage harmful material for the public and prevent it from being spread," adding that "[t]he Chinese government pushes forward the healthy development of the internet. But it lawfully manages the internet". In June 2009, state-run Chinese media announced that the installation of the Green Dam Youth Escort would not be compulsory but an optional package. Trials In 2008, under instructions from political leaders, the MIIT implemented a "community-oriented green open Internet filtering software project" with the support of the Central Civilisation Office and the Ministry of Finance. Its aim was to build a "green, healthy network environment, to protect the healthy growth of young people". Trials commenced in Zhengzhou, Nanjing, Lanzhou, and Xi'an in October 2008 after the ministry negotiated with the software suppliers and 50 web portals to make the software publicly available without charge, and more than 2,000 installations took place. Trials rolled out to 10 more cities, including Chengdu, Shenyang, Harbin, and Qingdao. The ministry claimed that by December 2008, the software had been downloaded more than 100,000 times, and 3 million times since the end of March 2009. Five leading PC vendors in mainland China, Founder, Lenovo, Tongfang, Great Wall and HEDY, also participated in trial installations. Censorship concerns Professor Jonathan Zittrain, of Harvard's Berkman Center said: "Once you've got government-mandated software installed on each machine, the software has the keys to the kingdom... While the justification may be pitched as protecting children and mostly concerning pornography, once the architecture is set up it can be used for broader purposes, such as the filtering of political ideas." Colin Maclay, another Harvard academic, said that Green Dam creates a log file of all of the pages that the user tries to access. "At the moment it's unclear whether that is reported back, but it could be." In fact, the current software filter contains about 85% political keywords, and only 15% pornography-related keywords. Reception and responses Computer industry In June 2009, the computer industry advocacy organization, Computer and Communications Industry Association (CCIA), said the development was "very unfortunate". Ed Black, CCIA president criticised the move as "clearly an escalation of attempts to limit access and the freedom of the internet, [...with] economic and trade as well as cultural and social ramifications." Black said the Chinese were attempting to "not only control their own citizens' access to the internet but to force everybody into being complicit and participate in a level of censorship". The CCIA was reported to be taking up a test case for American tech companies wishing to present "a united front against censorship" and it called on the Obama administration to intervene with Beijing over the requirement that manufacturers pre-install the software on all new computers. On 8 June, Microsoft said that appropriate parental control tools were "an important societal consideration". However, "[i]n this case, we agree with others in industry and around the world that important issues such as freedom of expression, privacy, system reliability and security need to be properly addressed." An international group of business associations urged the government to scrap the Green Dam directive in a letter to Chinese Premier Wen Jiabao. 
The letter was signed by the heads of 22 organisations representing international businesses, including the U.S. Chamber of Commerce, the European-American Business Council, the Information Technology Industry Council and other associations from North America, Europe, and Japan. In moves which the San Francisco Chronicle suggested were politically motivated by the quest for closer ties, Taiwanese manufacturers Acer, Asus and BenQ announced they were already shipping products with Green Dam as originally ordered, joined by Sony and Lenovo. Public Online polls conducted by leading Chinese web portals revealed poor acceptance of the software by netizens. On Sina and Netease, over 80% of poll participants said they would not consider or were not interested in using the software; on Tencent, over 70% of poll participants said it was unnecessary for new computers to be preloaded with filtering software; on Sohu, over 70% of poll participants said filtering software would not effectively prevent minors from browsing inappropriate websites. A poll conducted by the Southern Metropolis Daily showed similar results. The OpenNet Initiative project acknowledged the broad global support for measures to help parents limit their children's exposure to harmful online material, and published a detailed report on the technical and political flaws of the software and its implications. Internet citizens have created a manga-style moe anthropomorphism named 'Green Dam Girl', similar to the OS-tans. Many versions exist, but the common features are that she is dressed in green, wears a river crab hat, holds a rabbit (the Green Dam mascot) in her hand, and is armed with a paintbrush to wipe out online filth. She also commonly wears an armband with the word Discipline written on it. On 11 June 2009, a team released a third-party tool aiming to provide users with options to disable the software, change the master password and perform post-uninstallation clean-up (i.e., removing files and registry entries left behind by the uninstaller). Government and manufacturer A BBC News article reported that critics feared the new software could be used by the government to enhance the existing internet censorship system. Jinhui's general manager, [Bryan] Zhang Chenmin, rejected the accusation: "It's a sheer commercial activity, having nothing to do with the government," he said. On 10 June, amidst massive criticism circulating on the internet about the software and the MIIT's directive, the Publicity Department of the Communist Party of China Central Committee, the agency responsible for censorship, issued an instruction attributed to "central leaders" requiring the Chinese media to stop publishing questioning or critical opinions. Reports in defense of the official stand appeared subsequently, with a commentary by the state-run Xinhua news agency saying "support largely stems from end users, opposing opinions primarily come from a minority of media outlets and businesses". The instruction also required online forums to block and remove "offensive speech evolved from the topic" promptly. In response to the "public concern, anger and protest" triggered by the government edict, China Daily put forward the case for free choice, saying: "Respect for an individual's right to choice is an important indicator of a free society, depriving them of which is gross transgression." 
On 15 June, an official of the Department of Software Service under the MIIT downplayed the compulsory aspect of the software: "The PC makers only need to save the setup files of the program on the hard drives of the computers, or provide CD-ROMs containing the program with their PC packages," he said. Users would have the final say on whether or not to install the software, he continued, "so it is misleading to say the government compels PC users to use the software ... The government's role is limited to having the software developed and providing it free". Further critical articles appeared in both the state-run People's Daily and the relatively liberal China Youth Daily, a paper run by the China Youth League, of which Chinese President Hu Jintao was a member and a patron. This led to the belief that support for the MIIT's directive was divided within the Chinese government itself. On the eve of the introduction of the mandatory pre-installation of the Green Dam software on new computers, it was postponed. The MIIT said it would "keep on soliciting opinions to perfect the pre-installation plan." Ministry sources confirmed that the software had been patched, and that the government procurement procedure of the software "had complied with China's Government Procurement Law, which was open, fair, transparent, non-exclusive, [...] under strict supervision" and "in line with regulations of the World Trade Organization". US Government On meeting with officials of the MIIT and the ministry of commerce about Green Dam, American diplomats in China issued a statement. Defects and software issues Functional defects Jinhui claimed that Green Dam recognized pornographic images by analyzing skin-coloured regions, complemented by human face recognition. However, according to a Southern Weekly article, the software is incapable of recognizing pictures of nudity featuring black- or red-skinned characters, but is so sensitive to images with large patches of yellow that it censors promotional images of the film Garfield: A Tail of Two Kitties. The article also cited an expert saying that the software's misrecognition of "inappropriate contents" in applications including Microsoft Word can lead it to forcibly close those applications without notifying the user, thus causing data loss. On 21 June 2009, Ming Pao reported that the software detected and censored pictures of Chinese political leaders as pornography. On 11 June 2009, a BBC News article reported that potential faults in the software could lead to a large-scale disaster. The report included comments by Isaac Mao, who said that there were "a series of software flaws", including the unencrypted communications between the software and the company's servers, which could allow hackers access to people's private data or place malicious script on machines on the network to "affect [a] large scale disaster". The software runs only on x86 versions of Microsoft Windows, so users of Microsoft Windows x86-64, Mac OS X, Linux and other operating systems are not covered. Even on Microsoft Windows, the software is known to interfere with Internet Explorer and Google Chrome, and is incompatible with Mozilla Firefox. Also on 11 June 2009, a Netease article reported that the master password of the software could be easily cracked. The software stores the MD5 checksum of the password in a text file disguised as a DLL (C:\Windows\System32\kwpwf.dll), so the password can be arbitrarily reset by changing the contents of the file. 
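The reported weakness can be illustrated with a short sketch. The following is a hypothetical example rather than the vendor's code; it assumes the file simply holds the password's MD5 digest as a plain hexadecimal string, in which case anyone with write access to that file can substitute the digest of a password of their own choosing.

```python
# Hypothetical illustration of the reported weakness: if an application keeps
# only an unprotected MD5 digest of its master password in a writable file,
# overwriting that digest effectively resets the password. The file path and
# the hex-string storage format are assumptions based on the description
# above, not verified details of the actual software.
import hashlib

PASSWORD_FILE = r"C:\Windows\System32\kwpwf.dll"  # path as reported

def reset_master_password(new_password: str, path: str = PASSWORD_FILE) -> None:
    """Overwrite the stored digest with the MD5 of a password of our choosing."""
    digest = hashlib.md5(new_password.encode("utf-8")).hexdigest()
    with open(path, "w", encoding="ascii") as handle:
        handle.write(digest)

if __name__ == "__main__":
    reset_master_password("letmein")  # the software would then accept "letmein"
```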
This weakness was ridiculed by some netizens, who said the software could be cracked by "elementary school students". Researchers from the University of Michigan found that the uninstaller "appears to effectively remove Green Dam from the computer", whereas some sources state that part of the software (e.g. executables loaded on startup) cannot be removed by its own uninstaller, though, according to blogs and media reports, most of it was removed at the PRC government's request. Security vulnerabilities On 11 June 2009, Scott Wolchok, Randy Yao, and J. Alex Halderman from the University of Michigan published an analysis of Green Dam Youth Escort. They located various security vulnerabilities that can allow "malicious sites to steal private data, send spam, or enlist the computer in a botnet" and "the software makers or others to install malicious code during the update process". They recommended that users uninstall the software immediately for protection. Jinhui's general manager, [Bryan] Zhang Chenmin, attacked the Wolchok et al. report as irresponsible and a breach of his company's copyright, and said that Jinhui had been ordered to patch the weaknesses. Wolchok et al. indicated the existence of buffer overflow vulnerabilities, which they ascribed to programming errors. Because the software uses fixed-length buffers, a buffer overflow may occur when it performs URL filtering or updates its blacklist filter files; such an overflow can corrupt the execution stack and potentially allow execution of malicious code. Furthermore, because updates are delivered via unencrypted HTTP, the automatic filter update feature opens the door to the computer being remotely controlled by the software's makers, and possibly by third parties who manage to impersonate the update server. The report included an example page that exploits the buffer overflow vulnerability to crash the software. On 12 June 2009, an exploit that takes advantage of the same defect to deploy working shellcode was published on the website milw0rm.com. The author claimed that the exploit is able to bypass the DEP and ASLR protection mechanisms on Windows Vista. Alleged software plagiarism and license violation In addition to security vulnerabilities, Wolchok, Yao and Halderman also found that a number of blacklist files used by Green Dam Youth Escort were taken from the censorship program CyberSitter, developed by Solid Oak Software Inc. The decrypted configuration file references blacklists with download URLs at CyberSitter's website. They also discovered in the software a news bulletin published by CyberSitter in 2004, which they conjectured had been included accidentally. A post on the Chinese IT website Solidot gave details of the copied files and claimed that the files were outdated. Both the Wolchok et al. report and a technical analysis released on WikiLeaks indicated that the software contains code libraries and a configuration file from the BSD-licensed computer vision library OpenCV. The WikiLeaks document said the software violated the BSD license. U.S. lawsuit According to The Wall Street Journal, Solid Oak, which had been apprised of the infringement, announced it would seek injunctions against US manufacturers to stop them shipping machines with Green Dam. The report included a response by Jinhui Computer System Engineering Co. denying that they stole anything, quoting Bryan Zhang as saying "That's impossible". Internet lawyer Jonathan Zittrain said that if the computers are only sold in China it would not be a violation of U.S. 
copyright and the issue "would have to be resolved in a Chinese court under Chinese law". Solid Oak's Mr Milburn was reported by BBC News as saying that he was not sure legal action would be worth the effort, but that he would also file a complaint with the Federal Bureau of Investigation's Computer Crime Task Force. Solid Oak Software, having determined "without a doubt that Green Dam is indeed pirated, and using 100 percent of our code", sent cease and desist letters to Hewlett-Packard and Dell, asking them to respond by 24 June. In January 2010, Cybersitter filed a $2.2 billion lawsuit against the PRC government and Jinhui Computer System Engineering, charging that Green Dam Youth Escort's developers had stolen more than 5,000 lines of code from Cybersitter. In December 2010, a California court denied a motion to have the suit dropped. The motion was filed by Sony, Acer, BenQ and Asustek, who were named as defendants in a list that also included Chinese PC makers Lenovo and Haier. Reactions of the software's makers According to an addendum to the Wolchok et al. report published on 18 June 2009, the makers of Green Dam Youth Escort silently patched the software on 13 June, addressing at least the one particular buffer overflow vulnerability showcased in the original report. Despite the patch, the software remained vulnerable to more sophisticated attacks, as demonstrated by a new example attack page included in the addendum, leading the authors to stand by their previous recommendation that users uninstall the software immediately. According to the same addendum, an update was released on 12 June 2009 to reconfigure the software's filtering blacklist files, modifying one blacklist and disabling the rest. However, files taken from CyberSitter continue to be present on the computer even after the update, and are still used in a pre-update version of the software available from its makers' website. Another update was released on 17 June 2009 to include OpenCV's BSD license in the software's help file to address the license violation issue. Loss of funding The project was reportedly dead because the ministry refused to continue funding it. The Beijing Times reported that Beijing Dazheng Human Language Technology Academy had closed the office for the Green Dam project and made up to 30 IT engineers redundant, and that the co-developer, Zhengzhou Jinhui Computer System Engineering, would soon run into financial difficulties through lack of funding. However, Dazheng said it had been forced to downsize (and not shut) the Green Dam unit due to financial constraints. Dazheng's general manager said his company received 19.9 million yuan in the first year and had not received payment since, and that its commitment to providing support and updates for the product was costing 7 million yuan annually. Critics said the lack of transparency in the funding cut cast the Ministry in a bad light. In 2010, other commentators, whilst noting no change in the government's policy towards policing the Internet, said the de facto abandonment of the project was an admission of error. See also Internet censorship in the People's Republic of China Golden Shield Project, also known as the "Great Firewall of China" Content-control software References Content-control software Internet censorship in China Science and technology in the People's Republic of China
13925784
https://en.wikipedia.org/wiki/John%20Coll
John Coll
John Alexander Coll was a British computer specialist. While teaching physics at Oundle School he built a number of computers and was involved in Micro Users in Secondary Education (MUSE). He helped write the functional description for the BBC Computer and played an important role in convincing senior management at the BBC that it could be done. He later wrote the BBC Microcomputer User Guide, which was supplied by Acorn Computers with the BBC Micro, and appeared regularly on the television programmes Making the Most of the Micro and Micro Live, which featured the computer. Professional career He taught physics at Keil School and then at Oundle School, where he was also head of Electronics and a tutor at Laxton House. At Oundle he learnt to program the school's Data General Nova 2 computer alongside a number of pupils, built a Motorola 6800-based microcomputer from scratch, designing and etching the printed circuit boards personally, and then purchased and built a kit SWTPC 6800-based computer which was made available to the pupils. His relationship with SWTPC's UK operation helped many former pupils gain gap-year and full-time jobs and a foothold in the computer industry. He was also active with the organisation 'Micro Users in Secondary Education (MUSE)'. With David Allen, he was then asked by the British Broadcasting Corporation to help draw up the functional description for a computer which would be used as part of a television series to teach computer literacy. Of John, a member of the team at the BBC said: "It was John's drive, determination and sheer brilliance that really pulled the whole thing off". He later wrote the BBC Microcomputer User Guide with David Allen, which was supplied by Acorn Computers with the BBC Micro; he appeared regularly on the television programmes Making the Most of the Micro and Micro Live and wrote many articles for Personal Computer World during its early years. John also invested his time in people and wanted to help them realise their potential. His philanthropy mainly focused on educating people about IT. Through his company Connection Software he started the charity Educated Horizons, which funded students from disadvantaged backgrounds in the Chikomba District to pursue further education in higher institutions of learning in Zimbabwe. He also equipped many high schools in the Harare Archdiocese with computers and other IT equipment to support the study of technical subjects such as Computer Science. He was the Patron of the St Francis of Assisi Computer Science class (2010). Until his death on 23 December 2014 John ran Connection Software, a telecoms software house and ASP specialising in SMS, MMS and VoIP. Publications The BBC Microcomputer User Guide was written by John Coll and edited by David Allen for the British Broadcasting Corporation. References External links Connection Software British computer scientists 2014 deaths
1600199
https://en.wikipedia.org/wiki/The%20Royal%20Opera
The Royal Opera
The Royal Opera is a British opera company based in central London, resident at the Royal Opera House, Covent Garden. Along with the English National Opera, it is one of the two principal opera companies in London. Founded in 1946 as the Covent Garden Opera Company, the company had that title until 1968. It brought a long annual season and consistent management to a house that had previously hosted short seasons under a series of impresarios. Since its inception, it has shared the Royal Opera House with the dance company now known as The Royal Ballet. When the company was formed, its policy was to perform all works in English, but since the late 1950s most operas have been performed in their original language. From the outset, performers have comprised a mixture of British and Commonwealth singers and international guest stars, but fostering the careers of singers from within the company was a consistent policy of the early years. Among the many guest performers have been Maria Callas, Plácido Domingo, Kirsten Flagstad, Hans Hotter, Birgit Nilsson, Luciano Pavarotti and Elisabeth Schwarzkopf. Among those who have risen to international prominence from the ranks of the company are Geraint Evans, Joan Sutherland, Kiri Te Kanawa and Jon Vickers. The company's growth under the management of David Webster from modest beginnings to parity with the world's greatest opera houses was recognised by the grant of the title "The Royal Opera" in 1968. Under Webster's successor, John Tooley, appointed in 1970, The Royal Opera prospered, but after his retirement in 1988, there followed a period of instability and the closure of the Royal Opera House for rebuilding and restoration between 1997 and 1999. The 21st century has seen a stable managerial regime once more in place. The company has had six music directors since its inception: Karl Rankl, Rafael Kubelík, Georg Solti, Colin Davis, Bernard Haitink and Antonio Pappano. History Background From the mid-19th century, opera had been presented on the site of Covent Garden's Royal Opera House, at first by Michael Costa's Royal Italian Opera company. After a fire, the new building opened in 1858 with The Royal English Opera company, which moved there from the Theatre Royal, Drury Lane. From the 1860s until the Second World War, various syndicates or individual impresarios presented short seasons of opera at the Royal Opera House (so named in 1892), sung in the original language, with star singers and conductors. Pre-war opera was described by the historian Montague Haltrecht as "international, dressy and exclusive". During the war, the Royal Opera House was leased by its owners, Covent Garden Properties Ltd, to Mecca Ballrooms who used it profitably as a dance hall. Towards the end of the war, the owners approached the music publishers Boosey and Hawkes to see if they were interested in taking a lease of the building and staging opera (and ballet) once more. Boosey and Hawkes took a lease, and granted a sub-lease at generous terms to a not-for-profit charitable trust established to run the operation. The chairman of the trust was Lord Keynes. There was some pressure for a return to the pre-war regime of starry international seasons. Sir Thomas Beecham, who had presented many Covent Garden seasons between 1910 and 1939 confidently expected to do so again after the war. 
However, Boosey and Hawkes, and David Webster, whom they appointed as chief executive of the Covent Garden company, were committed to presenting opera all year round, in English with a resident company. It was widely assumed that this aim would be met by inviting the existing Sadler's Wells Opera Company to become resident at the Royal Opera House. Webster successfully extended just such an invitation to the Sadler's Wells Ballet Company, but he regarded the sister opera company as "parochial". He was determined to set up a new opera company of his own. The British government had recently begun to give funds to subsidise the arts, and Webster negotiated an ad hoc grant of £60,000 and an annual subsidy of £25,000, enabling him to proceed. Beginnings: 1946–1949 Webster's first priority was to appoint a musical director to build the company from scratch. He negotiated with Bruno Walter and Eugene Goossens, but neither of those conductors was willing to consider an opera company with no leading international stars. Webster appointed a little-known Austrian, Karl Rankl, to the post. Before the war, Rankl had acquired considerable experience in charge of opera companies in Germany, Austria and Czechoslovakia. He accepted Webster's invitation to assemble and train the principals and chorus of a new opera company, alongside a permanent orchestra that would play in both operas and ballets. The new company made its debut in a joint presentation, together with the Sadler's Wells Ballet Company, of Purcell's The Fairy-Queen on 12 December 1946. The first production by the opera company alone was Carmen, on 14 January 1947. Reviews were favourable. The Times said: All the members of the cast for the production were from Britain or the Commonwealth. Later in the season, one of England's few pre-war international opera stars, Eva Turner, appeared as Turandot. For the company's second season, eminent singers from continental Europe were recruited, including Ljuba Welitsch, Elisabeth Schwarzkopf, Paolo Silveri, Rudolf Schock and Set Svanholm. Other international stars who were willing to re-learn their roles in English for the company in its early years included Kirsten Flagstad and Hans Hotter for The Valkyrie. Nevertheless, even as early as 1948, the opera in English policy was weakening; the company was obliged to present some Wagner performances in German to recruit leading exponents of the main roles. At first Rankl conducted all the productions; he was dismayed when eminent guest conductors including Beecham, Clemens Krauss and Erich Kleiber were later invited for prestige productions. By 1951 Rankl felt that he was no longer valued, and announced his resignation. In Haltrecht's view, the company that Rankl built up from nothing had outgrown him. In the early years, the company sought to be innovative and widely accessible. Ticket prices were kept down: in the 1949 season 530 seats were available for each performance at two shillings and sixpence. In addition to the standard operatic repertory, the company presented operas by living composers such as Britten, Vaughan Williams, Bliss, and, later, Walton. The young stage director Peter Brook was put in charge of productions, bringing a fresh and sometimes controversial approach to stagings. 1950s After Rankl's departure the company engaged a series of guest conductors while Webster sought a new musical director. 
His preferred candidates, Erich Kleiber, John Barbirolli, Josef Krips, Britten and Rudolf Kempe, were among the guests but none would take the permanent post. It was not until 1954 that Webster found a replacement for Rankl in Rafael Kubelík. Kubelík announced immediately that he was in favour of continuing the policy of singing in the vernacular: "Everything that the composer has written should be understood by the audience; and that is not possible if the opera is sung in a language with which they are not familiar". This provoked a public onslaught by Beecham, who continued to maintain that it was impossible to produce more than a handful of English-speaking opera stars, and that importing singers from continental Europe was the only way to achieve first-rate results. Despite Beecham's views, by the mid-1950s the Covent Garden company included many British and Commonwealth singers who were already or were soon to be much sought after by overseas opera houses. Among them were Joan Carlyle, Marie Collier, Geraint Evans, Michael Langdon, Elsie Morison, Amy Shuard, Joan Sutherland, Josephine Veasey and Jon Vickers. Nevertheless, as Lords Goodman and Harewood put it in a 1969 report for the Arts Council, "[A]s time went on the operatic centre of British life began to take on an international character. This meant that, while continuing to develop the British artists, it was felt impossible to reach the highest international level by using only British artists or singing only in English". Guest singers from mainland Europe in the 1950s included Maria Callas, Boris Christoff, Victoria de los Ángeles, Tito Gobbi and Birgit Nilsson. Kubelík introduced Janáček's Jenůfa to British audiences, sung in English by a mostly British cast. The verdict of the public on whether operas should be given in translation or the original was clear. In 1959, the opera house stated in its annual report, "[T]he percentage attendance at all opera in English was 72 per cent; attendance at the special productions marked by higher prices was 91 per cent … it is 'international' productions with highly priced seats that reduce our losses". The opera in English policy was never formally renounced. On this subject, Peter Heyworth wrote in The Observer in 1960 that Covent Garden had "quickly learned the secret that underlies the genius of British institutions for undisturbed change: it continued to pay lip service to a policy that it increasingly ignored". By the end of the 1950s, Covent Garden was generally regarded as approaching the excellence of the world's greatest opera companies. Its sister ballet company had achieved international recognition and was granted a royal charter in 1956, changing its title to "The Royal Ballet"; the opera company was close to reaching similar eminence. Two landmark productions greatly enhanced its reputation. In 1957, Covent Garden presented the first substantially complete professional staging at any opera house of Berlioz's vast opera The Trojans, directed by John Gielgud and conducted by Kubelík. The Times commented, "It has never been a success; but it is now". In 1958 the present theatre's centenary was marked by Luchino Visconti's production of Verdi's Don Carlos, with Vickers, Gobbi, Christoff, Gré Brouwenstijn and Fedora Barbieri, conducted by Carlo Maria Giulini. The work was then a rarity, and had hitherto been widely regarded as impossible to stage satisfactorily, but Visconti's production was a triumph. 
1960s Kubelík did not renew his contract when it expired, and from 1958 there was an interregnum until 1961, covered by guest conductors including Giulini, Kempe, Tullio Serafin, Georg Solti and Kubelík himself. In June 1960 Solti was appointed musical director from the 1961 season onwards. With his previous experience in charge of the Munich and Frankfurt opera houses, he was at first uncertain that Covent Garden, not yet consistently reaching the top international level, was a post he wanted. Bruno Walter persuaded him otherwise, and he took up the musical directorship in August 1961. The press gave him a cautious welcome, but there was some concern about a drift away from the company's original policies: Solti, however, was an advocate of opera in the vernacular, and promoted the development of British and Commonwealth singers in the company, frequently casting them in his recordings and important productions in preference to overseas artists. Among those who came to prominence during the decade were Gwyneth Jones and Peter Glossop. Solti demonstrated his belief in vernacular opera with a triple bill in English of L'heure espagnole, Erwartung and Gianni Schicchi. Nevertheless, Solti and Webster had to take into account the complete opposition on the part of such stars as Callas to opera in translation. Moreover, as Webster recognised, the English-speaking singers wanted to learn their roles in the original so that they could sing them in other countries and on record. Increasingly, productions were in the original language. In the interests of musical and dramatic excellence, Solti was a strong proponent of the stagione system of scheduling performances, rather than the traditional repertory system. By 1967, The Times said, "Patrons of Covent Garden today automatically expect any new production, and indeed any revival, to be as strongly cast as anything at the Met in New York, and as carefully presented as anything in Milan or Vienna". The company's repertory in the 1960s combined the standard operatic works and less familiar pieces. The five composers whose works were given most frequently were Verdi, Puccini, Wagner, Mozart and Richard Strauss; the next most performed composer was Britten. Rarities performed in the 1960s included operas by Handel and Janáček (neither composer's works being as common in the opera house then as now), and works by Gluck (Iphigénie en Tauride), Poulenc (The Carmelites), Ravel (L'heure espagnole) and Tippett (King Priam). There was also a celebrated production of Schoenberg's Moses and Aaron in the 1965–66 and 1966–67 seasons. In the mainstream repertoire, a highlight of the decade was Franco Zeffirelli's production of Tosca in 1964 with Callas, Renato Cioni and Gobbi. Among the guest conductors who appeared at Covent Garden during the 1960s were Otto Klemperer, Pierre Boulez, Claudio Abbado and Colin Davis. Guest singers included Jussi Björling, Mirella Freni, Sena Jurinac, Irmgard Seefried and Astrid Varnay. The company made occasional appearances away from the Royal Opera House. Touring within Britain was limited to centres with large enough theatres to accommodate the company's productions, but in 1964 the company gave a concert performance of Otello at the Proms in London. Thereafter an annual appearance at the Proms was a regular feature of the company's schedule throughout the 1960s. In 1970, Solti led the company to Germany, where they gave Don Carlos, Falstaff and a new work by Richard Rodney Bennett. All but two of the principals were British. 
The public in Munich and Berlin were, according to the Frankfurter Allgemeine Zeitung, "beside themselves with enthusiasm". In 1968, on the recommendation of the Home Secretary, James Callaghan, the Queen conferred the title "The Royal Opera" on the company. It was the third stage company in the UK to be so honoured, following the Royal Ballet and the Royal Shakespeare Company. 1970 to 1986 Webster retired in June 1970. The music critic Charles Osborne wrote, "When he retired, he handed over to his successor an organization of which any opera house in the world might be proud. No memorial could be more appropriate". The successor was Webster's former assistant, John Tooley. One of Webster's last important decisions had been to recommend to the board that Colin Davis should be invited to take over as musical director when Solti left in 1971. It was announced in advance that Davis would work in tandem with Peter Hall, appointed director of productions. Peter Brook had briefly held that title in the company's early days, but in general the managerial structure of the opera company differed markedly from that of the ballet. The latter had always had its own director, subordinate to the chief executive of the opera house but with, in practice, a great degree of autonomy. The chief executive of the opera house and the musical director exercised considerably more day-to-day control over the opera company. Appointing a substantial theatrical figure such as Hall was an important departure. Hall, however, changed his mind, and did not take up the appointment, going instead to run the National Theatre. His defection, and the departure to Australian Opera of the staff conductor Edward Downes, a noted Verdi expert, left the company weakened on both production and musical sides. Like his predecessors, Davis experienced hostility from sections of the audience in his early days in charge. His first production after taking over was a well-received Le nozze di Figaro, in which Kiri Te Kanawa achieved immediate stardom, but booing was heard at a "disastrous" Nabucco in 1971, and his conducting of Wagner's Ring was at first compared unfavourably with that of his predecessor. The Covent Garden board briefly considered replacing him, but was dissuaded by its chairman, Lord Drogheda. Davis's Mozart was generally admired; he received much praise for reviving the little-known La clemenza di Tito in 1974. Among his other successes were The Trojans and Benvenuto Cellini. Under Davis, the opera house introduced promenade performances, giving, as Bernard Levin wrote, "an opportunity for those (particularly the young, of course) who could not normally afford the price of stalls tickets to sample the view from the posher quarters at the trifling cost of £3 and a willingness to sit on the floor". Davis conducted more than 30 operas during his 15-year tenure, but, he said, "people like [Lorin] Maazel, Abbado and [Riccardo] Muti would only come for new productions". Unlike Rankl, and like Solti, Davis wanted the world's best conductors to come to Covent Garden. He ceded the baton to guests for new productions including Der Rosenkavalier, Rigoletto and Aida. In The Times, John Higgins wrote, "One of the hallmarks of the Davis regime was the flood of international conductors who suddenly arrived at Covent Garden. While Davis has been in control perhaps only three big names have been missing from the roster: Karajan, Bernstein and Barenboim". 
Among the high-profile guests conducting Davis's company were Carlos Kleiber for performances of Der Rosenkavalier (1974), Elektra (1977), La bohème (1979) and Otello (1980), and Abbado conducting Un ballo in maschera (1975), starring Plácido Domingo and Katia Ricciarelli. In addition to the standard repertoire, Davis conducted such operas as Berg's Lulu and Wozzeck, Tippett's The Knot Garden and The Ice Break, and Alexander Zemlinsky's Der Zwerg and Eine florentinische Tragödie. Among the star guest singers during the Davis years were the sopranos Montserrat Caballé and Leontyne Price, the tenors Carlo Bergonzi, Nicolai Gedda and Luciano Pavarotti and the bass Gottlob Frick. British singers appearing with the company included Janet Baker, Heather Harper, John Tomlinson and Richard Van Allan. Davis's tenure, at that time the longest in The Royal Opera's history, closed in July 1986 not with a gala, but, at his insistence, with a promenade performance of Fidelio with cheap admission prices. 1987 to 2002 To succeed Davis, the Covent Garden board chose Bernard Haitink, who was then the musical director of the Glyndebourne Festival. He was highly regarded for the excellence of his performances, though his repertory was not large. In particular, he was not known as an interpreter of the Italian opera repertoire (he conducted no Puccini and only five Verdi works during his music directorship at Covent Garden). His tenure began well; a cycle of the Mozart Da Ponte operas directed by Johannes Schaaf was a success, and although a Ring cycle with the Russian director Yuri Lyubimov could not be completed, a substitute staging of the cycle directed by Götz Friedrich was well received. Musically and dramatically the company prospered into the 1990s. A 1993 production of Die Meistersinger, conducted by Haitink and starring John Tomlinson, Thomas Allen, Gösta Winbergh and Nancy Gustafson, was widely admired, as was Richard Eyre's 1994 staging of La traviata, conducted by Solti and propelling Angela Gheorghiu to stardom. For some time, purely musical considerations were overshadowed by practical and managerial crises at the Royal Opera House. Sir John Tooley retired as general director in 1988, and his post was given to the television executive Jeremy Isaacs. Tooley later forsook his customary reticence and pronounced the Isaacs period a disaster, citing poor management that failed to control inflated manning levels with a consequent steep rise in costs and ticket prices. The uneasy relations between Isaacs and his colleagues, notably Haitink, were also damaging. Tooley concluded that under Isaacs "Covent Garden had become a place of corporate entertainment, no longer a theatre primarily for opera and ballet lovers". Isaacs was widely blamed for the poor public relations arising from the 1996 BBC television series The House, in which cameras were permitted to film the day-to-day backstage life of the opera and ballet companies and the running of the theatre. The Daily Telegraph commented, "For years, the Opera House was a byword for mismanagement and chaos. Its innermost workings were exposed to public ridicule by the BBC fly-on-the-wall series The House". In 1995, The Royal Opera announced a "Verdi Festival", of which the driving force was the company's leading Verdian, Sir Edward Downes, by now returned from Australia. The aim was to present all Verdi's operas, either on stage or in concert performance, between 1995 and the centenary of Verdi's death, 2001. 
Those operas substantially rewritten by the composer in his long career, such as Simon Boccanegra, were given in both their original and revised versions. The festival did not manage to stage a complete Verdi cycle; the closure of the opera house disrupted many plans, but as The Guardian put it, "Downes still managed to introduce, either under his own baton or that of others, most of the major works and many of the minor ones by the Italian master." The most disruptive event of the decade for both the opera and the ballet companies was the closure of the Royal Opera House between 1997 and 1999 for major rebuilding. The Independent on Sunday asserted that Isaacs "hopelessly mismanaged the closure of the Opera House during its redevelopment". Isaacs, the paper states, turned down the chance of a temporary move to the Lyceum Theatre almost next door to the opera house, pinning his hopes on a proposed new temporary building on London's South Bank. That scheme was refused planning permission, leaving the opera and ballet companies homeless. Isaacs resigned in December 1996, nine months before the expiry of his contract. Haitink, dismayed by events, threatened to leave, but was persuaded to stay and keep the opera company going in a series of temporary homes in London theatres and concert halls. A semi-staged Ring cycle at the Royal Albert Hall gained superlative reviews and won many new admirers for Haitink and the company, whose members included Tomlinson, Anne Evans and Hildegard Behrens. After Isaacs left, there was a period of managerial instability, with three chief executives in three years. Isaacs's successor, Genista McIntosh, resigned in May 1997 after five months, citing ill-health. Her post was filled by Mary Allen, who moved into the job from the Arts Council. Allen's selection did not comply with the council's rules for such appointments, and following a critical House of Commons Select committee report on the management of the opera house she resigned in March 1998, as did the entire board of the opera house, including the chairman, Lord Chadlington. A new board appointed Michael Kaiser as general director in September 1998. He oversaw the restoration of the two companies' finances and the re-opening of the opera house. He was widely regarded as a success, and there was some surprise when he left in June 2000 after less than two years to run the Kennedy Center in Washington, D.C. The last operatic music to be heard in the old house had been the finale of Falstaff, conducted by Solti with the singers led by Bryn Terfel, in a joint opera and ballet farewell gala in July 1997. When the house reopened in December 1999, magnificently restored, Falstaff was the opera given on the opening night, conducted by Haitink, once more with Terfel in the title role. 2002 to date Following years of disruption and conflict, stability was restored to the opera house and its two companies after the appointment in May 2001 of a new chief executive, Tony Hall, formerly a senior executive at the BBC. The following year Antonio Pappano succeeded Haitink as music director of The Royal Opera. Following the redevelopment, a second, smaller auditorium, the Linbury Studio Theatre has been made available for small-scale productions by The Royal Opera and The Royal Ballet, for visiting companies, and for work produced in the ROH2 programme, which supports new work and developing artists. 
The Royal Opera encourages young singers at the start of their careers with the Jette Parker Young Artists Programme; participants are salaried members of the company and receive daily coaching in all aspects of opera. In addition to the standard works of the operatic repertoire, The Royal Opera has presented many less well known pieces since 2002, including Cilea's Adriana Lecouvreur, Massenet's Cendrillon, Prokofiev's The Gambler, Rimsky-Korsakov's The Tsar's Bride, Rossini's Il turco in Italia, Steffani's Niobe, and Tchaikovsky's The Tsarina's Slippers. Among the composers whose works were premiered were Thomas Adès, Harrison Birtwistle, Lorin Maazel, and Nicholas Maw. Productions in the first five years of Pappano's tenure ranged from Shostakovich's Lady Macbeth of Mtsensk (2004) to Stephen Sondheim's Sweeney Todd (2003) starring Thomas Allen and Felicity Palmer. Pappano's Ring cycle, begun in 2004 and staged as a complete tetralogy in 2007, was praised like Haitink's before it for its musical excellence; it was staged in a production described by Richard Morrison in The Times as "much derided for mixing the homely … the wacky and the cosmic". During Pappano's tenure, his predecessors Davis and Haitink have both returned as guests. Haitink conducted Parsifal, with Tomlinson, Christopher Ventris and Petra Lang in 2007, and Davis conducted four Mozart operas between 2002 and 2011, Richard Strauss's Ariadne auf Naxos in 2007 and Humperdinck's Hansel and Gretel in 2008. In 2007, Sir Simon Rattle conducted a new production of Debussy's Pelléas et Mélisande starring Simon Keenlyside, Angelika Kirchschlager and Gerald Finley. The company visited Japan in 2010, presenting a new production of Manon and the Eyre production of La traviata. While the main company was abroad, a smaller company remained in London, presenting Niobe, Così fan tutte and Don Pasquale at Covent Garden. In 2010, the Royal Opera House received a government subsidy of just over £27m, compared with a subsidy of £15m in 1998. This sum was divided between the opera and ballet companies and the cost of running the building. Compared with opera houses in mainland Europe, Covent Garden's public subsidy has remained low as a percentage of its income – typically 43%, compared with 60% for its counterpart in Munich. In the latter part of the 2000s The Royal Opera gave an average of 150 performances each season, lasting from September to July, of about 20 operas, nearly half of which were new productions. Productions in the 2011–12 season included a new opera (Miss Fortune) by Judith Weir, and the first performances of The Trojans at Covent Garden since 1990, conducted by Pappano, and starring Bryan Hymel, Eva-Maria Westbroek and Anna Caterina Antonacci. From the start of the 2011–12 season Kasper Holten became Director of The Royal Opera, joined by John Fulljames as Associate Director of Opera. At the end of the 2011–12 season ROH2, the contemporary arm of the Royal Opera House, was closed. Responsibility for contemporary programming was split between the Studio programmes of The Royal Opera and The Royal Ballet. Since the start of the 2012–13 season, The Royal Opera has continued to mount around 20 productions and around seven new productions each season. 
The 2012–13 season opened with a revival of Der Ring des Nibelungen, directed by Keith Warner; new productions that season included Robert le diable, directed by Laurent Pelly, Eugene Onegin, directed by Holten, La donna del lago, directed by Fulljames, and the UK premiere of Written on Skin, composed by George Benjamin and directed by Katie Mitchell. Productions by the Studio Programme included the world premiere of David Bruce's The Firework-Maker's Daughter (inspired by Philip Pullman's novel of the same name), directed by Fulljames, and the UK stage premiere of Gerald Barry's The Importance of Being Earnest, directed by Ramin Gray. New productions in the 2013–14 season included Les vêpres siciliennes, directed by Stefan Herheim, Parsifal, directed by Stephen Langridge, Don Giovanni, directed by Holten, Die Frau ohne Schatten, directed by Claus Guth, and Manon Lescaut, directed by Jonathan Kent, and in the Studio Programme the world premiere of Luke Bedford's Through His Teeth, and the London premiere of Luca Francesconi's Quartett (directed by Fulljames). This season also saw the first production of a three-year collaboration between The Royal Opera and Welsh National Opera, staging Moses und Aron in 2014, Richard Ayres's Peter Pan in 2015 and a new commission in 2016 to celebrate WNO's 70th anniversary. Other events this season included The Royal Opera's first collaboration with Shakespeare's Globe, Holten directing L'Ormindo in the newly opened Sam Wanamaker Playhouse. In The Guardian, Tim Ashley wrote, "A more exquisite evening would be hard to imagine"; Dominic Dromgoole, director of the playhouse, expressed the hope that the partnership with the Royal Opera would become an annual fixture. The production was revived in February 2015. In March 2021, the ROH announced the latest extension of Pappano's contract as its music director, to run until the close of the 2023–24 season, when his tenure is scheduled to conclude. Managerial and musical heads, 1946 to date References Notes Footnotes Sources Further reading External links British opera companies Covent Garden Musical groups established in 1946 Opera in London
2765516
https://en.wikipedia.org/wiki/Cellular%20multiprocessing
Cellular multiprocessing
Cellular multiprocessing is a multiprocessing computing architecture, designed initially for Intel central processing units, from Unisys, a worldwide information technology consulting services and solutions company. It consists of partitioning the processors into separate computing environments running different operating systems. With up to 32 processors crossbar-connected to 64 GB of memory and 96 PCI cards, a CMP system provides a mainframe-like architecture using Intel CPUs. CMP supports Windows NT and Windows 2000 Server, AIX, Novell NetWare and UnixWare, and can be run as one large SMP system or as multiple systems running different operating systems. CMP systems support the creation of CPU partitions: for example, a single partition of all 32 processors, or two partitions of 16 processors each, which the installed operating systems see as two separate machines. At the other extreme, a 32-processor system can be divided into as many as 32 partitions of one CPU each. Unisys' CMP is the only server architecture to take full advantage of Microsoft's Windows 2000 Datacenter Server operating system's support for 32 processors. CMP has proven very effective under Linux and UNIX operating systems, whereas Windows Server 2003 installations face practical limits on the number of CPUs per partition: a Windows 2003 partition performs best with no more than four CPUs, and severe performance degradation has been observed when more are assigned; even an eight-CPU partition performs comparably to a two-processor partition. A CMP subpod contains four x86 or Itanium CPUs, which connect through a third-level memory cache to the crossbar. Each crossbar supports two subpods and two direct I/O bridges (DIBs), and can connect to four memory storage units (MSUs). Unisys also provides CMP server technology to Compaq, Dell, Hewlett-Packard and ICL under OEM agreements. See also Asymmetric Multi-Processing Symmetric Multi-Processing References Parallel computing
23277715
https://en.wikipedia.org/wiki/Bhagavad%20Gita
Bhagavad Gita
The translations and interpretations of the Gita have been so diverse that they have been used to support apparently contradictory political and philosophical values. For example, state Gavin Flood and Charles Martin, these interpretations have been used to support positions ranging from "pacifism to aggressive nationalism" in politics, and from "monism to theism" in philosophy. According to William Johnson, the synthesis of ideas in the Gita is such that it can bear almost any shade of interpretation. A translation "can never fully reproduce an original and no translation is transparent", states Richard Davis, but in the case of the Gita the linguistic and cultural distance for many translators is large and steep, which adds to the challenge and affects the translation. For some native translators, their personal beliefs, motivations, and subjectivity affect their understanding, their choice of words and interpretation. Some translations by Indians, with or without Western co-translators, have an "orientalist", "apologetic", "Neo-Vedantin" or "guru phenomenon" bias. According to the exegesis scholar Robert Minor, the Gita is "probably the most translated of any Asian text", but many modern versions heavily reflect the views of the organization or person who does the translating and distribution. In Minor's view, the Harvard scholar Franklin Edgerton's English translation and Richard Garbe's German translation are closer to the text than many others. According to Larson, the Edgerton translation is remarkably faithful, but it is "harsh, stilted, and syntactically awkward" with an "orientalist" bias and lacks "appreciation of the text's contemporary religious significance". The Gita in other languages The Gita has also been translated into European languages other than English. In the sixteenth and seventeenth centuries, in the Mughal Empire, multiple discrete Persian translations of the Gita were completed. In 1808, passages from the Gita were part of the first direct translation of Sanskrit into German, appearing in a book through which Friedrich Schlegel became known as the founder of Indian philology in Germany. The most significant French translation of the Gita, according to J. A. B. van Buitenen, was published by Emile Senart in 1922. Swami Rambhadracharya released the first Braille version of the scripture, with the original Sanskrit text and a Hindi commentary, on 30 November 2007. The Gita Press has published the Gita in multiple Indian languages. R. Raghava Iyengar translated the Gita into Tamil in the sandam metre poetic form. The Bhaktivedanta Book Trust, associated with ISKCON, has re-translated and published A.C. Bhaktivedanta Swami Prabhupada's 1972 English translation of the Gita in 56 non-Indian languages. Vinoba Bhave rendered the Gita in the Marathi language as Geetai, i.e. "Mother Geeta", in a similar shloka form. Paramahansa Yogananda's commentary on the Bhagavad Gita, called God Talks with Arjuna: The Bhagavad Gita, has so far been translated into Spanish, German, Thai and Hindi. The book is significant in that, unlike other commentaries of the Bhagavad Gita, which focus on karma yoga, jnana yoga, and bhakti yoga in relation to the Gita, Yogananda's work stresses the training of one's mind, or raja yoga. Bhashya (commentaries) The Bhagavad Gita integrates various schools of thought, notably Vedanta, Samkhya and Yoga, and other theistic ideas. It remains a popular text for commentators belonging to various philosophical schools. 
However, its composite nature also leads to varying interpretations of the text, and historic scholars have written bhashya (commentaries) on it. According to Mysore Hiriyanna, the Gita is "one of the hardest books to interpret, which accounts for the numerous commentaries on it—each differing from the rest in one essential point or the other". According to Richard Davis, the Gita has attracted much scholarly interest in Indian history, and some 227 commentaries have survived in the Sanskrit language alone. It has also attracted commentaries in regional vernacular languages for centuries, such as the one by Sant (Saint) Dnyaneshwar in the Marathi language (13th century). Classical commentaries The Bhagavad Gita is referred to in the Brahma Sutras, and numerous scholars including Shankara, Bhaskara, Abhinavagupta of the Shaivism tradition, Ramanuja and Madhvacharya wrote commentaries on it. Many of these commentators state that the Gita is "meant to be a moksa-shastra (moksasatra), and not a dharmasastra, an arthasastra or a kamasastra", according to Sharma. Śaṅkara (c. 800 CE) The oldest and most influential surviving commentary was published by Adi Shankara (Śaṅkarācārya). Shankara interprets the Gita in a monist, nondualistic tradition (Advaita Vedanta). Shankara prefaces his comments by stating that the Gita is popular among the laity, that the text has been studied and commented upon by earlier scholars (these texts have not survived), but "I have found that to the laity it appears to teach diverse and quite contradictory doctrines". He calls the Gita "an epitome of the essentials of the whole Vedic teaching". To Shankara, the teaching of the Gita is to shift an individual's focus from the outer, impermanent, fleeting objects of desire and senses to the inner, permanent, eternal atman-Brahman-Vasudeva that is identical, in everything and in every being. Abhinavagupta (c. 1000 CE) Abhinavagupta was a theologian and philosopher of the Kashmir Shaivism (Shiva) tradition. He wrote a commentary on the Gita, the Gitartha-Samgraha, which has survived into the modern era. The Gita text he commented on is a slightly different recension from that of Adi Shankara. He interprets its teachings in the Shaiva Advaita (monism) tradition quite similar to Adi Shankara, but with the difference that he considers both Self and matter to be metaphysically real and eternal. Their respective interpretations of jnana yoga are also somewhat different, and Abhinavagupta uses Atman, Brahman, Shiva, and Krishna interchangeably. Abhinavagupta's commentary is notable for its citations of more ancient scholars, in a style similar to Adi Shankara. However, the texts he quotes have not survived into the modern era. Rāmānuja (c. 1100 CE) Ramanuja was a Hindu theologian, philosopher, and an exponent of the Sri Vaishnavism (Vishnu) tradition in the 11th and early 12th centuries. Like his Vedanta peers, Ramanuja wrote a bhashya (commentary) on the Gita. Ramanuja disagreed with Adi Shankara's interpretation of the Gita as a text on nondualism (Self and Brahman are identical), and instead interpreted it as a form of dualistic and qualified monism philosophy (Vishishtadvaita). Madhva (c. 1250 CE) Madhva, a commentator of the Vedanta school known as Dvaita in modern taxonomy and as Tatvavada in Madhva's own usage, wrote a commentary on the Bhagavad Gita which exemplifies the thinking of the Tatvavada school (Dvaita Vedanta). 
According to Christopher Chapple, in Madhva's school there is "an eternal and complete distinction between the Supreme, the many Selfs, and matter and its divisions". His commentary on the Gita is called . Madhva's commentary has attracted secondary works by pontiffs of the Dvaita Vedanta monasteries such as Padmanabha Tirtha, Jayatirtha, and Raghavendra Tirtha. Keśava Kāśmīri (c. 1479 CE) Keśava Kāśmīri Bhaṭṭa, a commentator of the Dvaitādvaita Vedanta school, wrote a commentary on the Bhagavad Gita named Gita tattva prakashika. The text states that Dasasloki—possibly authored by Nimbarka—teaches the essence of the Gita; the Gita tattva prakashika interprets the Gita also in a hybrid monist-dualist manner. Vallabha (1481–1533 CE) Vallabha, the proponent of "Suddhadvaita" or pure non-dualism, wrote a commentary on the Gita, the Sattvadipika. According to him, the true Self is the Supreme Brahman. Bhakti is the most important means of attaining liberation. Gauḍīya Vaiṣṇava Commentaries Chaitanya Mahaprabhu (b. 1486 CE). Commentaries on various parts of the Gita are in the Gaudiya Vaishnavism Bhakti Vedanta tradition (achintya bheda abheda); these form in part a foundation of the ISKCON (Hare Krishna) interpretation of the Gita. Others Other classical commentators include: Bhāskara (c. 900 CE), who disagreed with Adi Shankara and wrote his own commentary on both the Bhagavad Gita and the Brahma Sutras in his own tradition. According to Bhaskara, the Gita is essentially Advaita, but not quite exactly, suggesting that "the Atman (Self) of all beings are like waves in the ocean that is Brahman". Bhaskara also disagreed with Shankara's formulation of the Maya doctrine, stating that prakriti, atman and Brahman are all metaphysically real. Yamunacharya, Ramanuja's teacher, summarised the teachings of the Gita in his Gitartha sangraham. Nimbarka (1162 CE) followed Bhaskara, but it is unclear if he ever wrote a commentary. The commentary Gita tattva prakashika is generally attributed to a student named Kesava Bhatta in his tradition, written in a hybrid monist-dualist manner, which states that Dasasloki—possibly authored by Nimbarka—teaches the essence of the Gita. Dnyaneshwar's (1290 CE) commentary Dnyaneshwari (Jnaneshwari or Bhavarthadipika) is the oldest surviving literary work in the Marathi language, and one of the foundations of the Varkari tradition in Maharashtra (Bhakti movement, Eknath, Tukaram). The commentary interprets the Gita in the Advaita Vedanta tradition. Dnyaneshwar belonged to the Nath yogi tradition. His commentary on the Gita is notable for stating that it is the devotional commitment and love with inner renunciation that matters, not the name Krishna or Shiva, either of which can be used interchangeably. Vallabha II, a descendant of Vallabha (1479 CE), wrote the commentary Tattvadeepika in the Suddha-Advaita tradition. Madhusudana Saraswati's commentary Gudhartha Deepika is in the Advaita Vedanta tradition. Hanumat's commentary Paishacha-bhasya is in the Advaita Vedanta tradition. Anandagiri's commentary Bhashya-vyakhyanam is in the Advaita Vedanta tradition. Nilkantha's commentary Bhava-pradeeps is in the Advaita Vedanta tradition. Shreedhara's (1400 CE) commentary Avi gita is in the Advaita Vedanta tradition. Dhupakara Shastri's commentary Subodhini is in the Advaita Vedanta tradition. Raghuttama Tirtha's (1548–1596) commentary Prameyadīpikā Bhavabodha is in the Dvaita Vedanta tradition. Raghavendra Tirtha's (1595–1671) commentary Artha samgraha is in the Dvaita Vedanta tradition. 
Vanamali Mishra's (1650–1720) commentary Gitagudharthacandrika is quite similar to Madhvacharya's commentary and is in the Dvaita Vedanta tradition. Purushottama (1668–1781 CE), Vallabha's follower, wrote a commentary. Modern-era commentaries Among notable modern commentators of the Bhagavad Gita are Bal Gangadhar Tilak, Vinoba Bhave, Mahatma Gandhi (who called its philosophy Anasakti Yoga), Sri Aurobindo, Sarvepalli Radhakrishnan, B. N. K. Sharma, Osho, and Chinmayananda. Chinmayananda took a syncretistic approach to interpret the text of the Gita. Tilak wrote his commentary Shrimadh Bhagavad Gita Rahasya while in jail during the period 1910–1911, serving a six-year sentence imposed by the colonial government in India for sedition. While noting that the Gita teaches possible paths to liberation, his commentary places most emphasis on Karma yoga. No book was more central to Gandhi's life and thought than the Bhagavad Gita, which he referred to as his "spiritual dictionary". During his stay in Yeravda jail in 1929, Gandhi wrote a commentary on the Bhagavad Gita in Gujarati. The Gujarati manuscript was translated into English by Mahadev Desai, who provided an additional introduction and commentary. It was published with a foreword by Gandhi in 1946. The version by A. C. Bhaktivedanta Swami Prabhupada, entitled Bhagavad-Gita as It Is, is "by far the most widely distributed of all English Gīta translations" due to the efforts of ISKCON. Its publisher, the Bhaktivedanta Book Trust, estimates sales at twenty-three million copies, a figure which includes the original English edition and secondary translations into fifty-six other languages. The Prabhupada commentary interprets the Gita in the Gaudiya Vaishnavism tradition of Chaitanya, quite similar to Madhvacharya's Dvaita Vedanta ideology. It presents Krishna as the Supreme, a means of saving mankind from the anxiety of material existence through loving devotion. Unlike in Bengal and nearby regions of India, where the Bhagavata Purana is the primary text for this tradition, the devotees of Prabhupada's ISKCON tradition have found better reception for their ideas among the curious in the West through the Gita, according to Richard Davis. In 1966, Mahārishi Mahesh Yogi published a partial translation. An abridged version with 42 verses and commentary was published by Ramana Maharshi. Bhagavad Gita – The song of God is a commentary by Swami Mukundananda. Paramahansa Yogananda's two-volume commentary on the Bhagavad Gita, called God Talks with Arjuna: The Bhagavad Gita, was released in 1995 and is available in five languages. The book is significant in that, unlike other commentaries of the Bhagavad Gita, which focus on karma yoga, jnana yoga, and bhakti yoga in relation to the Gita, Yogananda's work stresses the training of one's mind, or raja yoga. It is published by Self-Realization Fellowship/Yogoda Satsanga Society of India. Eknath Easwaran's commentary interprets the Gita in terms of the problems of daily modern life. Other modern writers such as Swami Parthasarathy and Sādhu Vāsvāni have published their own commentaries. Academic commentaries include those by Jeaneane Fowler, Ithamar Theodor, and Robert Zaehner. A collection of Christian commentaries on the Gita has been edited by Catherine Cornille, comparing and contrasting a wide range of views on the text by theologians and religion scholars.
The book The Teachings of Bhagavad Gita: Timeless Wisdom for the Modern Age by Richa Tilokani offers a woman's perspective on the teachings of the Bhagavad Gita in a simplified and reader-friendly spiritual format. Swami Dayananda Saraswati published a four-volume Bhagavad Gītā, Home Study Course in 1998 based on transcripts from his teaching and commentary of the Bhagavad Gītā in the classroom. This was later published in 2011 in a new edition and nine-volume format. Galyna Kogut and Rahul Singh published An Atheist Gets the Gita, a 21st-century interpretation of the 5,000-year-old text. Reception Narendra Modi, the 14th prime minister of India, called the Bhagavad Gita "India's biggest gift to the world". Modi gave a copy of it to the then President of the United States, Barack Obama, in 2014 during his U.S. visit. With its translation and study by Western scholars beginning in the early 18th century, the Bhagavad Gita gained a growing appreciation and popularity. According to the Indian historian and writer Khushwant Singh, Rudyard Kipling's famous poem "If—" is "the essence of the message of The Gita in English." Praise and popularity The Bhagavad Gita has been highly praised, not only by prominent Indians including Mahatma Gandhi and Sarvepalli Radhakrishnan, but also by Aldous Huxley, Henry David Thoreau, J. Robert Oppenheimer, Ralph Waldo Emerson, Carl Jung, Hermann Hesse, and Bülent Ecevit. At a time when Indian nationalists were seeking an indigenous basis for social and political action against colonial rule, the Bhagavad Gita provided them with a rationale for their activism and fight against injustice. Bal Gangadhar Tilak and Mahatma Gandhi used the text to help inspire the Indian independence movement. Mahatma Gandhi expressed his love for the Gita in these words: Jawaharlal Nehru, the first Prime Minister of independent India, commented on the Gita: A. P. J. Abdul Kalam, 11th President of India, despite being a Muslim, used to read the Bhagavad Gita and recite mantras. J. Robert Oppenheimer, American physicist and director of the Manhattan Project, learned Sanskrit in 1933 and read the Bhagavad Gita in the original form, citing it later as one of the most influential books to shape his philosophy of life. Oppenheimer later recalled that, while witnessing the explosion of the Trinity nuclear test, he thought of verses from the Bhagavad Gita (XI,12): Years later he would explain that another verse had also entered his head at that time: Ralph Waldo Emerson remarked the following after his first study of the Gita, and thereafter frequently quoted the text in his journals and letters, particularly the "work with inner renunciation" idea in his writings on man's quest for spiritual energy: The world's largest Bhagavad Gita, the largest sacred book of any religion in the world, is in the ISKCON Temple Delhi. It weighs 800 kg and measures over 2.8 metres. It was unveiled by Narendra Modi, the Prime Minister of India, on 26 February 2019. On 27 February 2021, the Bhagavad Gita was launched into outer space on an SD card, aboard a PSLV-C51 rocket launched by the Indian Space Research Organisation (ISRO) from the Satish Dhawan Space Centre in Sriharikota. Criticisms and apologetics War with self The Gita presents its teaching in the context of a war where the warrior Arjuna is in inner crisis about whether he should renounce and abandon the battlefield, or fight and kill. He is advised by Krishna to do his sva-dharma, a term that has been variously interpreted.
According to the Indologist Paul Hacker, the contextual meaning in the Gita is the "dharma of a particular varna". Neo-Hindus such as Bankim Chandra Chatterjee, states Hacker, have preferred to not translate it in those terms, or "dharma" as religion, but leave the Gita's message as "everyone must follow his sva-dharma". According to Chatterjee, the Hindus already understand the meaning of that term. To render it in English for non-Hindus for their better understanding, one must ask: what is the sva-dharma for non-Hindus? The Lord, states Chatterjee, created millions and millions of people, and he did not ordain dharma only for Indians [Hindus] and "make all the others dharma-less", for "are not the non-Hindus also his children"? According to Chatterjee, Krishna's religion in the Gita is "not so narrow-minded". This argument, states Hacker, is an attempt to "universalize Hinduism". The Gita has been cited and criticized as a Hindu text that supports varna-dharma and the caste system. B. R. Ambedkar, born in a Dalit family and the principal architect of the Constitution of India, criticized the text for its stance on caste and for "defending certain dogmas of religion on philosophical grounds". According to Jimmy Klausen, Ambedkar in his essay Krishna and his Gita stated that the Gita was a "tool" of Brahmanical Hinduism and of its latter-day saints such as Mahatma Gandhi and Lokmanya Tilak. To Ambedkar, states Klausen, it is a text of "mostly barbaric, religious particularisms" offering "a defence of the kshatriya duty to make war and kill, the assertion that varna derives from birth rather than worth or aptitude, and the injunction to perform karma" neither perfunctorily nor egotistically. Similar criticism of the Gita has been published by the Marxist historian Damodar Dharmananda Kosambi. Nadkarni and Zelliot present the opposite view, citing early Bhakti saints of the Krishna-tradition such as the 13th-century Dnyaneshwar. According to Dnyaneshwar, the Gita starts off with the discussion of sva-dharma in Arjuna's context but ultimately shows that caste differences are not important. For Dnyaneshwar, people err when they see themselves distinct from each other and Krishna, and these distinctions vanish as soon as they accept, understand and enter with love unto Krishna. According to Swami Vivekananda, sva-dharma in the Gita does not mean "caste duty"; rather, it means the duty that comes with one's life situation (mother, father, husband, wife) or profession (soldier, judge, teacher, doctor). For Vivekananda, the Gita was an egalitarian scripture that rejected caste and other hierarchies because of verses such as 13.27–28, which state "He who sees the Supreme Lord dwelling equally in all beings, the Imperishable in things that perish, he sees verily. For seeing the Lord as the same everywhere present, he does not destroy the Self by the Self, and thus he goes to the highest goal." Aurobindo modernises the concept of dharma and svabhava by internalising it, away from the social order and its duties towards one's personal capacities, which leads to a radical individualism, "finding the fulfilment of the purpose of existence in the individual alone." He deduced from the Gita the doctrine that "the functions of a man ought to be determined by his natural turn, gift, and capacities", that the individual should "develop freely" and thereby would be best able to serve society. Gandhi's view differed from Aurobindo's.
He recognised in the concept of sva-dharma his idea of svadeshi (sometimes spelled swadeshi), the idea that "man owes his service above all to those who are nearest to him by birth and situation." To him, svadeshi was "sva-dharma applied to one's immediate environment." According to Jacqueline Hirst, the universalist neo-Hindu interpretations of dharma in the Gita are a form of modernism, though any study of pre-modern distant foreign cultures is inherently subject to suspicions about "control of knowledge" and bias on the various sides. Hindus have their own understanding of dharma that goes much beyond the Gita or any particular Hindu text. Further, states Hirst, the Gita should be seen as a "unitary text" in its entirety rather than a particular verse analyzed separately or out of context. Krishna is presented as a teacher who "drives Arjuna and the reader beyond initial preconceptions". The Gita is a cohesively knit pedagogic text, not a list of norms. Modern Hinduism Novel interpretations of the Gita, along with apologetics on it, have been a part of the modern-era revisionism and renewal movements within Hinduism. Bankim Chandra Chatterji, the author of Vande Mataram – the national song of India, challenged orientalist literature on Hinduism and offered his interpretations of the Gita, states Ajit Ray. Bal Gangadhar Tilak interpreted the karma yoga teachings in the Gita as a "doctrine of liberation" taught by Hinduism, while Sarvepalli Radhakrishnan stated that the Bhagavad Gita teaches a universalist religion and the "essence of Hinduism" along with the "essence of all religions", rather than a private religion. Vivekananda's works contained numerous references to the Gita, such as his lectures on the four yogas – Bhakti, Jnana, Karma, and Raja. Through the message of the Gita, Vivekananda sought to energise the people of India to reclaim their dormant but strong identity. Aurobindo saw the Bhagavad Gita as a "scripture of the future religion" and suggested that Hinduism had acquired a much wider relevance through the Gita. Sivananda called the Bhagavad Gita "the most precious jewel of Hindu literature" and suggested its introduction into the curriculum of Indian schools and colleges. According to Ronald Neufeldt, it was the Theosophical Society that dedicated much attention and energy to the allegorical interpretation of the Gita, along with religious texts from around the world, after 1885, following the writings of H. P. Blavatsky, Subba Rao and Annie Besant. Their attempt was to present their "universalist religion". These late 19th-century theosophical writings called the Gita a "path of true spirituality" and "teaching nothing more than the basis of every system of philosophy and scientific endeavor", triumphing over other "Samkhya paths" of Hinduism that "have degenerated into superstition and demoralized India by leading people away from practical action". Political violence In the Gita, Krishna persuades Arjuna to wage war where the enemy includes some of his own relatives and friends. In light of the Ahimsa (non-violence) teachings in Hindu scriptures, the Gita has been criticized as violating the Ahimsa value, or alternatively, as supporting political violence. The justification of political violence when peaceful protests and all else fail, states Varma, has been a "fairly common feature of modern Indian political thought", along with the "mighty antithesis of Gandhian thought on non-violence".
During the independence movement in India, Hindus considered the active "burning and drowning of British goods", while technically illegal under colonial legislation, to be a moral and just war for the sake of liberty and the righteous values of the type the Gita discusses. According to Paul Schaffel, the influential Hindu nationalist V.D. Savarkar "often turned to Hindu scripture such as the Bhagavad Gita, arguing that the text justified violence against those who would harm Mother India." Mahatma Gandhi credited his commitment to ahimsa to the Gita. For Gandhi, the Gita teaches that people should fight for justice and righteous values, and that they should never meekly suffer injustice to avoid a war. According to the Indologist Ananya Vajpeyi, the Gita does not elaborate on the means or stages of war, nor on ahimsa, except for stating that "ahimsa is virtuous and characterizes an awakened, steadfast, ethical man" in verses such as 13.7–10 and 16.1–5. For Gandhi, states Vajpeyi, ahimsa is the "relationship between self and other" while he and his fellow Indians battled against colonial rule. Gandhian ahimsa is in fact "the essence of the entire Gita", according to Vajpeyi. The teachings of the Gita on ahimsa are ambiguous, states Arvind Sharma, and this is best exemplified by the fact that Nathuram Godse cited the Gita as his inspiration to do his dharma after he assassinated Mahatma Gandhi. Thomas Merton, the Trappist monk and author of books on Zen Buddhism, concurs with Gandhi and states that the Gita is not teaching violence nor propounding a "make war" ideology. Instead, it is teaching peace and discussing one's duty to examine what is right and then act with pure intentions when one faces difficult and repugnant choices. Adaptations Philip Glass retold the story of Gandhi's early development as an activist in South Africa through the text of the Gita in the opera Satyagraha (1979). The entire libretto of the opera consists of sayings from the Gita sung in the original Sanskrit. In Douglas Cuomo's Arjuna's dilemma, the philosophical dilemma faced by Arjuna is dramatised in operatic form with a blend of Indian and Western music styles. The 1993 Sanskrit film Bhagavad Gita, directed by G. V. Iyer, won the 1993 National Film Award for Best Film. The 1995 novel by Steven Pressfield, and its adaptation as the 2000 golf movie The Legend of Bagger Vance by Robert Redford, have parallels to the Bhagavad Gita, according to Steven J. Rosen. Steven Pressfield acknowledges that the Gita was his inspiration; the golfer character in his novel is Arjuna, the caddie is Krishna, states Rosen. The movie, however, uses the plot but glosses over the teachings, unlike the novel. See also Ashtavakra Gita Avadhuta Gita Bhagavata Purana The Ganesha Gita Puranas Self-consciousness (Vedanta) Uddhava Gita Vedas Prasthanatrayi Vyadha Gita Notes References Citations Sources Printed sources Palshikar, Sanjay. Evil and the Philosophy of Retribution: Modern Commentaries on the Bhagavad-Gita (Routledge, 2015) Online sources External links Bhagavad Gita Shloka in Sanskrit Bhagvat Geeta – Dialogues of Kreeshna and Arjoon by Charles Wilkins An audio podcast summary for the Bhagavad Gita is available from the Book Matrix Bhagavad-Gita Bhagavad Gita article in the Internet Encyclopedia of Philosophy Ancient yoga texts Dialogues Gaudiya Vaishnavism Hindu philosophy Hindu texts Krishna Kurukshetra Mahabharata Religious texts Sanskrit texts Vaishnava texts Works of unknown authorship
24272165
https://en.wikipedia.org/wiki/VP/MS
VP/MS
VP/MS (Visual Product Modeling System) is a family of software components developed by CSC that support product development and product lifecycle management. Insurance companies (among other users in business and IT) use VP/MS to manage the rules, clauses, formulas and calculations associated with savings and both life and non-life insurance products. With VP/MS all calculations and queries for purposes such as quotes and administration are supported by a central repository of product definitions. VP/MS supports processes like product definition and administration, product testing and documentation, design checks, visualization and cross-platform usage of products. In addition to hosting product definitions, VP/MS is a modeling language. It provides a graphical interface (GUI) for creating business rules as components and models. VP/MS is platform independent – products can be ported to any administration or illustration system or deployed over the Internet – and makes use of the Eclipse platform for developing software. Product server VP/MS is a product server – a software tool that hosts all knowledge on insurance and other products centrally and provides it to application systems in various deployment scenarios and across various platforms. The outcome of a VP/MS-designed model is modular, portable calculation rules. VP/MS-rendered calculation rules are in turn incorporated with associated applications, such as VP/MS Designer or J-VP/MS, to create (respectively) GUIs or a calculations architecture compatible with existing software architecture. Systems used by insurance policy administrators, product brokers and web servers ultimately rely on libraries of VP/MS-rendered architecture for the production of product illustrations and calculations. VP/MS users Industries VP/MS is industry-neutral – it is a generic tool that is not designed to be used exclusively within a specific industry. VP/MS has been deployed within non-insurance applications as a general rules engine. However, since it was developed within an insurance context, VP/MS is applied broadly and extensively in insurance. Among users of VP/MS are life insurers and providers of pensions, property and casualty insurers, and health insurers. Other sectors where VP/MS’s underlying rules management capabilities are applied include banking, energy and utilities. Design VP/MS Workbench is the main environment for modeling in VP/MS. As such, it is used extensively in the back office during the design or maintenance of product rules. Actuaries, financial modelers, business analysts, product specialists and programmers are among those using VP/MS during this phase. A number of supplementary components support this phase of the product lifecycle. Examples are VP/MS Documentation Suite, VP/MS Test Suite and VP/MS Checker. Product managers use VP/MS Model Manager for an overview, product release and versioning, team collaboration and access control. VP/MS Runtime is then responsible for sharing a single instance of a product across various platforms. Implementation After the design phase, VP/MS-hosted architecture is supplied to the organization via related applications. For example, IT specialists use VP/MS Designer and J-VP/MS to integrate model libraries with end-user applications. The product server is sold as part of other applications under different names.
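As a rough, purely illustrative sketch of the product-server pattern described above (this is not the actual VP/MS API, which is proprietary; every class, method, product name and formula below is hypothetical), the idea is that application systems ask one central repository of product definitions to run a calculation instead of hard-coding the formula in each system:

class ProductServer:
    # Stand-in for a central repository of product definitions: the rules
    # live in one place and every consuming application calls the same rule.
    def __init__(self):
        self._products = {
            "TERM_LIFE_01": lambda inputs: round(
                inputs["sum_insured"] * 0.0012
                * (1 + 0.03 * max(inputs["age"] - 30, 0)), 2)
        }

    def calculate(self, product_id, inputs):
        return self._products[product_id](inputs)

# A quotation or administration system would then query the same definition:
server = ProductServer()
premium = server.calculate("TERM_LIFE_01", {"age": 42, "sum_insured": 100000})
print(premium)    # 163.2 -- the formula and figures are made up for illustration

The point of the pattern is that changing a product rule changes it at once for every system that consumes it, which is the central claim made for a product server.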
Multi-platform capabilities J-VP/MS integrates VP/MS calculation rules into existing software architectures via standard interfaces and technologies such as Java EE, XML-based SOAP, WSDL and Struts. Summarized history of VP/MS 1995 – VP/MS was designed by a team of M+I Unternehmensberatung GmbH (an Austrian consulting company, founded by Gerhard Friedrich and partners) with Interunfall Versicherung AG, a subsidiary insurance company of Generali Austria, as the first client. Software development was done by CAF GmbH (a German software development company). It was originally named Versicherungsprodukt-Modellierungssystem (which translates to "insurance product modeling system"). 1996 – M+I and CAF start to sell VP/MS to insurance companies in Germany, Austria and Switzerland. 1998 – Development of VP/MS Designer by CAF. 1999 – PMS Micado (the German subsidiary of the US-based Policy Management Systems Corporation) takes over CAF and obtains a worldwide general license to develop and sell VP/MS from M+I and Generali. 2001 – CSC takes over ownership of Policy Management Systems Corporation and all subsidiaries. Introduction of J-VP/MS and integration of VP/MS into CSC offerings. 2003 – Development of Eclipse-based VP/MS Model Manager. 2005 – Development of Eclipse-based VP/MS Test Suite. 2006 – Development of Eclipse-based VP/MS Documentation Suite. 2008 – Development of Eclipse-based Workbench and VP/MS Checker. 2017 – CSC becomes part of DXC Technology. As of 2009, there were over 140 companies in 24 countries using VP/MS, e.g. Axa, Generali and Uniqa. A market research source listed the following VP/MS users in the USA in 2008: American National, New York Life, Ohio National and Symetra. See also ACORD Custom software Financial data vendors Financial modeling Financial services Modeling language Records management Risk management Software as a service Software developer Notes External links Computer-related introductions in 1997 Component-based software engineering Business software
70097
https://en.wikipedia.org/wiki/Compiler-compiler
Compiler-compiler
In computer science, a compiler-compiler or compiler generator is a programming tool that creates a parser, interpreter, or compiler from some form of formal description of a programming language and machine. The most common type of compiler-compiler is more precisely called a parser generator; it handles only syntactic analysis. The input of a parser generator is a grammar file, typically written in Backus–Naur form (BNF) or extended Backus–Naur form (EBNF), that defines the syntax of a target programming language. The output is the source code of a parser for the programming language. Compiling that parser source code yields a parser, which may be either standalone or embedded. The parser takes as input source code written in the target programming language and performs some action or outputs an abstract syntax tree (AST). Parser generators do not handle the semantics of the AST, or the generation of machine code for the target machine. A metacompiler is a software development tool used mainly in the construction of compilers, translators, and interpreters for other programming languages. The input to a metacompiler is a computer program written in a specialized programming metalanguage designed mainly for the purpose of constructing compilers. The language of the compiler produced is called the object language. The minimal input producing a compiler is a metaprogram specifying the object language grammar and semantic transformations into an object program. Variants A typical parser generator associates executable code with each of the rules of the grammar that should be executed when these rules are applied by the parser. These pieces of code are sometimes referred to as semantic action routines since they define the semantics of the syntactic structure that is analyzed by the parser. Depending upon the type of parser that should be generated, these routines may construct a parse tree (or abstract syntax tree), or generate executable code directly. One of the earliest (1964) and surprisingly powerful versions of compiler-compilers is META II, which accepted an analytical grammar with output facilities that produce stack machine code, and was able to compile its own source code and other languages. Among the earliest programs of the original Unix versions being built at Bell Labs was the two-part lex and yacc system, which was normally used to output C programming language code, but had a flexible output system that could be used for everything from programming languages to text file conversion. Their modern GNU versions are flex and bison. Some experimental compiler-compilers take as input a formal description of programming language semantics, typically using denotational semantics. This approach is often called 'semantics-based compiling', and was pioneered by Peter Mosses' Semantic Implementation System (SIS) in 1978. However, both the generated compiler and the code it produced were inefficient in time and space. No production compilers are currently built in this way, but research continues. The Production Quality Compiler-Compiler (PQCC) project at Carnegie Mellon University does not formalize semantics, but does have a semi-formal framework for machine description. Compiler-compilers exist in many flavors, including bottom-up rewrite machine generators (see JBurg) used to tile syntax trees according to a rewrite grammar for code generation, and attribute grammar parser generators (e.g.
ANTLR can be used for simultaneous type checking, constant propagation, and more during the parsing stage). Metacompilers Metacompilers reduce the task of writing compilers by automating the aspects that are the same regardless of the object language. This makes possible the design of domain-specific languages which are appropriate to the specification of a particular problem. A metacompiler reduces the cost of producing translators for such domain-specific object languages to a point where it becomes economically feasible to include in the solution of a problem a domain-specific language design. As a metacompiler's metalanguage will usually be a powerful string and symbol processing language, metacompilers often have strong general-purpose applications, including generating a wide range of other software engineering and analysis tools. Besides being useful for domain-specific language development, a metacompiler is a prime example of a domain-specific language, designed for the domain of compiler writing. A metacompiler is a metaprogram usually written in its own metalanguage or an existing computer programming language. The process of a metacompiler, written in its own metalanguage, compiling itself is equivalent to a self-hosting compiler. Most common compilers written today are self-hosting compilers. Self-hosting is a powerful tool of many metacompilers, allowing the easy extension of their own metaprogramming metalanguage. The feature that sets a metacompiler apart from other compiler-compilers is that it takes as input a specialized metaprogramming language that describes all aspects of the compiler's operation. A metaprogram produced by a metacompiler is as complete a program as a program written in C++, BASIC or any other general programming language. The metaprogramming metalanguage is a powerful attribute allowing easier development of computer programming languages and other computer tools. Command line processors and text string transformation and analysis tools are easily coded using the metaprogramming metalanguages of metacompilers. A full-featured development package includes a linker and a run-time support library. Usually, a machine-oriented system programming language, such as C or C++, is needed to write the support library. A library consisting of support functions needed for the compiling process usually completes the full metacompiler package. The meaning of metacompiler In computer science, the prefix meta is commonly used to mean about (its own category). For example, metadata are data that describe other data. A language that is used to describe other languages is a metalanguage. Meta may also mean on a higher level of abstraction. A metalanguage operates on a higher level of abstraction in order to describe properties of a language. Backus–Naur form (BNF) is a formal metalanguage originally used to define ALGOL 60. BNF is a weak metalanguage, for it describes only the syntax and says nothing about the semantics or meaning. Metaprogramming is the writing of computer programs with the ability to treat programs as their data. A metacompiler takes as input a metaprogram written in a specialized metalanguage (a higher level of abstraction) specifically designed for the purpose of metaprogramming. The output is an executable object program. An analogy can be drawn: just as a C++ compiler takes as input a C++ programming language program, a metacompiler takes as input a metaprogramming metalanguage program.
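To make the rule-as-test-function idea concrete, the following is a minimal hand-written sketch in Python of the kind of parser a generator might emit from a two-rule grammar (expr ::= term { ("+" | "-") term }, term ::= NUMBER). It is illustrative only and not the output of any particular tool: each grammar rule becomes a function that reports success or failure, and the "semantic action" simply appends stack-machine style instructions to a list, in the same spirit as META II's output facilities and the Schorre syntax equations described later.

import re

class Parser:
    # Each grammar rule is a test function: it either consumes input and
    # returns True, or leaves the position unchanged and returns False.
    def __init__(self, text):
        self.tokens = re.findall(r"\d+|[+\-]", text)
        self.pos = 0
        self.output = []                      # collected "semantic actions"

    def match(self, pred):
        # Consume and return the next token if pred accepts it, else None.
        if self.pos < len(self.tokens) and pred(self.tokens[self.pos]):
            tok = self.tokens[self.pos]
            self.pos += 1
            return tok
        return None

    def term(self):                           # term ::= NUMBER
        tok = self.match(str.isdigit)
        if tok is None:
            return False
        self.output.append("PUSH " + tok)     # semantic action routine
        return True

    def expr(self):                           # expr ::= term { ("+" | "-") term }
        if not self.term():
            return False
        while True:
            op = self.match(lambda t: t in "+-")
            if op is None:
                return True                   # zero or more repetitions succeed
            if not self.term():
                return False
            self.output.append("ADD" if op == "+" else "SUB")

p = Parser("1 + 2 - 3")
print(p.expr())     # True
print(p.output)     # ['PUSH 1', 'PUSH 2', 'ADD', 'PUSH 3', 'SUB']

A real parser generator would produce code of roughly this shape automatically from the grammar file, and would typically let the grammar author attach richer semantic action routines than a simple instruction list.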
Forth metacompiler Many advocates of the language Forth call the process of creating a new implementation of Forth meta-compilation and hold that it constitutes a metacompiler. The Forth definition of metacompiler is: "A metacompiler is a compiler which processes its own source code, resulting in an executable version of itself." This Forth use of the term metacompiler is disputed in mainstream computer science. See Forth (programming language) and History of compiler construction. The actual Forth process of compiling itself is a combination of Forth being a self-hosting extensible programming language and sometimes cross compilation, long-established terminology in computer science. Metacompilers are a general compiler writing system, whereas the Forth metacompiler concept is indistinguishable from a self-hosting, extensible language. The actual process acts at a lower level, defining a minimum subset of Forth words that can be used to define additional Forth words; a full Forth implementation can then be defined from the base set. This sounds like a bootstrap process. The problem is that almost every general purpose language compiler also fits the Forth metacompiler description. When (self-hosting compiler) X processes its own source code, resulting in an executable version of itself, X is a metacompiler. Just replace X with any common language: C, C++, Pascal, COBOL, Fortran, Ada, Modula-2, etc., and X would be a metacompiler according to the Forth usage of the term. A metacompiler operates at an abstraction level above the compiler it compiles. It only operates at the same (self-hosting compiler) level when compiling itself. One has to see the problem with this definition of metacompiler: it can be applied to almost any language. However, on examining the concept of programming in Forth, adding new words to the dictionary, extending the language in this way, is metaprogramming. It is this metaprogramming in Forth that makes it a metacompiler. Programming in Forth is adding new words to the language. Changing the language in this way is metaprogramming. Forth is a metacompiler because Forth is a language specifically designed for metaprogramming. Programming in Forth is extending Forth; adding words to the Forth vocabulary creates a new Forth dialect. Forth is a specialized metacompiler for Forth language dialects. History The first compiler-compiler to use that name was written by Tony Brooker in 1960 and was used to create compilers for the Atlas computer at the University of Manchester, including the Atlas Autocode compiler. The early history of metacompilers is closely tied with the history of SIG/PLAN Working group 1 on Syntax Driven Compilers. The group was started primarily through the effort of Howard Metcalfe in the Los Angeles area. In the fall of 1962 Howard Metcalfe designed two compiler-writing interpreters. One used a bottom-to-top analysis technique based on a method described by Ledley and Wilson. The other used a top-to-bottom approach based on a work by Glennie to generate random English sentences from a context-free grammar. At the same time, Val Schorre described two "meta machines", one generative and one analytic. The generative machine was implemented and produced random algebraic expressions. Meta I, the first metacompiler, was implemented by Schorre on an IBM 1401 at UCLA in January 1963. His original interpreters and metamachines were written directly in a pseudo-machine language.
META II, however, was written in a higher-level metalanguage able to describe its own compilation into the pseudo-machine language. Lee Schmidt at Bolt, Beranek, and Newman wrote a metacompiler in March 1963 that utilized a CRT display on the time-sharing PDP-1. This compiler produced actual machine code rather than interpretive code and was partially bootstrapped from Meta I. Schorre bootstrapped Meta II from Meta I during the Spring of 1963. The paper on the refined metacompiler system presented at the 1964 Philadelphia ACM conference is the first paper on a metacompiler available as a general reference. The syntax and implementation technique of Schorre's system laid the foundation for most of the systems that followed. The system was implemented on a small 1401, and was used to implement a small ALGOL-like language. Many similar systems immediately followed. Roger Rutman of AC Delco developed and implemented LOGIK, a language for logical design simulation, on the IBM 7090 in January 1964. This compiler used an algorithm that produced efficient code for Boolean expressions. Another paper in the 1964 ACM proceedings describes Meta III, developed by Schneider and Johnson at UCLA for the IBM 7090. Meta III represents an attempt to produce efficient machine code for a large class of languages. Meta III was implemented completely in assembly language. Two compilers were written in Meta III: CODOL, a compiler-writing demonstration compiler, and PUREGOL, a dialect of ALGOL 60. (It was pure gall to call it ALGOL.) Late in 1964, Lee Schmidt bootstrapped the metacompiler EQGEN from the PDP-1 to the Beckman 420. EQGEN was a logic equation generating language. In 1964, System Development Corporation began a major effort in the development of metacompilers. This effort included the powerful metacompilers Book1 and Book2, written in Lisp, which had extensive tree-searching and backup ability. An outgrowth of one of the Q-32 systems at SDC is Meta 5. The Meta 5 system incorporates backup of the input stream and enough other facilities to parse any context-sensitive language. This system was successfully released to a wide number of users and had many string-manipulation applications other than compiling. It has many elaborate push-down stacks, attribute setting and testing facilities, and output mechanisms. That Meta 5 successfully translates JOVIAL programs to PL/I programs demonstrates its power and flexibility. Robert McClure at Texas Instruments invented a compiler-compiler called TMG (presented in 1965). TMG was used to create early compilers for programming languages like B, PL/I and ALTRAN. Together with the metacompiler of Val Schorre, it was an early inspiration for the last chapter of Donald Knuth's The Art of Computer Programming. The LOT system was developed during 1966 at Stanford Research Institute and was modeled very closely after Meta II. It had new special-purpose constructs allowing it to generate a compiler which could, in turn, compile a subset of PL/I. This system had extensive statistic-gathering facilities and was used to study the characteristics of top-down analysis. SIMPLE is a specialized translator system designed to aid the writing of pre-processors for PL/I. SIMPLE, written in PL/I, is composed of three components: an executive, a syntax analyzer and a semantic constructor. The TREE-META compiler was developed at Stanford Research Institute in Menlo Park, California, in April 1968. The early metacompiler history is well documented in the TREE-META manual.
TREE-META paralleled some of the SDC developments. Unlike earlier metacompilers, it separated the semantics processing from the syntax processing. The syntax rules contained tree building operations that combined recognized language elements with tree nodes. The tree structure representation of the input was then processed by a simple form of unparse rules. The unparse rules used node recognition and attribute testing that, when matched, resulted in the associated action being performed. In addition, like tree elements could also be tested in an unparse rule. Unparse rules were also a recursive language, being able to call unparse rules, passing elements of the tree, before the action of the unparse rule was performed. The concept of the metamachine originally put forth by Glennie is so simple that three hardware versions have been designed and one actually implemented, the latter at Washington University in St. Louis. This machine was built from macro-modular components and has for instructions the codes described by Schorre. CWIC (Compiler for Writing and Implementing Compilers) is the last known Schorre metacompiler. It was developed at Systems Development Corporation by Erwin Book, Dewey Val Schorre and Steven J. Sherman. With the full power of LISP 2, a list processing language, optimizing algorithms could operate on syntax-generated lists and trees before code generation. CWIC also had a symbol table built into the language. With the resurgence of domain-specific languages and the need for parser generators which are easy to use, easy to understand, and easy to maintain, metacompilers are becoming a valuable tool for advanced software engineering projects. Other examples of parser generators in the yacc vein are ANTLR, Coco/R, CUP, GNU Bison, Eli, FSL, SableCC, SID (Syntax Improving Device), and JavaCC. While useful, pure parser generators only address the parsing part of the problem of building a compiler. Tools with broader scope, such as PQCC, Coco/R and the DMS Software Reengineering Toolkit, provide considerable support for more difficult post-parsing activities such as semantic analysis, code optimization and generation. Schorre metalanguages The earliest Schorre metacompilers, META I and META II, were developed by D. Val Schorre at UCLA. Other Schorre-based metacompilers followed, each adding improvements to language analysis and/or code generation. In programming it is common to use the programming language name to refer to both the compiler and the programming language, the context distinguishing the meaning. A C++ program is compiled using a C++ compiler. That also applies in the following. For example, META II is both the compiler and the language. The metalanguages in the Schorre line of metacompilers are functional programming languages that use top-down grammar-analyzing syntax equations with embedded output transformation constructs. A syntax equation: <name> = <body>; is a compiled test function returning success or failure. <name> is the function name. <body> is a form of logical expression consisting of tests that may be grouped, have alternates, and output productions. A test is like a bool in other languages, success being true and failure being false. Defining a programming language analytically top down is natural. For example, a program could be defined as: program = $declaration; This defines a program as a sequence of zero or more declaration(s). In the Schorre META X languages there is a driving rule. The program rule above is an example of a driving rule.
The program rule is a test function that calls declaration, a test rule that returns success or failure. The $ loop operator repeatedly calls declaration until failure is returned. The $ operator is always successful, even when there are zero declarations. The program rule above would therefore always return success. (In CWIC a long fail can bypass declaration; a long-fail is part of the backtracking system of CWIC.) The character sets of these early compilers were limited. The character / was used for the alternant (or) operator. "A or B" is written as A / B. Parentheses ( ) are used for grouping. A (B / C) describes a construct of A followed by B or C. As a boolean expression it would be A and (B or C). A sequence X Y has an implied X and Y meaning. ( ) are grouping and / is the or operator. The order of evaluation is always left to right, as an input character sequence is being specified by the ordering of the tests. Special operator words whose first character is a "." are used for clarity. .EMPTY is used as the last alternate when no previous alternant need be present. X (A / B / .EMPTY) indicates that X is optionally followed by A or B. This is a specific characteristic of these metalanguages being programming languages: backtracking is avoided by the above. Other compiler constructor systems may have declared the three possible sequences and left it up to the parser to figure it out. The characteristics of the metaprogramming metalanguages above are common to all Schorre metacompilers and those derived from them. META I META I was a hand-compiled metacompiler used to compile META II. Little else is known of META I except that the initial compilation of META II produced nearly identical code to that of the hand-coded META I compiler. META II Each rule consists optionally of tests, operators, and output productions. A rule attempts to match some part of the input program source character stream, returning success or failure. On success the input is advanced over matched characters. On failure the input is not advanced. Output productions produced a form of assembly code directly from a syntax rule. TREE-META TREE-META introduced tree building operators :<node_name> and [<number>], moving the output production transforms to unparse rules. The tree building operators were used in the grammar rules, directly transforming the input into an abstract syntax tree. Unparse rules are also test functions that match tree patterns. Unparse rules are called from a grammar rule when an abstract syntax tree is to be transformed into output code. The building of an abstract syntax tree and unparse rules allowed local optimizations to be performed by analyzing the parse tree. Moving the output productions to the unparse rules made a clear separation of grammar analysis and code production. This made the programming easier to read and understand. CWIC In 1968–1970, Erwin Book, Dewey Val Schorre, and Steven J. Sherman developed CWIC (Compiler for Writing and Implementing Compilers) at System Development Corporation (Charles Babbage Institute Center for the History of Information Technology, Box 12, folder 21). CWIC is a compiler development system composed of three special-purpose, domain-specific languages, each intended to permit the description of certain aspects of translation in a straightforward manner. The syntax language is used to describe the recognition of source text and the construction from it of an intermediate tree structure.
The generator language is used to describe the transformation of the tree into the appropriate object language. The syntax language follows Dewey Val Schorre's previous line of metacompilers. It most resembles TREE-META, having tree building operators in the syntax language. The unparse rules of TREE-META are extended to work with the object-based generator language based on LISP 2. CWIC includes three languages: Syntax: Transforms the source program input into list structures using grammar transformation formulas. A parsed expression structure is passed to a generator by placement of a generator call in a rule. A tree is represented by a list whose first element is a node object. The language has operators, < and >, specifically for making lists. The colon : operator is used to create node objects; :ADD creates an ADD node. The exclamation ! operator combines a number of parsed entries with a node to make a tree. Trees created by syntax rules are passed to generator functions, returning success or failure. The syntax language is very close to TREE-META; both use a colon to create a node. CWIC's tree building exclamation !<number> functions the same as TREE-META's [<number>]. Generator: a named series of transforming rules, each consisting of an unparse (pattern-matching) rule and an output production written in a LISP 2-like language. The translation was to IBM 360 binary machine code. Other facilities of the generator language generalized output. MOL-360: an independent mid-level implementation language for the IBM System/360 family of computers developed in 1968 and used for writing the underlying support library. Generators language The generators language had semantics similar to Lisp. The parse tree was thought of as a recursive list. The general form of a Generator Language function is: function-name(first-unparse_rule) => first-production_code_generator (second-unparse_rule) => second-production_code_generator (third-unparse_rule) => third-production_code_generator ... The code to process a given tree included the features of a general purpose programming language, plus a form: <stuff>, which would emit (stuff) onto the output file. A generator call may be used in the unparse_rule. The generator is passed the element of the unparse_rule pattern in which it is placed and its return values are listed in (). For example: expr_gen(ADD[expr_gen(x),expr_gen(y)]) => <AR + (x*16)+y;> releasereg(y); return x; (SUB[expr_gen(x),expr_gen(y)])=> <SR + (x*16)+y;> releasereg(y); return x; (MUL[expr_gen(x),expr_gen(y)])=> . . . (x)=> r1 = getreg(); load(r1, x); return r1; ... That is, if the parse tree looks like (ADD[<something1>,<something2>]), expr_gen(x) would be called with <something1> and return x. A variable in the unparse rule is a local variable that can be used in the production_code_generator. expr_gen(y) is called with <something2> and returns y. A generator call in an unparse rule is passed the element in the position it occupies. Hopefully, in the above, x and y will be registers on return. The last transform is intended to load an atom into a register and return the register. The first production would be used to generate the 360 "AR" (Add Register) instruction with the appropriate values in general registers. The above example is only a part of a generator. Every generator expression evaluates to a value that can then be further processed.
The last transform could just as well have been written as: (x)=> return load(getreg(), x); In this case load returns its first parameter, the register returned by getreg(). The functions load and getreg are other CWIC generators. CWIC addressed domain-specific languages before the term domain-specific language existed. From the authors of CWIC: "A metacompiler assists the task of compiler-building by automating its non creative aspects, those aspects that are the same regardless of the language which the produced compiler is to translate. This makes possible the design of languages which are appropriate to the specification of a particular problem. It reduces the cost of producing processors for such languages to a point where it becomes economically feasible to begin the solution of a problem with language design." Examples ANTLR GNU Bison Coco/R, Coco-2 DMS Software Reengineering Toolkit, a program transformation system with parser generators Epsilon Grammar Studio Lemon parser generator LRStar: LR(*) parser generator META II parboiled, a Java library for building parsers. Packrat parser PackCC, a packrat parser with left recursion support. PQCC, a compiler-compiler that is more than a parser generator. Syntax Improving Device (SID) SYNTAX, an integrated toolset for compiler construction. TREE-META Yacc Xtext XPL JavaCC See also Parsing expression grammar LL parser LR parser Simple LR parser LALR parser GLR parser Domain analysis Domain-specific language History of compiler construction History of compiler construction#Self-hosting compilers Metacompilation Program transformation References and notes Further reading External links Computer50.org, Brooker Autocodes Catalog.compilertools.net, The Catalog of Compiler Construction Tools Labraj.uni-mb.si, Lisa Skenz.it, Jflex and Cup resources Parsing Compiler construction Metaprogramming Pattern matching programming languages Program transformation tools Extensible syntax programming languages Domain-specific programming languages Program analysis Software design Compiler theory
251799
https://en.wikipedia.org/wiki/Machine%20embroidery
Machine embroidery
Machine embroidery is an embroidery process whereby a sewing machine or embroidery machine is used to create patterns on textiles. It is used commercially in product branding, corporate advertising, and uniform adornment. It is also used in the fashion industry to decorate garments and apparel. Machine embroidery is used by hobbyists and crafters to decorate gifts, clothing, and home decor. Examples include designs on quilts, pillows, and wall hangings. There are multiple types of machine embroidery. Free-motion sewing machine embroidery uses a basic zigzag sewing machine. Designs are done manually. Most commercial embroidery is done with link stitch embroidery. In link stitch embroidery, patterns may be manually or automatically controlled. Link stitch embroidery is also known as chenille embroidery, and was patented by Pulse Microsystems in 1994. More modern computerized machine embroidery uses an embroidery machine or sewing/embroidery machine that is controlled with a computer that embroiders stored patterns. These machines may have multiple heads and threads. History Before computers were affordable, most machine embroidery was completed by punching designs on paper tape that then ran through an embroidery machine. One error could ruin an entire design, forcing the creator to start over. Machine embroidery dates back to 1964, when Tajima started to manufacture and sell TAJIMA Multi-head Automatic Embroidery machines. In 1973 Tajima introduced the TMB Series 6-needle (6 color) full-automatic color-change embroidery machine. A few years later, in 1978, Tajima started manufacturing the TMBE Series Bridge Type Automatic Embroidery machines. These machines introduced electronic 6-needle automatic color change technology. In 1980 the first computerized embroidery machines were introduced to the home market. Wilcom introduced the first computer graphics embroidery design system to run on a minicomputer. Melco, an international distribution network formed by Randal Melton and Bill Childs, created the first embroidery sample head for use with large Schiffli looms. These looms spanned several feet across and produced lace patches and large embroidery patterns. The sample head allowed embroiderers to avoid manually sewing the design sample and saved production time. Subsequently, it became the first computerized embroidery machine marketed to home sewers. The economic policy of the Reagan presidency helped propel Melco to the top of the market. At the Show of the Americas in 1980, Melco unveiled the Digitrac, a digitizing system for embroidery machines. The digitized design was composed at six times the size of the embroidered final product. The Digitrac consisted of a small computer, mounted on an X and Y axis on a large white board. It sold for $30,000. The original single-needle sample head sold for $10,000 and included a 1" paper-tape reader and 2 fonts. The digitizer marked common points in the design to create elaborate fill and satin stitch combinations. In 1982, Tajima introduced the world's first electronic chenille embroidery machine, called the TMCE Series Multi-head Electronic Chenille Embroidery Machine. In the same year, they developed the automatic frame changer, a dedicated apparatus for rolled textile embroidery. Also in 1982, Pulse Microsystems introduced Stitchworks, the first PC-based embroidery software, and the first software based on outlines rather than stitches.
This was monumental to decorators, in that it allowed them to scale and change the properties and parts of their designs easily on the computer. Designs were output to paper tape, which was read by the embroidery machine. Stitchworks was sold worldwide by Macpherson. Melco patented the ability to sew circles with a satin stitch, as well as arched lettering generated from a keyboard. An operator digitized the design using similar techniques to punching, transferring the results to a 1" paper tape or later to a floppy disk. This design would then be run on the embroidery machine, which stitched out the pattern. Wilcom enhanced this technology in 1982 with the introduction of the first multi-user system, which allowed more than one person to work on the embroidery process, streamlining production times. In 1983, Tajima created the TMLE Series Multi-Head Electronic Lock Stitch Chenille Embroidery machine, followed by the TMEF Series 9-needle Type Electronic Embroidery Machine in 1984. In 1986, Tajima introduced the world's first sequin embroidery machine, enabling designers to combine sequin embroidery with plain embroidery. In 1987, Pulse Microsystems introduced a digital asset management application called DDS, which was a design librarian for embroidery machines. This made it more efficient for machine operators to access their designs. In 1988 Tajima designed the TMLE-D5 series embroidery machines, with a pair arrangement of lock-stitch-handle embroidery heads, which were capable of sewing multiple threads. Brother Industries entered the embroidery industry after several computerized embroidery companies contracted it to provide sewing heads. Pulse Microsystems developed software for them called PG1. PG1 had a tight integration with the embroidery machine using a high-level protocol, enabling the machine to pull designs from software, rather than having the software push designs to the machine. This approach is still used today. Melco was acquired by Saurer in 1989. The early 1990s were quiet for machine embroidery, but Tajima introduced a 12-needle machine into their series along with a noise reduction mechanism. In 1995, Tajima added a multi-color (6-color) type to chenille embroidery machines, and announced the ability to mix plain embroidery with chenille embroidery on the same machine. They also began sales of the TLFD Series Laser-cut Embroidery Machines. In 1996, Pulse Microsystems introduced the computational-geometry-based simulation of hand-created chenille using a spiral effect. Following this, in 1997, Tajima introduced 15-needle machines in response to the "multi-color age". In the late 1990s, Pulse Microsystems introduced networking to embroidery machines. It added a box which allowed the machines to network and then pull designs from a central server. It also provided machine feedback, and allowed machines to be optically isolated for protection in an industrial environment. Since then, computerized machine embroidery has grown in popularity as costs have fallen for computers, software, and embroidery machines. In the year 2000, Pulse Microsystems introduced Stitchport, which is a server-based embroidery engine for embroidery in a browser. This allowed for the factory automation of letter creation. Although the market was not yet ready for it, this transformed the apparel industry by allowing manufacturers, stores, and end users access to customized versions of the mass-produced garments and goods they had been buying throughout their lives, with no margin of error.
In 2001, Tajima created heater-wire sewing machines, which were innovative combination machines. In an environment that was finally ready for the individuality that mass-customization allowed, the principles developed for Stitchport were adapted in 2008 for the creation of PulseID. PulseID allows for the automation of personalization, even on the largest industrial scale. In 2013, Tajima released the TMAR-KC Series Multi-Head Embroidery Machine, equipped with a digitally controlled presser foot. The major embroidery machine companies and software developers are continuing to adapt their commercial systems to market them for home use, including Janome, RNK, Floriani, Tacony Corporation and many more. As costs have fallen for computers, software and home market embroidery machines, the popularity of machine embroidery as a hobby has risen, and as such, many machine manufacturers sell their own lines of embroidery patterns. In addition, many individuals and independent companies also sell embroidery designs, and there are free designs available on the internet. Types of machine embroidery Free-motion machine embroidery In free-motion machine embroidery, embroidered designs are created by using a basic zigzag sewing machine. As this type of machine is used primarily for tailoring, it lacks the automated features of a specialized machine. The first zigzag sewing machine was patented by Helen Blanchard. To create free-motion machine embroidery, the embroiderer runs the machine and skillfully moves tightly hooped fabric under the needle to create a design. The "feed dogs" or machine teeth are lowered or covered, and the embroiderer moves the fabric manually. The embroiderer develops the embroidery manually, using the machine's settings for running stitch and fancier built-in stitches. A machine's zigzag stitch can create thicker lines within a design or be used to create a border. As this is a manual process rather than a digital reproduction, any pattern created using free-motion machine embroidery is unique and cannot be exactly reproduced, unlike with computerized embroidery. Cornely hand-guided embroidery This embroidery inherited the name of the Cornely machine. Created in the 19th century to imitate the Beauvais stitch (chain stitch), it is still used today, especially in the fashion industry. Cornely embroidery is a so-called hand-guided embroidery. The operator directs the machine according to the pattern. The fabric is moved by a crank located under the machine. The Cornely also has a universal drive system controlled by a handle. Some models can embroider sequins, cords, braids, etc. There are also Cornely machines performing a classic straight stitch. Computerized machine embroidery Most modern embroidery machines are computer controlled and specifically engineered for embroidery. Industrial and commercial embroidery machines and combination sewing-embroidery machines have a hooping or framing system that holds the framed area of fabric taut under the sewing needle and moves it automatically to create a design from a pre-programmed digital embroidery pattern. Depending on its capabilities, the machine will require varying degrees of user input to read and sew embroidery designs. Sewing-embroidery machines generally have only one needle and require the user to change thread colors during the embroidery process. Multi-needle industrial machines are generally threaded prior to running the design and do not require re-threading.
These machines require the user to input the correct color change sequence before beginning to embroider. Some can trim and change colors automatically. The computerized machine embroidery process Machine embroidery is a multi-step process with many variables that impact the quality of the final product, including the type of fabric to be embellished, design size, stabilizer choice and type of thread utilized. The basic steps for creating embroidery with a computerized embroidery machine are as follows: Create an embroidery design file or purchase a stitchable machine embroidery file. Creation may take hours depending on the complexity of the design, and the software can be costly. Edit the design and/or combine it with other designs. Export the design to a (usually machine-proprietary) embroidery file that mostly contains stitch commands for the embroidery machine. A purchased design file may need to be converted to the machine's format. Load the embroidery file into the embroidery machine, making sure it is the correct format for the machine and that the stitched design will fit in the appropriate hoop. Determine and mark the location of embroidery placement on the fabric to be embellished. Secure the fabric in a hoop with the appropriate stabilizer, and place it on the machine. Center the needle over the start point of the design. Start and monitor the embroidery machine, watching for errors and issues. Troubleshoot any problems as they arise. The operator should keep plenty of needles, bobbins, a can of compressed air (or a small air compressor), a small brush, and scissors on hand. Remove the completed design from the machine. Separate the fabric from the hoop and trim the stabilizer, loose threads, etc. List of machine embroidery design file extensions References Embroidery
1459479
https://en.wikipedia.org/wiki/Individual%20Computers
Individual Computers
Individual Computers is a German computer hardware company specializing in retrocomputing accessories for the Commodore 64, Amiga, and PC platforms. Individual Computers produced the C-One reconfigurable computer in 2003. The company is owned and run by Jens Schönfeld. Products Catweasel – Universal format floppy disk drive controller card Retro Replay – Improved version of the C64 Action Replay cartridge Clone-A – Amiga implemented in FPGA (see the PDF extract of Total Amiga Magazine issue 25) MMC64 – MMC and SD Card reader cartridge MMC Replay – MMC64 and Retro Replay combined in one cartridge, with some improvements Micromys – An adapter that allows connecting PS/2-compatible mice (including wheel support) to C64 and Amiga joystick ports (and to all other computers that share the same pin configuration). Amiga clock-port-compatible add-ons for MMC64, Retro Replay and MMC Replay: RR-Net: A C64-compatible network interface. It comes in two shapes: the old, long RR-Net fits the Retro Replay and MMC64 (though it partly blocks the latter's pass-through expansion port), while the new L-shaped RR-Net2 fits the MMC64 and MMC Replay and was built with the MMC Replay in mind. Silver Surfer: High-speed RS-232 interface for the Amiga 1200 and 600 (with adapter). It fits onto the Retro Replay; MMC64 and MMC Replay compatibility is unknown. mp3@c64: Hardware MP3 decoding from SD card. Made for the MMC64; Retro Replay and MMC Replay compatibility is unknown. Keyrah – An interface that allows the connection of Commodore keyboards to USB-capable computers C-One – reconfigurable computer X-Surf – network card C64 Reloaded – A 1:1 rebuild of the C64 motherboard with lower power consumption Indivision – Flicker fixer for Amiga computers ACA boards – CPU expansion cards for Amiga computers References External links Individual Computers official website Individual Computers Product Information Wiki Home computer hardware companies Electronics companies of Germany Commodore 64 Amiga companies
9705828
https://en.wikipedia.org/wiki/Technical%20features%20new%20to%20Windows%20Vista
Technical features new to Windows Vista
Windows Vista (formerly codenamed Windows "Longhorn") has many significant new features compared with previous Microsoft Windows versions, covering most aspects of the operating system. In addition to the new user interface, security capabilities, and developer technologies, several major components of the core operating system were redesigned, most notably the audio, print, display, and networking subsystems; while the results of this work will be visible to software developers, end-users will only see what appear to be evolutionary changes in the user interface. As part of the redesign of the networking architecture, IPv6 has been incorporated into the operating system, and a number of performance improvements have been introduced, such as TCP window scaling. Prior versions of Windows typically needed third-party wireless networking software to work properly; this is no longer the case with Windows Vista, as it includes comprehensive wireless networking support. For graphics, Windows Vista introduces a new display driver model as well as major revisions to Direct3D. The new display driver model facilitates the new Desktop Window Manager, which provides the tearing-free desktop and special effects that are the cornerstones of the Windows Aero graphical user interface. The new display driver model is also able to offload rudimentary tasks to the GPU, allow users to install drivers without requiring a system reboot, and seamlessly recover from rare driver errors due to illegal application behavior. At the core of the operating system, many improvements have been made to the memory manager, process scheduler, heap manager, and I/O scheduler. A Kernel Transaction Manager has been implemented that can be used by data persistence services to enable atomic transactions. The service is being used to give applications the ability to work with the file system and registry using atomic transaction operations. Audio Windows Vista features a completely re-written audio stack designed to provide low-latency 32-bit floating point audio, higher-quality digital signal processing, bit-for-bit sample level accuracy, up to 144 dB of dynamic range and new audio APIs created by a team including Steve Ball and Larry Osterman. The new audio stack runs at user level, thus increasing stability. Also, the new Universal Audio Architecture (UAA) model has been introduced, replacing WDM audio, which allows compliant audio hardware to automatically work under Windows without needing device drivers from the audio hardware vendor. There are three major APIs in the Windows Vista audio architecture: Windows Audio Session API – Very low-level API for rendering audio, rendering/capturing audio streams, adjusting volume, etc. This API also provides low latency for audio professionals through the WaveRT (wave real-time) port driver. Multimedia Device API – For enumerating and managing audio endpoints. Device Topology API – For discovering the internals of an audio card's topology. Audio stack architecture Applications communicate with the audio driver through Sessions, and these Sessions are programmed through the Windows Audio Session API (WASAPI). In general, WASAPI operates in two modes. In exclusive mode (also called DMA mode), unmixed audio streams are rendered directly to the audio adapter; no other application's audio will play, and signal processing has no effect. 
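The session-based model described here is exposed to native applications through WASAPI. The following is a minimal, illustrative sketch (not production code) of opening the default render endpoint in shared mode using the documented COM interfaces from mmdeviceapi.h and audioclient.h; error handling and interface cleanup are omitted for brevity, and an exclusive-mode stream would instead pass AUDCLNT_SHAREMODE_EXCLUSIVE together with a device-supported format rather than the engine's mix format.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main() {
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    // Enumerate audio endpoints and pick the default render device.
    IMMDeviceEnumerator* enumerator = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
    IMMDevice* device = NULL;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    // Activate an audio client on that endpoint and open a shared-mode
    // stream using the engine's mix format; exclusive mode bypasses the mixer.
    IAudioClient* client = NULL;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&client);
    WAVEFORMATEX* mixFormat = NULL;
    client->GetMixFormat(&mixFormat);
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000 /* 1 s buffer, in 100-ns units */, 0,
                       mixFormat, NULL);

    // Obtain the render client, submit one (silent) buffer and start playback.
    IAudioRenderClient* render = NULL;
    client->GetService(__uuidof(IAudioRenderClient), (void**)&render);
    UINT32 frames = 0;
    client->GetBufferSize(&frames);
    BYTE* data = NULL;
    render->GetBuffer(frames, &data);
    render->ReleaseBuffer(frames, AUDCLNT_BUFFERFLAGS_SILENT);
    client->Start();

    Sleep(1000);           // let the buffer play out
    client->Stop();
    CoTaskMemFree(mixFormat);
    CoUninitialize();
    return 0;
}
```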
Exclusive mode is useful for applications that demand the least amount of intermediate processing of the audio data or those that want to output compressed audio data such as Dolby Digital, DTS or WMA Pro over S/PDIF. WASAPI exclusive mode is similar to kernel streaming in function, but no kernel mode programming is required. In shared mode, audio streams are rendered by the application, and per-stream audio effects known as Local Effects (LFX), such as per-session volume control, may optionally be applied. Then the streams are mixed by the global audio engine, where a set of global audio effects (GFX) may be applied. Finally, they are rendered on the audio device. After passing through WASAPI, all host-based audio processing, including custom audio processing, can take place. Host-based processing modules are referred to as Audio Processing Objects, or APOs. All these components operate in user mode; only the audio driver runs in kernel mode. The Windows Kernel Mixer (KMixer) is completely gone. DirectSound and MME are emulated as Session instances rather than being directly connected to the audio driver. This does have the effect of preventing DirectSound from being hardware-accelerated, and completely removes support for DirectSound3D and EAX extensions; however, APIs such as ASIO and OpenAL are not affected. Audio performance Windows Vista also includes a new Multimedia Class Scheduler Service (MMCSS) that allows multimedia applications to register their time-critical processing to run at an elevated thread priority, thus ensuring prioritized access to CPU resources for time-sensitive DSP processing and mixing tasks. For audio professionals, a new WaveRT port driver has been introduced that strives to achieve real-time performance by using the multimedia class scheduler and supports audio applications that reduce the latency of audio streams. All the existing audio APIs have been re-plumbed and emulated to use these APIs internally; all audio goes through these three APIs, so most applications "just work". Issues A fault in the MME WaveIn/WaveOut emulation was introduced in Windows Vista: if sample rate conversion is needed, audible noise is sometimes introduced, such as when playing audio in a web browser that uses these APIs. This is because the internal resampler, which is no longer configurable, defaults to linear interpolation, which was the lowest-quality conversion mode that could be set in previous versions of Windows. The resampler can be set to a high-quality mode via a hotfix for Windows 7 and Windows Server 2008 R2 only. Audio signal processing New digital signal processing functionalities such as Room Correction, Bass Management, Loudness Equalization and Speaker Fill have been introduced. These adapt and modify an audio signal to take best advantage of the speaker configuration a given system has. Windows Vista also includes the ability to calibrate speakers to a given room's acoustics automatically using a software wizard. Windows Vista also allows audio drivers to include custom DSP effects, which are presented to the user through user-mode System Effect Audio Processing Objects (sAPOs). These sAPOs are also reusable by third-party software. Audio devices support Windows Vista builds on the Universal Audio Architecture, a new class driver definition that aims to reduce the need for third-party drivers, and to increase the overall stability and reliability of audio in Windows. 
Support for Intel High Definition Audio devices (which replaces Intel's previous AC'97 audio hardware standard) Extended support for USB audio devices: Built-in decoding of padded AC-3 (Dolby Digital), MP3, WMA and WMA Pro streams and outputting as S/PDIF. Support for MIDI "Elements". New support for asynchronous endpoints. IEEE 1394 (aka FireWire) audio support was slated for a future release of Windows Vista, to be implemented as a full class driver, automatically supporting IEEE 1394 AV/C audio devices. Support for audio jack sensing which can detect the audio devices that are plugged into the various audio jacks on a device and inform the user about their configuration. Endpoint Discovery and Abstraction: Audio devices are expressed in terms of audio endpoints such as microphones, speakers, headphones. For example, each recording input (Microphone, Line in etc.) is treated as a separate device, which allows recording from both at the same time. Other audio enhancements A new set of user interface sounds have been introduced, including a new startup sound created with the help of King Crimson's Robert Fripp. The new sounds are intended to complement the Windows Aero graphical user interface, with the new startup sound consisting of two parallel melodies that are played in an intentional "Win-dows Vis-ta" rhythm. According to Jim Allchin, the new sounds are intended to be gentler and softer than the sounds used in previous versions of Windows. Windows Vista also allows controlling system-wide volume or volume of individual audio devices and individual applications separately. This feature can be used from the new Volume Control windows or programmatically using the overhauled audio API. Different sounds can be redirected to different audio devices as well. Windows Vista includes integrated microphone array support which is intended to increase the accuracy of the speech recognition feature and allow a user to connect multiple microphones to a system so that the inputs can be combined into a single, higher-quality source. Microsoft has also included a new high quality voice capture DirectX Media Object (DMO) as part of DirectShow that allows voice capture applications such as instant messengers and speech recognition applications to apply Acoustic Echo Cancellation and microphone array processing to speech signals. Speech recognition Windows Vista is the first Windows operating system to include fully integrated support for speech recognition. Under Windows 2000 and XP, Speech Recognition was installed with Office 2003, or was included in Windows XP Tablet PC Edition. Windows Speech Recognition allows users to control their machine through voice commands, and enables dictation into many applications. The application has a fairly high recognition accuracy and provides a set of commands that assists in dictation. A brief speech-driven tutorial is included to help familiarize a user with speech recognition commands. Training could also be completed to improve the accuracy of speech recognition. Windows Vista includes speech recognition for 8 languages at release time: English (U.S. and British), Spanish, German, French, Japanese and Chinese (traditional and simplified). Support for additional languages is planned for post-release. Speech recognition in Vista utilizes version 5.3 of the Microsoft Speech API (SAPI) and version 8 of the Speech Recognizer. 
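As an illustration of the SAPI interfaces mentioned above, the sketch below uses the shared recognizer for simple dictation. It is a minimal example based on the documented COM API in sapi.h; error handling is largely omitted, the event-handling loop is simplified to a single recognition, and a real application would normally run a message or notification loop instead.

```cpp
#include <windows.h>
#include <sapi.h>
#include <stdio.h>

int main() {
    CoInitialize(NULL);

    // Create the shared recognizer (uses the engine configured in Control Panel).
    ISpRecognizer* recognizer = NULL;
    CoCreateInstance(__uuidof(SpSharedRecognizer), NULL, CLSCTX_ALL,
                     __uuidof(ISpRecognizer), (void**)&recognizer);

    // A recognition context receives events; request completed recognitions only.
    ISpRecoContext* context = NULL;
    recognizer->CreateRecoContext(&context);
    context->SetNotifyWin32Event();
    context->SetInterest(SPFEI(SPEI_RECOGNITION), SPFEI(SPEI_RECOGNITION));

    // Load a dictation grammar and activate it.
    ISpRecoGrammar* grammar = NULL;
    context->CreateGrammar(0, &grammar);
    grammar->LoadDictation(NULL, SPLO_STATIC);
    grammar->SetDictationState(SPRS_ACTIVE);

    // Wait for one recognition event and print the recognized text.
    if (context->WaitForNotifyEvent(INFINITE) == S_OK) {
        SPEVENT evt = {};
        ULONG fetched = 0;
        if (SUCCEEDED(context->GetEvents(1, &evt, &fetched)) && fetched == 1 &&
            evt.eEventId == SPEI_RECOGNITION) {
            ISpRecoResult* result = (ISpRecoResult*)evt.lParam;
            LPWSTR text = NULL;
            if (SUCCEEDED(result->GetText(SP_GETWHOLEPHRASE, SP_GETWHOLEPHRASE,
                                          TRUE, &text, NULL))) {
                wprintf(L"Recognized: %s\n", text);
                CoTaskMemFree(text);
            }
            result->Release();
        }
    }

    grammar->Release();
    context->Release();
    recognizer->Release();
    CoUninitialize();
    return 0;
}
```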
Speech synthesis Speech synthesis was first introduced in Windows with Windows 2000, but it has been significantly enhanced for Windows Vista (code name Mulan). The old voice, Microsoft Sam, has been replaced with two new, more natural sounding voices of generally greater intelligibility: Anna and Lili, the latter of which is capable of speaking Chinese. The screen-reader Narrator which uses these voices has also been updated. Microsoft Agent and other text to speech applications now use the newer SAPI 5 voices. Print Windows Vista includes a redesigned print architecture, built around Windows Presentation Foundation. It provides high-fidelity color printing through improved use of color management, removes limitations of the current GDI-based print subsystem, enhances support for printing advanced effects such as gradients, transparencies, etc., and for color laser printers through the use of XML Paper Specification (XPS). The print subsystem in Windows Vista implements the new XPS print path as well as the legacy GDI print path for legacy support. Windows Vista transparently makes use of the XPS print path for those printers that support it, otherwise using the GDI print path. On documents with intensive graphics, XPS printers are expected to produce much greater quality prints than GDI printers. In a networked environment with a print server running Windows Vista, documents will be rendered on the client machine, rather than on the server, using a feature known as Client Side Rendering. The rendered intermediate form will just be transferred to the server to be printed without additional processing, making print servers more scalable by offloading rendering computation to clients. XPS print path The XPS Print Path introduced in Windows Vista supports high quality 16-bit color printing. The XPS print path uses XML Paper Specification (XPS) as the print spooler file format, that serves as the page description language (PDL) for printers. The XPS spooler format is the intended replacement for the Enhanced Metafile (EMF) format which is the print spooler format in the Graphics Device Interface (GDI) print path. XPS is an XML-based (more specifically XAML-based) color-managed device and resolution independent vector-based paged document format which encapsulates an exact representation of the actual printed output. XPS documents are packed in a ZIP container along with text, fonts, raster images, 2D vector graphics and DRM information. For printers supporting XPS, this eliminates an intermediate conversion to a printer-specific language, increasing the reliability and fidelity of the printed output. Microsoft claims that major printer vendors are planning to release printers with built-in XPS support and that this will provide better fidelity to the original document. At the core of the XPS print path is XPSDrv, the XPS-based printer driver which includes the filter pipeline. It contains a set of filters which are print processing modules and an XML-based configuration file to describe how the filters are loaded. Filters receive the spool file data as input, perform document processing, rendering and PDL post-processing, and then output PDL data for the printer to consume. Filters can perform a single function such as watermarking a page or doing color transformations or they can perform several print processing functions on specific document parts individually or collectively and then convert the spool file to the page description language supported by the printer. 
Windows Vista also provides improved color support through the Windows Color System for higher color precision and dynamic range. It also supports CMYK colorspace and multiple ink systems for higher print fidelity. The print subsystem also has support for named colors simplifying color definition for images transmitted to printer supporting those colors. The XPS print path can automatically calibrate color profile settings with those being used by the display subsystem. Conversely, XPS print drivers can express the configurable capabilities of the printer, by virtue of the XPS PrintCapabilities class, to enable more fine-grained control of print settings, tuned to the individual printing device. Applications which use the Windows Presentation Foundation for the display elements can directly print to the XPS print path without the need for image or colorspace conversion. The XPS format used in the spool file, represents advanced graphics effects such as 3D images, glow effects, and gradients as Windows Presentation Foundation primitives, which are processed by the printer drivers without rasterization, preventing rendering artifacts and reducing computational load. When the legacy GDI Print Path is used, the XPS spool file is used for processing before it is converted to a GDI image to minimize the processing done at raster level. Print schemas Print schemas provide an XML-based format for expressing and organizing a large set of properties that describe either a job format or print capabilities in a hierarchically structured manner. Print schemas are intended to address the problems associated with internal communication between the components of the print subsystem, and external communication between the print subsystem and applications. Networking Windows Vista contains a new networking stack, which brings large improvements in all areas of network-related functionality. It includes a native implementation of IPv6, as well as complete overhaul of IPv4. IPv6 is now supported by all networking components, services, and the user interface. In IPv6 mode, Windows Vista can use the Link Local Multicast Name Resolution (LLMNR) protocol to resolve names of local hosts on a network which does not have a DNS server running. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after settings are changed. The new stack is also based on a strong host model and features an infrastructure to enable more modular components that can be dynamically inserted and removed. The user interface for configuring, troubleshooting and working with network connections has changed significantly from prior versions of Windows as well. Users can make use of the new "Network Center" to see the status of their network connections, and to access every aspect of configuration. The network can be browsed using Network Explorer, which replaces Windows XP's "My Network Places". Network Explorer items can be a shared device such as a scanner, or a file share. Network Location Awareness uniquely identifies each network and exposes the network's attributes and connectivity type. Windows Vista graphically presents how different devices are connected over a network in the Network Map view, using the LLTD protocol. In addition, the Network Map uses LLTD to determine connectivity information and media type (wired or wireless). 
Any device can implement LLTD to appear on the Network Map with an icon representing the device, allowing users one-click access to the device's user interface. When LLTD is invoked, it provides metadata about the device that contains static or state information, such as the MAC address, IPv4/IPv6 address, signal strength, etc. Support for wireless networks is built into the network stack itself, and does not emulate wired connections, as was the case with previous versions of Windows. This allows implementation of wireless-specific features such as larger frame sizes and optimized error recovery procedures. Windows Vista uses various techniques like Receive Window Auto-scaling, Explicit Congestion Notification, TCP Chimney offload and Compound TCP to improve networking performance. Quality of service (QoS) policies can be used to prioritize network traffic, with traffic shaping available to all applications, even those that do not explicitly use QoS APIs. Windows Vista includes built-in support for peer-to-peer networks and SMB 2.0. For improved network security, support for 256-bit and 384-bit Diffie-Hellman (DH) algorithms, as well as for 128-bit, 192-bit and 256-bit Advanced Encryption Standard (AES), is included in the network stack itself, and IPsec is integrated with Windows Firewall. Kernel and core OS changes The new Kernel Transaction Manager enables atomic transaction operations across different types of objects, most significantly file system and registry operations. The memory manager and process scheduler have been improved. The scheduler was modified to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine, resulting in more deterministic application behaviour. Many kernel data structures and algorithms have been rewritten. Lookup algorithms now run in constant time, instead of linear time as with previous versions. Windows Vista includes support for condition variables and reader-writer locks. Process creation overhead is reduced by significant improvements to DLL address-resolving schemes. Windows Vista introduces a Protected Process, which differs from usual processes in the sense that other processes cannot manipulate the state of such a process, nor can threads from other processes be introduced into it. A Protected Process has enhanced access to DRM-functions of Windows Vista. However, currently only applications using the Protected Video Path can create Protected Processes. Thread Pools have been upgraded to support multiple pools per process, as well as to reduce performance overhead using thread recycling. It also includes Cleanup Groups that allow cleanup of pending thread-pool requests on process shutdown. A threaded DPC, unlike an ordinary DPC (Deferred Procedure Call), decreases system latency, improving the performance of time-sensitive applications such as audio or video playback. Data Redirection: Also known as data virtualization, this virtualizes the registry and certain parts of the file system for applications running in the protected user context if User Account Control is turned on, enabling legacy applications to run in non-administrator accounts. It automatically creates private copies of files that an application can use when it does not have permission to access the original files. 
This facilitates stronger file security and helps applications not written with the least user access principle in mind to run under stronger restrictions. Registry virtualization isolates write operations that have a global impact to a per-user location. Reads and writes in the section of the Registry by user-mode applications while running as a standard user, as well as to folders such as "Program Files", are "redirected" to the user's profile. The process of reading and writing on the profile data and not on the application-intended location is completely transparent to the application. Windows Vista supports the PCI Express 1.1 specification, including PCI Express Native Control and ASPM. PCI Express registers, including capability registers, are supported, along with save and restore of configuration data. Native support and generic driver for Advanced Host Controller Interface (AHCI) specification for Serial ATA drives, SATA Native Command Queuing, Hot plugging, and AHCI Link Power Management. Full support for the ACPI 2.0 specification, and parts of ACPI 3.0. Support for throttling power usage of individual devices has been improved. Windows Vista SP1 supports Windows Hardware Error Architecture (WHEA). Kernel-mode Plug-And-Play enhancements include support for PCI multilevel rebalance, partial arbitration of resources to support PCI subtractive bridges, asynchronous device start and enumeration operations to speed system startup, support for setting and retrieving custom properties on a device, an enhanced ejection API to allow the caller to determine if and when a device has been successfully ejected, and diagnostic tracing to facilitate improved reliability. The startup process for Windows Vista has changed completely in comparison to earlier versions of Windows. The NTLDR boot loader has been replaced by a more flexible system, with NTLDR's functionality split between two new components: winload.exe and Windows Boot Manager. A notable change is that the Windows Boot Manager is invoked by pressing the space bar instead of the F8 function key. The F8 key still remains assigned for advanced boot options once the Windows Boot Manager menu appears. On UEFI systems, beginning with Windows Vista Service Pack 1, the x64 version of Windows Vista has the ability to boot from a disk with a GUID Partition Table. Windows Vista includes a completely overhauled and rewritten Event logging subsystem, known as Windows Event Log which is XML-based and allows applications to more precisely log events, offers better views, filtering and categorization by criteria, automatic log forwarding, centrally logging and managing events from a single computer and remote access. Windows Vista includes an overhauled Task Scheduler that uses hierarchical folders of tasks. The Task Scheduler can run programs, send email, or display a message. The Task Scheduler can also now be triggered by an XPath expression for filtering events from the Windows Event Log, and can respond to a workstation's lock or unlock, and as well as the connection or disconnection to the machine from a Remote Desktop. The Task Scheduler tasks can be scripted in VBScript, JScript, or PowerShell. Restart Manager: The Restart Manager works with Microsoft's update tools and websites to detect processes that have files in use and to gracefully stop and restart services to reduce the number of reboots required after applying updates as far as possible for higher levels of the software stack. 
Kernel updates, logically, still require the system to be restarted. In addition, the Restart Manager provides a mechanism for applications to stop and then restart programs. Applications that are written specifically to take advantage of the new Restart Manager features using the API can be restarted and restored to the same state and with the same data as before the restart. Using the Application Recovery and Restart APIs in conjunction with the Restart Manager enables applications to control what actions are taken on their behalf by the system when they fail or crash such as recovering unsaved data or documents, restarting the application, and diagnosing and reporting the problem using Windows Error Reporting. When shutting down or restarting Windows, previous Windows versions either forcibly terminated applications after waiting for few seconds, or allowed applications to entirely cancel shutdown without informing the user. Windows Vista now informs the user in a full-screen interface if there are running applications when exiting Windows or allows continuing with or cancelling the initiated shutdown. The reason registered, if any, for cancelling a shutdown by an application using the new ShutdownBlockReasonCreate API is also displayed. Clean service shutdown: Services in Windows Vista have the capability of delaying the system shutdown in order to properly flush data and finish current operations. If the service stops responding, the system terminates it after 3 minutes. Crashes and restart problems are drastically reduced since the Service Control Manager is not terminated by a forced shutdown anymore. Boot process Windows Vista introduces an overhaul of the previous Windows NT operating system loader architecture NTLDR. Used by versions of Windows NT since its inception with Windows NT 3.1, NTLDR has been completely replaced with a new architecture designed to address modern firmware technologies such as the Unified Extensible Firmware Interface. The new architecture introduces a firmware-independent data store and is backward compatible with previous versions of the Windows operating system. Memory management Windows Vista features a Dynamic System Address Space that allocates virtual memory and kernel page tables on-demand. It also supports very large registry sizes. Includes enhanced support for Non-Uniform Memory Access (NUMA) and systems with large memory pages. Windows Vista also exposes APIs for accessing the NUMA features. Memory pages can be marked as read-only, to prevent data corruption. New address mapping scheme called Rotate Virtual Address Descriptors (VAD). It is used for the advanced Video subsystem. Swapping in of memory pages and system cache include prefetching and clustering, to improve performance. Performance of Address Translation Buffers has been enhanced. Heap layout has been modified to provide higher performance on 64-bit and Symmetric multiprocessing (SMP) systems. The new heap structure is also more scalable and has low management overhead, especially for large heaps. Windows Vista automatically tunes up the heap layout for improved fragmentation management. The Low Fragmentation Heap (LFH) is enabled by default. Lazy initialization of heap initializes only when required, to improve performance. The Windows Vista memory manager does not have a 64 kb read-ahead cache limitation unlike previous versions of Windows and can thus improve file system performance dramatically. 
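To illustrate the NUMA-related APIs mentioned above, the following is a minimal sketch using two documented Win32 calls, GetNumaHighestNodeNumber and the Vista-era VirtualAllocExNuma; the 16 MB size and the preferred node 0 are arbitrary example values, and error handling is kept to a minimum.

```cpp
#include <windows.h>
#include <stdio.h>

int main() {
    // Discover how many NUMA nodes the system reports (0 means a single node).
    ULONG highestNode = 0;
    if (!GetNumaHighestNodeNumber(&highestNode)) {
        printf("GetNumaHighestNodeNumber failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Highest NUMA node number: %lu\n", highestNode);

    // Reserve and commit 16 MB, asking the memory manager to prefer
    // physical pages from node 0 (VirtualAllocExNuma is new in Windows Vista).
    SIZE_T size = 16 * 1024 * 1024;
    void* buffer = VirtualAllocExNuma(GetCurrentProcess(), NULL, size,
                                      MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                                      0 /* preferred node */);
    if (buffer == NULL) {
        printf("VirtualAllocExNuma failed: %lu\n", GetLastError());
        return 1;
    }

    // ... use the buffer, ideally from threads affinitized to node 0 ...

    VirtualFree(buffer, 0, MEM_RELEASE);
    return 0;
}
```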
File systems Transactional NTFS allows multiple file/folder operations to be treated as a single operation, so that a crash or power failure won't result in half-completed file writes. Transactions can also be extended to multiple machines. Image Mastering API (IMAPI v2) enables DVD burning support for applications, in addition to CD burning. IMAPI v2 supports multiple optical drives, even recording to multiple drives simultaneously, unlike IMAPI in Windows XP, which only supported recording with one optical drive at a time. In addition, multiple filesystems are supported. Applications using IMAPI v2 can create and burn disc images—it is extensible in the sense that developers can write their own specific media formats and create their own file systems for its programming interfaces. IMAPI v2 is implemented as a DLL rather than as a service as was the case in Windows XP, and is also scriptable using VBScript. IMAPI v2 is also available for Windows XP. With the Windows Feature Pack for Storage installed, IMAPI 2.0 supports Recordable Blu-ray Disc (BD-R) and Rewritable Blu-ray Disc (BD-RE) media as well. Windows DVD Maker can burn DVD-Video discs, while Windows Explorer can burn data to DVDs (DVD±R, DVD±R DL, DVD±RW) in addition to DVD-RAM and CDs. Live File System: A writable UDF file system. The Windows UDF file system (UDFS) implementation was read-only in OS releases prior to Windows Vista. In Windows Vista, packet writing (incremental writing) is supported by UDFS, which can now format and write to all mainstream optical media formats (MO, CD-R/RW, DVD+R/RW, DVD-R/RW/RAM). Write support is included for UDF format versions up to and including 2.50, with read support up to 2.60. UDF symbolic links, however, are not supported. The Common Log File System (CLFS) API provides a high-performance, general-purpose log-file subsystem that dedicated user-mode and kernel-mode client applications can use and multiple clients can share to optimize log access and for data and event management. File encryption support superior to that available in Encrypting File System in Windows XP, which will make it easier and more automatic to prevent unauthorized viewing of files on stolen laptops or hard drives. The file system minifilter model, consisting of kernel-mode non-device drivers that monitor file system activity, has been upgraded in Windows Vista. The Registry filtering model adds support for redirecting calls and modifying parameters and introduces the concept of altitudes for filter registrations. Registry notification hooks, introduced in Windows XP and enhanced in Windows Vista, allow software to participate in registry-related activities in the system. Support for UNIX-style symbolic links. Previous Windows versions had support for junction points, a type of reparse point that can span volumes but applies only to directories and stores absolute paths, and for hard links, which could be created for files but could not span volumes. NTFS symbolic links can be created for any object, work across volumes and hosts (over UNC paths), and can store relative paths. However, the cross-host functionality of symbolic links does not work over the network with previous versions of Windows or other operating systems, only with computers running Windows Vista or a later Windows operating system. Symbolic links can be created, modified and deleted using the Mklink utility which is included with Windows Vista. 
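Besides the Mklink utility, symbolic links can also be created programmatically through the CreateSymbolicLink API introduced with Windows Vista. The sketch below is purely illustrative (the paths are hypothetical examples), and by default the calling process needs the SeCreateSymbolicLinkPrivilege, which standard users do not hold.

```cpp
#include <windows.h>
#include <stdio.h>

int wmain() {
    // Create a file symbolic link; for a directory link, pass
    // SYMBOLIC_LINK_FLAG_DIRECTORY as the third argument.
    // Roughly equivalent command line:
    //   mklink C:\Temp\latest.txt C:\Temp\report-2007.txt
    if (!CreateSymbolicLinkW(L"C:\\Temp\\latest.txt",        // link to create (hypothetical)
                             L"C:\\Temp\\report-2007.txt",   // existing target (hypothetical)
                             0)) {
        wprintf(L"CreateSymbolicLink failed: %lu\n", GetLastError());
        return 1;
    }
    wprintf(L"Symbolic link created.\n");
    return 0;
}
```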
Microsoft has published some developer documentation on symbolic links in the MSDN documentation. In addition, Windows Explorer is now symbolic link-aware and deleting a symbolic link from Explorer just deletes the link itself and not the target object. Explorer also shows the symbolic link target in the object's properties and shows a shortcut icon overlay on a junction point. A new tab, "Previous Versions", in the Properties dialog for any file or folder, provides read-only snapshots of files on local or network volumes from an earlier point in time. This feature is based on the Volume Shadow Copy technology. A new file-based disk image format called Windows Imaging Format (WIM), which can be mounted as a partition, or booted from. An associated tool called ImageX provides facilities to create and maintain these image files. Self-healing NTFS: In previous Windows versions, NTFS marked the volume "dirty" upon detecting file-system corruption and CHKDSK was required to be run by taking the volume "offline". With self-healing NTFS, an NTFS worker thread is spawned in the background which performs a localized fix-up of damaged data structures, with only the corrupted files/folders remaining unavailable without locking out the entire volume. The self-healing behavior can be turned on for a volume with the fsutil repair set C: 1 command where C presents the volume letter. New /B switch in CHKDSK for NTFS volumes which clears marked bad sectors on a volume and reevaluates them. Windows Vista has support for hard disk drives with large physical sector sizes (> 512 bytes per sector drives) if the drive supports 512-bytes logical sectors / emulation (called Advanced Format/512E). Drives with both 4k logical and 4k physical sectors are not supported. The NLS casing table in NTFS has been updated so that partitions formatted with Windows Vista will be able to see the proper behavior for the 100+ mappings that have been added to Unicode but were not added to Windows. Windows Vista Service Pack 1 and later have built-in support for exFAT. Drivers Windows Vista introduces an improved driver model, Windows Driver Foundation which is an opt-in framework to replace the older Windows Driver Model. It includes: Windows Display Driver Model (WDDM), previously referred to as Longhorn Display Driver Model (LDDM), designed for graphics performance and stability. A new Kernel-Mode Driver Framework, which will also be available for Windows XP and Windows 2000. A new user-mode driver model called the User-Mode Driver Framework. In Windows Vista, WDDM display drivers have two components, a kernel mode driver (KMD) that is very streamlined, and a user-mode driver that does most of the intense computations. With this model, most of the code is moved out of kernel mode. The audio subsystem also runs largely in user-mode to prevent impacting negatively on kernel performance and stability. Also, printer drivers in kernel mode are not supported. User-mode drivers are not able to directly access the kernel but use it through a dedicated API. User-mode drivers are supported for devices which plug into a USB or FireWire bus, such as digital cameras, portable media players, PDAs, mobile phones and mass storage devices, as well as "non-hardware" drivers, such as filter drivers and other software-only drivers. This also allows for drivers which would typically require a system reboot (video card drivers, for example) to install or update without needing a reboot of the machine. 
If the driver requires access to kernel-mode resources, developers can split the driver so that part of it runs in kernel mode and part of it runs in user mode. These features are significant because a majority of system crashes can be traced to improperly installed or unstable third-party device drivers. If an error occurs, the new framework allows for an immediate restart of the driver and does not impact the system. User-Mode Driver Framework is available for Windows XP and is included in Windows Media Player 11. Kernel-mode drivers on 64-bit versions of Windows Vista must be digitally signed; even administrators will not be able to install unsigned kernel-mode drivers. A boot-time option is available to disable this check for a single session of Windows. Installing user-mode drivers will still work without a digital signature. Signed drivers are required for usage of PUMA, PAP (Protected Audio Path), and PVP-OPM subsystems. Driver packages that are used to install driver software are copied in their entirety into a "Driver Store", which is a repository of driver packages. This ensures that drivers that need to be repaired or reinstalled won't need to ask for source media to get "fresh" files. The Driver Store can also be preloaded with drivers by an OEM or IT administrator to ensure that commonly used devices (e.g. external peripherals shipped with a computer system, corporate printers) can be installed immediately. Adding, removing and viewing drivers in the "Driver Store" is done using the PnPUtil (pnputil.exe) command-line tool. A new setting in Device Manager allows deleting the drivers from the Driver Store when uninstalling the hardware. Support for Windows Error Reporting; information on an "unknown device" is reported to Microsoft when a driver cannot be found on the system, via Windows Update, or supplied by the user. OEMs can hook into this system to provide information that can be returned to the user, such as a formal statement of non-support of a device for Windows Vista, or a link to a web site with support information, drivers, etc. Processor Power Management Windows Vista includes the following changes and enhancements in processor power management: Native operating system support for PPM on multiprocessor systems, including systems using processors with multiple logical threads, multiple cores, or multiple physical sockets. Support for all ACPI 2.0 and 3.0 processor objects. User-configurable system cooling policy, minimum and maximum processor states. Operating system coordination of performance state transitions between dependent processors. Elimination of the processor dynamic throttling policies used in Windows XP and Windows Server 2003. More flexible use of the available range of processor performance states through system power policy. The static use of any linear throttle state on systems that are not capable of processor performance states. Exposure of multiple power policy parameters that original equipment manufacturers (OEMs) may tune to optimize Windows Vista use of PPM features. In-box drivers for processors from all leading processor manufacturers. A generic processor driver that allows the use of processor-specific controls for performance state transitions. An improved C3 entry algorithm, where a failed C3 entry does not cause demotion to C2. Removal of support for legacy processor performance state interfaces. Removal of support for legacy mobile processor drivers. 
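As a small illustration of reading processor performance-state information, the sketch below calls the documented CallNtPowerInformation function with the ProcessorInformation level (declared in powrprof.h, linked against PowrProf.lib). The PROCESSOR_POWER_INFORMATION structure is normally declared in the driver kit header ntpoapi.h, so it is defined locally here as documented; if the SDK in use already provides it, the local definition can be dropped.

```cpp
#include <windows.h>
#include <powrprof.h>   // CallNtPowerInformation; link with PowrProf.lib
#include <stdio.h>
#include <vector>

// Layout as documented for CallNtPowerInformation's ProcessorInformation level.
typedef struct _PROCESSOR_POWER_INFORMATION {
    ULONG Number;
    ULONG MaxMhz;
    ULONG CurrentMhz;
    ULONG MhzLimit;
    ULONG MaxIdleState;
    ULONG CurrentIdleState;
} PROCESSOR_POWER_INFORMATION;

int main() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // One entry is returned per logical processor.
    std::vector<PROCESSOR_POWER_INFORMATION> info(si.dwNumberOfProcessors);
    LONG status = CallNtPowerInformation(
        ProcessorInformation, NULL, 0,
        info.data(), (ULONG)(info.size() * sizeof(PROCESSOR_POWER_INFORMATION)));
    if (status != 0)   // STATUS_SUCCESS is 0
        return 1;

    for (const PROCESSOR_POWER_INFORMATION& p : info)
        printf("CPU %lu: current %lu MHz, max %lu MHz, limit %lu MHz\n",
               p.Number, p.CurrentMhz, p.MaxMhz, p.MhzLimit);
    return 0;
}
```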
System performance SuperFetch caches frequently-used applications and documents in memory, and keeps track of when commonly used applications are usually loaded, so that they can be pre-cached; it also prioritizes the programs currently in use over background tasks. SuperFetch aims to negate the negative performance effect of having anti-virus or backup software run when the user is not at the computer. SuperFetch is able to learn at what time of day an application is typically used so that it can be pre-cached. ReadyBoost makes PCs running Windows Vista more responsive by using flash memory on a USB drive (USB 2.0 only), SD Card, Compact Flash, or other form of flash memory, in order to boost system performance. When such a device is plugged in, the Windows Autoplay dialog offers an additional option to use it to speed up the system; an additional "ReadyBoost" tab is added to the drive's properties dialog where the amount of space to be used can be configured. ReadyBoot uses an in-RAM cache to optimize the boot process if the system has 700 MB or more of memory. The size of the cache depends on the total RAM available, but is large enough to create a reasonable cache and yet allow the system the memory it needs to boot smoothly. ReadyBoot uses the same ReadyBoost service. ReadyDrive is the name Microsoft has given to its support for hybrid drives, a new design of hard drive developed by Samsung and Microsoft. Hybrid drives incorporate non-volatile memory into the drive's design, resulting in lower power needs, as the drive's spindles do not need to be activated for every write operation. Windows Vista can also make use of the NVRAM to increase the speed of booting and returning from hibernation. Windows Vista features Prioritized I/O, which allows developers to set application I/O priorities for read/write disk operations, similar to how application processes and threads can be assigned CPU priorities. I/O has been enhanced with I/O asynchronous cancellation and I/O scheduling based on thread priority. Background applications running with low-priority I/O do not disturb foreground applications. Applications like Windows Defender, Automatic Disk Defragmenter and Windows Desktop Search (during indexing) already use this feature. Windows Media Player 11 also supports this technology to offer glitch-free multimedia playback. The Offline Files feature, which maintains a client-side cache of files shared over a network, has been significantly improved. When synchronizing the changes in the cached copy to the remote version, the Bitmap Differential Transfer protocol is used so that only the changed blocks in the cached version are transferred, but when retrieving changes from the remote copy, the entire file is downloaded. Offline files are synchronized on a per-share basis and encrypted on a per-user basis, and users can force Windows to work in offline or online mode, or sync manually from the Sync Center. The Sync Center can also report sync errors and resolve sync conflicts. Also, when network connectivity is restored, file handles are redirected to the remote share transparently. Delayed service start allows services to start a short while after the system has finished booting and initial busy operations, so that the system boots up faster and performs tasks more quickly than before. The "Enable advanced performance" option for hard disks: When enabled, the operating system may cache disk writes as well as disk reads. 
In previous Windows operating systems, only the disk's internal disk caching, if any, was utilised for disk write operations when the disk cache was enabled by the user. Enabling this option causes Windows to make use of its own local cache in addition to this, which speeds up performance, at the expense of a little more risk of data loss during a sudden loss of power. Programmability .NET Framework 3.0 Windows Vista is the first client version of Windows to ship with the .NET Framework. Specifically, it includes .NET Framework 2.0 and .NET Framework 3.0 (previously known as WinFX) but not version 1.0 or 1.1. The .NET Framework is a set of managed code APIs that is slated to succeed Win32. The Win32 API is also present in Windows Vista, but does not give direct access to all the new functionality introduced with the .NET Framework. In addition, .NET Framework is intended to give programmers easier access to the functionality present in Windows itself. .NET Framework 3.0 includes APIs such as ADO.NET, ASP.NET, Windows Forms, among others, and adds four core frameworks to the .NET Framework: Windows Presentation Foundation (WPF) Windows Communication Foundation (WCF) Windows Workflow Foundation (WF) Windows CardSpace WPF Windows Presentation Foundation (codenamed Avalon) is the overhaul of the graphical subsystem in Windows and the flagship resolution independent API for 2D and 3D graphics, raster and vector graphics (XAML), fixed and adaptive documents (XPS), advanced typography, animation (XAML), data binding, audio and video in Windows Vista. WPF enables richer control, design, and development of the visual aspects of Windows programs. Based on DirectX, it renders all graphics using Direct3D. Routing the graphics through Direct3D allows Windows to offload graphics tasks to the GPU, reducing the workload on the computer's CPU. This capability is used by the Desktop Window Manager to make the desktop, all windows and all other shell elements into 3D surfaces. WPF applications can be deployed on the desktop or hosted in a web browser (XBAP). The 3D capabilities in WPF are limited compared to what's available in Direct3D. However, WPF provides tighter integration with other features like user interface (UI), documents, and media. This makes it possible to have 3D UI, 3D documents, and 3D media. A set of built-in controls is provided as part of WPF, containing items such as button, menu, and list box controls. WPF provides the ability to perform control composition, where a control can contain any other control or layout. WPF also has a built-in set of data services to enable application developers to bind data to the controls. Images are supported using the Windows Imaging Component. For media, WPF supports any audio and video formats which Windows Media Player can play. In addition, WPF supports time-based animations, in contrast to the frame-based approach. This delinks the speed of the animation from how slow or fast the system is performing. Text is anti-aliased and rendered using ClearType. WPF uses Extensible Application Markup Language (XAML), which is a variant of XML, intended for use in developing user interfaces. Using XAML to develop user interfaces also allows for separation of model and view. In XAML, every element maps onto a class in the underlying API, and the attributes are set as properties on the instantiated classes. All elements of WPF may also be coded in a .NET language such as C#. 
The XAML code is ultimately compiled into a managed assembly in the same way all .NET languages are, which means that the use of XAML for development does not incur a performance cost. WCF Windows Communication Foundation (codenamed Indigo) is a new communication subsystem to enable applications, in one machine or across multiple machines connected by a network, to communicate. WCF programming model unifies Web Services, .NET Remoting, Distributed Transactions, and Message Queues into a single Service-oriented architecture model for distributed computing, where a server exposes a service via an interface, defined using XML, to which clients connect. WCF runs in a sandbox and provides the enhanced security model all .NET applications provide. WCF is capable of using SOAP for communication between two processes, thereby making WCF based applications interoperable with any other process that communicates via SOAP. When a WCF process communicates with a non-WCF process, XML based encoding is used for the SOAP messages but when it communicates with another WCF process, the SOAP messages are encoded in an optimized binary format, to optimize the communication. Both the encodings conform to the data structure of the SOAP format, called Infoset. Windows Vista also incorporates Microsoft Message Queuing 4.0 (MSMQ) that supports subqueues, poison messages (messages which continually fail to be processed correctly by the receiver), and transactional receives of messages from a remote queue. WF Windows Workflow Foundation is a Microsoft technology for defining, executing and managing workflows. This technology is part of .NET Framework 3.0 and therefore targeted primarily for the Windows Vista operating system. The Windows Workflow Foundation runtime components provide common facilities for running and managing the workflows and can be hosted in any CLR application domain. Workflows comprise 'activities'. Developers can write their own domain-specific activities and then use them in workflows. Windows Workflow Foundation also provides a set of general-purpose 'activities' that cover several control flow constructs. It also includes a visual workflow designer. The workflow designer can be used within Visual Studio 2005, including integration with the Visual Studio project system and debugger. Windows CardSpace Windows CardSpace (codenamed InfoCard), a part of .NET Framework 3.0, is an implementation of Identity Metasystem, which centralizes acquiring, usage and management of digital identity. A digital identity is represented as logical Security Tokens, that each consist of one or more Claims, which provide information about different aspects of the identity, such as name, address etc. Any identity system centers around three entities — the User who is to be identified, an Identity Provider who provides identifying information regarding the User, and Relying Party who uses the identity to authenticate the user. An Identity Provider may be a service like Active Directory, or even the user who provides an authentication password, or biometric authentication data. A Relying Party issues a request to an application for an identity, by means of a Policy that states what Claims it needs and what will be the physical representation of the security token. The application then passes on the request to Windows CardSpace, which then contacts a suitable Identity Provider and retrieves the Identity. It then provides the application with the Identity along with information on how to use it. 
Windows CardSpace also keeps track of all Identities used, and represents them as visually identifiable virtual cards, accessible to the user from a centralized location. Whenever an application requests any identity, Windows CardSpace informs the user about which identity is being used and requires confirmation before it provides the requester with the identity. Windows CardSpace presents an API that allows any application to use Windows CardSpace to handle authentication tasks. Similarly, the API allows Identity Providers to hook up with Windows CardSpace. To any Relying Party, it appears as a service which provides authentication credentials. Other .NET Framework APIs Microsoft UI Automation (UIA) is a managed code API replacing Microsoft Active Accessibility to drive user interfaces. UIA is designed to serve both assistive technology and test-automation requirements. .NET Framework 3.0 also includes a managed code speech API which has similar functionality to SAPI 5 but is suitable for use by managed code applications. Media Foundation Media Foundation is a set of COM-based APIs to handle audio and video playback that provides DirectX Video Acceleration 2.0 and better resilience to CPU, I/O, and memory stress for glitch-free low-latency playback of audio and video. It also enables high color spaces through the multimedia processing pipeline. DirectShow and Windows Media SDK will be gradually deprecated in future versions. Search The Windows Vista Instant Search index can also be accessed programmatically using both managed and native code. Native code connects to the index catalog by using a Data Source Object retrieved from the Windows Vista shell's Indexing Service OLE DB provider. Managed code uses the MSIDXS ADO.NET provider with the index catalog name. A catalog on a remote machine can also be specified using a UNC path. The criteria for the search are specified using a SQL-like syntax. The default catalog is called SystemIndex and it stores all the properties of indexed items with a predefined naming pattern. For example, the name and location of documents in the system are exposed as a table with the column names System.ItemName and System.ItemURL respectively. An SQL query can directly refer to these tables and index catalogs and use the MSIDXS provider to run queries against them. The search index can also be used via OLE DB, using the CollatorDSO provider. However, the OLE DB provider is read-only, supporting only SELECT and GROUP ON statements. The Windows Search API can also be used to convert a search query written using Advanced Query Syntax (or Natural Query Syntax, the natural language version of AQS) to SQL queries. It exposes the GenerateSQLFromUserQuery method of the ISearchQueryHelper interface. Searches can also be performed using the search-ms: protocol, which is a pseudo protocol that lets searches be exposed as a URI. It contains all the operators and search terms specified in AQS. It can refer to saved search folders as well. When such a URI is activated, Windows Search, which is registered as a handler for the protocol, parses the URI to extract the parameters and perform the search. Networking Winsock Kernel (WSK) is a new transport-independent kernel-mode Network Programming Interface (NPI) that provides TDI client developers with a sockets-like programming model similar to the one supported in user-mode Winsock. 
While most of the same sockets programming concepts exist as in user-mode Winsock such as socket, creation, bind, connect, accept, send and receive, Winsock Kernel is a completely new programming interface with unique characteristics such as asynchronous I/O that uses IRPs and event callbacks to enhance performance. TDI is supported in Windows Vista for backward compatibility. Windows Vista includes a specialized QoS API called qWave (Quality Windows Audio/Video Experience), which is a pre-configured quality of service module for time dependent multimedia data, such as audio or video streams. qWave uses different packet priority schemes for real-time flows (such as multimedia packets) and best-effort flows (such as file downloads or e-mails) to ensure that real time data gets as little delays as possible, while providing a high quality channel for other data packets. Windows Filtering Platform allows external applications to access and hook into the packet processing pipeline of the networking subsystem. Cryptography Windows Vista features an update to the Microsoft Crypto API known as Cryptography API: Next Generation (CNG). CNG is an extensible, user mode and kernel mode API that includes support for Elliptic curve cryptography and a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It also integrates with the smart card subsystem by including a Base CSP module which encapsulates the smart card API so that developers do not have to write complex CSPs. Other features and changes Support for Unicode 5.0 A number of new fonts: Latin fonts: Calibri, Cambria, Candara, Consolas (monotype), Constantia, and Corbel. Segoe UI, previously used in Windows XP Media Center Edition, is also included, despite licensing issues with Linotype. Meiryo, supporting the new and modified characters of the JIS X 0213:2004 standard Non-Latin fonts: Microsoft JhengHei (Chinese Traditional), Microsoft YaHei (Chinese Simplified), Majalla UI (Arabic), Gisha (Hebrew), Leelawadee (Thai) and Malgun Gothic (Korean). Support for Adobe CFF/Type2 fonts, which provides support for contextual and discretionary ligatures. When accessing files with the ANSI character set, if the total path length is more than the maximum allowed 260 characters, Windows Vista automatically uses the alternate short names (which has an 8.3 limit) to shorten the total path length. In Unicode mode, this is not done as the maximum allowed length is 32,000. The long "Documents and Settings" folder is now just "Users", although a symbolic link called "Documents and Settings" is kept for compatibility. The paths of several special folders under the user profile have changed. New support for infrared receivers and Bluetooth 2.0 wireless standards; devices supporting these can transfer files and sync data wirelessly to a Windows Vista computer with no additional software. A non-administrator user can share only the folders under his user profile. In addition, all users have a Public folder which is shared, though an administrator can override this. Network Projection is used to detect and use network-connected projectors. It can be used to display a presentation, or share a presentation with the machine which hosts the projector. Users can do this over a network so multiple sources can be connected at different times without having to keep moving the sources or projectors around. The network projector can be connected to the network via wireless or cable (LAN) technology to make it even more flexible. 
Users can not only connect to the network projector remotely but can also remotely configure it. Network projectors are designed to transmit and display still images, such as photographs and slides, rather than high-bandwidth transmissions such as video streams. The projector can transmit video, but the playback quality is often poor. The binary %windir%\system32\NetProj.exe implements the Network Projection feature. New monitor configuration APIs make it possible to adjust the monitor's display area, save and restore display settings, calibrate color and use vendor-specific monitor features. More generally, Windows Vista is designed to be more resolution-independent than its predecessors, with a particular focus on higher resolutions and high-DPI displays. Windows Presentation Foundation and WPF applications are fully resolution-independent. Also, Transient Multimon Manager, a new feature that uses the monitor's EDID, enables automatic detection, setup and proper configuration of additional or multiple displays as they are attached and removed, on the fly. The settings are saved on a per-display basis when possible, so that users can move among multiple displays with no manual configuration. Windows Vista includes a WSD-WIA class driver that enables all devices compliant with Microsoft's Web Services for Scanner (WS-Scan) protocol to work with WIA without any additional driver or software. The Fax service and model are fully account-based. Fax-aware applications such as Windows Fax and Scan can send multiple documents in a single fax submission. The Fax Service API generates TIFF files for each document and merges them into a single TIFF file. Users can right-click a document in Windows Explorer and select Send to Fax Recipient. Windows Vista introduces the 'Assistance Platform' based on MAML. Help and Support is intended to be more meaningful and clear. Guided Help, or Active Content Wizard, is an automated tutorial and self-help system available with the release of Windows Vista in which a series of animated steps shows users how to complete a particular task. It highlights only the options and the parts of the screen that are relevant to the task, darkening the rest of the screen. A separate file format is used for ACW help files. The guided help SDK was replaced in Windows 7 by the Windows Troubleshooting Platform. All standard text editing controls and all versions of the 'RichEdit' control now support the Text Services Framework. Also, all Tablet/Ink API applications and all HTML applications which use Internet Explorer's Trident layout engine support the Text Services Framework. Windows Data Access Components (Windows DAC) replace MDAC 2.81, which shipped with Windows XP Service Pack 2. DFS Replication, the successor to File Replication Service, is a state-based replication engine for file replication among DFS shares, which supports replication scheduling and bandwidth throttling. It uses Remote Differential Compression to detect and replicate only the changes to files, rather than replicating entire files. DFS-R is also included with Windows Server 2003 R2. As with Windows XP Professional x64 Edition, in Windows Vista x64, old 16-bit Windows programs are not supported. If 16-bit software needs to be run in 64-bit Windows Vista, virtualization can be used for running a 32-bit operating system. See also Windows Server 2008 Notes and references External links Windows Vista Technical Library Roadmap Making Your Application a Windows Vista Application: The Top Ten Things to Do — from MSDN.
New Networking Features in Windows Server 2008 and Windows Vista A list of Vista ReadyBoost compatible devices Windows Vista
1364923
https://en.wikipedia.org/wiki/Electronic%20throttle%20control
Electronic throttle control
Electronic throttle control (ETC) is an automobile technology which electronically "connects" the accelerator pedal to the throttle, replacing a mechanical linkage. A typical ETC system consists of three major components: (i) an accelerator pedal module (ideally with two or more independent sensors), (ii) a throttle valve that can be opened and closed by an electric motor (sometimes referred to as an electric or electronic throttle body (ETB)), and (iii) a powertrain or engine control module (PCM or ECM). The ECM is a type of electronic control unit (ECU), which is an embedded system that employs software to determine the required throttle position by calculations from data measured by other sensors, including the accelerator pedal position sensors, engine speed sensor, vehicle speed sensor, and cruise control switches. The electric motor is then used to open the throttle valve to the desired angle via a closed-loop control algorithm within the ECM. The benefits of electronic throttle control are largely unnoticed by most drivers because the aim is to make the vehicle power-train characteristics seamlessly consistent irrespective of prevailing conditions, such as engine temperature, altitude, and accessory loads. Electronic throttle control also works 'behind the scenes' to improve the ease with which the driver can execute gear changes and to manage the dramatic torque changes associated with rapid accelerations and decelerations. Electronic throttle control facilitates the integration of features such as cruise control, traction control, stability control, and precrash systems and others that require torque management, since the throttle can be moved irrespective of the position of the driver's accelerator pedal. ETC provides some benefit in areas such as air-fuel ratio control, exhaust emissions and fuel consumption reduction, and also works in concert with other technologies such as gasoline direct injection. Failure modes There is no mechanical linkage between the accelerator pedal and the throttle valve with electronic throttle control. Instead, the position of the throttle valve (i.e., the amount of air entering the engine) is fully controlled by the ETC software via the electric motor. Simply opening or closing the throttle valve by sending a new signal to the electric motor would be open-loop control and would lead to inaccurate results. Thus, most if not all current ETC systems use closed-loop feedback, such as PID control, whereby the ECU commands the throttle to open or close by a certain amount, continually reads the throttle position sensor(s), and then makes appropriate adjustments to reach the desired amount of engine power. There are two primary types of throttle position sensors (TPS): a potentiometer and a non-contact Hall effect sensor (a magnetic device). A potentiometer is satisfactory for non-critical applications such as the volume control on a radio, but because it has a wiper contact rubbing against a resistance element, dirt and wear between the wiper and the resistor can cause erratic readings. This is an insidious failure mode because it may not produce any symptoms until the sensor fails completely. The more reliable solution is a magnetic coupling, which makes no physical contact and is therefore not subject to failure through wear.
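The closed-loop behaviour described above can be sketched in simplified form. The following is a hypothetical illustration only: the controller gains, the pedal-to-angle mapping and the update interval are invented for the example, and real ECU software runs on embedded hardware with redundant sensors, calibrated lookup tables and extensive safety monitoring.

```python
# Hypothetical, simplified sketch of closed-loop electronic throttle control.
# All constants are invented for illustration; this is not production ECU code.

class PIDController:
    """Basic PID controller driving the throttle plate toward a target angle."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_angle, measured_angle, dt):
        error = target_angle - measured_angle
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def desired_throttle_angle(pedal_position, engine_rpm):
    """Map pedal position (0..1) and engine speed to a target throttle angle.

    Real ECUs use calibrated maps that also consider vehicle speed, gear and
    traction/stability control requests; this linear mapping is only illustrative.
    """
    base = pedal_position * 90.0              # 0-90 degrees of throttle opening
    idle_bump = 5.0 if engine_rpm < 900 else 0.0
    return min(90.0, base + idle_bump)


# One iteration of the control loop (normally executed every few milliseconds):
pid = PIDController(kp=2.0, ki=0.5, kd=0.1)
measured_angle = 10.0                          # from the throttle position sensor(s)
target = desired_throttle_angle(pedal_position=0.3, engine_rpm=2500)
motor_command = pid.update(target, measured_angle, dt=0.01)
print(f"target={target:.1f} deg, motor command={motor_command:+.2f}")
```

In a real system this loop runs continuously, and a disagreement between redundant pedal sensors or an implausible throttle reading would typically trigger the limp-home behaviour described next.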
All cars having a TPS have what is known as a 'limp-home mode'. A car enters limp-home mode when the accelerator pedal sensors, the engine control computer and the throttle are no longer communicating with one another in a mutually intelligible way. The engine control computer shuts down the signal to the throttle position motor, and a set of springs in the throttle sets it to a fast idle: fast enough to get the transmission in gear but not so fast that driving may be dangerous. Software or electronic failures within the ETC have been suspected by some to be responsible for alleged incidents of unintended acceleration. A series of investigations by the U.S. National Highway Traffic Safety Administration (NHTSA) was unable to determine a definitive cause for all of the reported incidents of unintended acceleration in 2002 and later model year Toyota and Lexus vehicles. A February 2011 report issued by a team from NASA (which studied the source code and electronics for a 2005 Camry model, at the request of NHTSA) did not rule out software malfunctions as a potential cause. In October 2013, the first jury to hear evidence about Toyota's source code (from the software engineer and expert witness Michael Barr) found Toyota liable for the death of a passenger in a September 2007 unintended acceleration collision in Oklahoma. References Vehicle technology
34443174
https://en.wikipedia.org/wiki/Gummi%20%28software%29
Gummi (software)
Gummi is a LaTeX editor. It is a GTK+ application which runs on Linux systems. Features Gummi has many useful features needed to edit LaTeX source code, such as: Live preview: The pdf is shown without the need to compile it manually Snippets: LaTeX snippets can be configured Graphical insertion of tables and images Templates and wizards for new document creation Project management Bibliography management SyncTeX integration However, it lacks some features available in other editors: Compare (available in WinEdt, Vim-LaTeX (LaTeX-suite), TeXmacs...). Graphical insertion of mathematical symbols (available in Gnome LaTeX, TeXnicCenter, Kile, ...). Document structure summary (available in Gnome LaTeX, Kile, ...). Installation Gummi is available in the official repositories of various Linux distributions, such as Arch Linux, Debian, Fedora, Gentoo, and Ubuntu. See also List of text editors Comparison of text editors Comparison of TeX editors References External links TeX SourceForge projects Free TeX editors Linux TeX software TeX editors that use GTK TeX editors
5620714
https://en.wikipedia.org/wiki/Mockup
Mockup
In manufacturing and design, a mockup, or mock-up, is a scale or full-size model of a design or device, used for teaching, demonstration, design evaluation, promotion, and other purposes. A mockup may be a prototype if it provides at least part of the functionality of a system and enables testing of a design. Mock-ups are used by designers mainly to acquire feedback from users. Mock-ups address the idea captured in a popular engineering one-liner: "You can fix it now on the drafting board with an eraser or you can fix it later on the construction site with a sledge hammer". Applications Mockups are used as design tools virtually everywhere a new product is designed. Mockups are used in the automotive industry as part of the product development process, where dimensions, overall impression, and shapes are tested in a wind tunnel experiment. They can also be used to test consumer reaction. Systems engineering Mockups, wireframes and prototypes are not so cleanly distinguished in software and systems engineering, where mockups are a way of designing user interfaces on paper or in computer images. A software mockup will thus look like the real thing, but will not do useful work beyond what the user sees. A software prototype, on the other hand, will look and work just like the real thing. In many cases it is best to design or prototype the user interface before source code is written or hardware is built, to avoid having to go back and make expensive changes. Early layouts of a World Wide Web site or pages are often called mockups. A large selection of proprietary or open-source software tools are available for this purpose. Military acquisition Mockups are part of the military acquisition process. Mockups are often used to test human factors and aerodynamics, for example. In this context, mockups include wire-frame models. They can also be used for public display and demonstration purposes prior to the development of a prototype, as in the case of the Lockheed Martin F-35 Lightning II mock-up aircraft. Consumer goods Mockups are used in the consumer goods industry as part of the product development process, where dimensions, human factors, overall impression, and commercial art are tested in marketing research. Mockups help to visualise how all design decisions play together; because they are convincing and closely resemble the final product, a design can be revised easily at this stage rather than much later in production. Mockups also help in visualising package design projects in 3D and speed up approvals. Furniture and cabinetry Mockups are commonly required by designers, architects, and end users for custom furniture and cabinetry. The intention is often to produce a full-sized replica, using inexpensive materials in order to verify a design. Mockups are often used to determine the proportions of the piece, relating to various dimensions of the piece itself, or to fit the piece into a specific space or room. The ability to see how the design of the piece relates to the rest of the space is also an important factor in determining size and design. When designing a functional piece of furniture, such as a desk or table, mockups can be used to test whether they suit typical human shapes and sizes. Designs that fail to consider these issues may not be practical to use. Mockups can also be used to test color, finish, and design details which cannot be visualized from the initial drawings and sketches. Mockups used for this purpose can be on a reduced scale.
The cost of making mockups is often more than repaid by the savings made by avoiding going into production with a design which needs improvement. Software engineering The most common use of mockups in software development is to create user interfaces that show the end user what the software will look like without having to build the software or the underlying functionality. Software UI mockups can range from very simple hand-drawn screen layouts, through realistic bitmaps, to semi-functional user interfaces developed in a software development tool. Mockups are often used to create unit tests, where they are usually called mock objects. The main reason to create such mockups is to be able to test one part of a software system (a unit) without having to use dependent modules. The function of these dependencies is then "faked" using mock objects. This is especially important if the functions being simulated are difficult to obtain (for example, because they involve complex computation) or if the result is non-deterministic, such as the readout of a sensor. A common style of software design is service-oriented architecture (SOA), where many components communicate via protocols such as HTTP. Service virtualization and API mocks and simulators are examples of implementations of mockups, or so-called over-the-wire test doubles, in software systems that model dependent components or microservices in SOA environments. Mockup software can also be used for micro-level evaluation, for example to check a single function, and to derive results from the tests to enhance the product's power and usability as a whole. Architecture At the beginning of a project's construction, architects will often direct contractors to provide material mockups for review. These allow the design team to review material and color selections, and make modifications before product orders are placed. Architectural mockups can also be used for performance testing (such as water penetration at window installations, for example) and help inform the subcontractors how details are to be installed. See also Digital mockup Human-in-the-Loop Military dummy Operations research Pilot experiment References Product development Modeling and simulation Design
31642316
https://en.wikipedia.org/wiki/Metallica%20v.%20Napster%2C%20Inc.
Metallica v. Napster, Inc.
Metallica, et al. v. Napster, Inc. was a 2000 U.S. District Court for the Northern District of California case that focused on copyright infringement, racketeering, and unlawful use of digital audio interface devices. Metallica vs. Napster, Inc. was the first case in which an artist sued a peer-to-peer file sharing ("P2P") software company. Background Metallica is an American heavy metal band. The band was formed in 1981 in Los Angeles by vocalist/guitarist James Hetfield and drummer Lars Ulrich, and has been based in San Francisco for most of its career. Napster was a pioneering peer-to-peer file sharing Internet service, founded by Shawn Fanning, that emphasized sharing digitally encoded music as MP3 audio files. On April 13, 2000, Metallica filed a lawsuit against the file sharing company Napster. Metallica alleged that Napster was guilty of copyright infringement and racketeering, as defined by the Racketeer Influenced and Corrupt Organizations Act. The lawsuit was filed in the U.S. District Court for the Northern District of California. The case was filed soon after another case against Napster, A&M Records, Inc. v. Napster, Inc., which was brought by 18 large record companies. Metallica v. Napster, Inc. was the first highly publicized instance of an artist suing a P2P software company, and it encouraged several other high-profile artists to sue Napster. Case On July 11, 2000, Metallica drummer Lars Ulrich read testimony before the Senate Judiciary Committee accusing Napster of copyright infringement. He explained that, earlier that year, Metallica had discovered that a demo of "I Disappear", a song set to be released with the Mission: Impossible II soundtrack, was being played on the radio. Metallica traced the leak to a file on Napster's peer-to-peer file-sharing network, where the band's entire catalogue was available for free download. Metallica argued that Napster was enabling users to exchange copyrighted MP3 files. Metallica sought a minimum of $10 million in damages, at a rate of $100,000 per illegally downloaded song. Metallica hired NetPD, an online consulting firm, to monitor the Napster service. NetPD produced a list of 335,435 Napster users who were allegedly sharing the band's songs online in violation of copyright laws; the 60,000-page list was delivered to Napster's office. Metallica demanded that their songs be banned from file sharing, and that the users responsible for sharing their music be banned from the service. This led to over 300,000 users being banned from Napster, although software was released that simply altered the Windows registry and allowed users to rejoin the service under a different name. The lawsuit also named several universities, seeking to hold them accountable for allowing students to illegally download music on their networks, including the University of Southern California, Yale University, and Indiana University. Outcome In March 2001, the federal district court judge presiding over the case, Marilyn Hall Patel, issued a preliminary injunction in Metallica's favor pending the case's resolution. The injunction, which was substantially identical to one ordered in the A&M case, ordered Napster to place a filter on the program within 72 hours or be shut down. Napster was forced to search its system and remove all copyrighted songs by Metallica. Other artists, including Dr. Dre, as well as a number of record companies and the RIAA, subsequently filed their own lawsuits, which led to the termination of an additional 230,142 Napster accounts.
On July 12, 2001, Napster reached a settlement with Metallica and Dr. Dre after Bertelsmann AG (BMG) became interested in purchasing the rights to Napster for $94 million. The settlement required that Napster block music being shared from any artist that did not want their music to be shared. This $94 million deal was blocked when Judge Peter Walsh ruled that the deal was tainted because Napster Chief Executive Officer Konrad Hilbers, a former Bertelsmann executive, had one foot in the Napster camp and one foot in the Bertelsmann camp. Napster was forced to file for Chapter 7 and liquidate its assets. Napster The Napster program was originally a way for nineteen-year-old Shawn Fanning and his friends throughout the country to trade music in the MP3 format. Fanning and his friends decided to try to increase the number of files available and involve more people by creating a way for users to browse each other’s files and to talk to each other. Napster went live in September 1999 and gained instant popularity. Napster’s number of registered users was doubling every 5–6 weeks. In February 2001, Napster had roughly 80 million monthly users compared to Yahoo’s 54 million monthly users. At its peak Napster facilitated nearly 2 billion file transfers per month and had an estimated net worth of between $60 million and $80 million. Fanning designed Napster as a searching and indexing program, meaning that files were not downloaded from Napster’s servers but rather from a peer’s computer. Users had to download a program, MusicShare, which would allow them to interact with Napster’s servers. When users would log onto their Napster account, MusicShare would read the names of the MP3 files that the user had made public and would then communicate with Napster’s servers so a complete list of all public files from all users could be compiled. Once logged into Napster, a user would simply enter the name of the file they wanted to download and hit the search button to view a list of all the sources that contained the desired file. The user would then click the download button and the Napster server would communicate with the host's MusicShare browser to facilitate a connection and begin the download. This method of file sharing is referred to as peer-to-peer file sharing. P2P Peer-to-peer is a distributed application architecture that partitions tasks or workloads between cooperating users. By joining one of these peer-to-peer networks of nodes, users allow a portion of their resources, such as processing power, disk storage or network bandwidth, to be directly available to other network participants. By utilizing this type of application structure, any MP3s, videos, or other files located on a user's computer are instantly made available to other Napster users for download. This is one of the major reasons Napster was so popular: it was easy to use and had a large number of files available for download. Being one of the first of its kind, Napster made a significant contribution to the popularity of the peer-to-peer application structure. Many other software applications followed in Napster's footsteps by using this model, including BearShare, Gnutella and Freenet, as well as today's major torrent applications such as BitTorrent. Issues surrounding P2P software One of the largest issues with P2P software is the public assumption that users use these programs strictly for illegal sharing of copyrighted files. There are, however, many other uses for P2P software.
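The centralized-index model described in the Napster section above can be illustrated with a short, hypothetical sketch; the class and method names are invented for the example and do not correspond to Napster's actual software. The index server only maps file names to the peers sharing them, while the transfer itself happens directly between peers.

```python
# Toy model of a Napster-style centralized index (illustrative only).

class CentralIndex:
    def __init__(self):
        self.files = {}            # file name -> set of peer addresses sharing it

    def register(self, peer, shared_files):
        """Called when a client logs on and reports its public files."""
        for name in shared_files:
            self.files.setdefault(name, set()).add(peer)

    def search(self, name):
        """Return all peers currently offering the requested file."""
        return sorted(self.files.get(name, set()))

index = CentralIndex()
index.register("peer-a:6699", ["track1.mp3", "track2.mp3"])
index.register("peer-b:6699", ["track2.mp3"])

print(index.search("track2.mp3"))   # ['peer-a:6699', 'peer-b:6699']
# The downloading client would then open a direct connection to one of these
# peers; the index server never stores or transfers the file itself.
```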
Some file sharing clients have been known to expose confidential personal information, and some come bundled with spyware, malware, or viruses that can run insecure, unsigned code and allow remote access to any file on the user's computer. Artists using P2P for promotion The relationship between music artists and P2P file sharing software is not always about copyright infringement. A 2000 study showed that Napster users who downloaded free music actually spent more money on music. Another study proposed that by downloading free music, users are able to sample new music and discover new tastes, which may lead to increased sales. Several artists also supported Napster and used the service for promotion. In 2000, Limp Bizkit signed a $1.8 million deal with Napster to promote 23 free concerts. Implications Many people worried that the ruling in Metallica v. Napster would affect the future of P2P file sharing and other industries that stemmed from the growing popularity of MP3 music. In RIAA v. Diamond, the Recording Industry Association of America sued Diamond Multimedia Systems for producing a portable MP3 player called the Rio. The RIAA claimed that the Rio did not comply with the Audio Home Recording Act (AHRA), and thus its production should be halted. The Ninth Circuit Court of Appeals ruled that the Rio was not covered by the AHRA and that it was designed simply to enable users to easily listen to MP3 files that were already stored on their personal computers or on other personal storage devices. In the earlier case of Sony Corp. of America v. Universal City Studios, Inc., it was ruled that Sony's VCR, which allowed users to record live television onto cassette tapes to be viewed at a later time, did not violate copyright law. References United States copyright case law United States file sharing case law Peer-to-peer file sharing Metallica
18026384
https://en.wikipedia.org/wiki/MusicMaster%20%28software%29
MusicMaster (software)
MusicMaster is music scheduling software produced by A-Ware Software (also known as MusicMaster, Inc.) of Dallas, Texas, United States, and used by radio, Internet and television stations. History MusicMaster was created in 1983 by Joseph Knapp, an engineer, radio programmer and on-air personality for several stations in Ohio and Wisconsin, United States. Knapp believed that the decision-making process of selecting music for airplay could best be done using computers. In 1983, Knapp began writing a program he called Revolve, meant to improve the rotation of songs for airplay. Previously, music scheduling had been done by hand, as disc jockeys selected a song card from the front of a stack, played the track, and returned the card to the back of the stack to ensure that it was equally rotated with all available songs. With the use of computers, better decisions could be made based on a set of programmable rules. For instance, the rule of artist separation would ensure that two songs by the same artist were separated by a given amount of time. After selling the first copy of MusicMaster to WCXI-FM/Detroit, Knapp rewrote the program for the Radio Shack TRS-80 and then for the IBM PC. By 1985, it was licensed for distribution by Tapscan and sold as MusicScan. When a legal dispute ended A-ware's relationship with Tapscan in 1994, Knapp formed his own company and distributed the program as MusicMaster. In 2001, MusicMaster was ported to Microsoft Windows. Overview The heart of MusicMaster is a music database, which is custom built to the user's specifications. This database can include information such as song title, artist, trivia, and any other information the user needs to identify each song. Users can also add attributes to each song to identify the type, era, tempo, mood and other factors that will be used by MusicMaster to control the flow, balance and mix of the scheduled playlists using scheduling rules. The user creates rules based on these attributes using MusicMaster's Rule Tree system. The user assigns the songs to any number of different categories. These Categories are then requested at specific times and in patterns throughout the day using any number of Format Clocks. The Categories allow some songs to rotate, or repeat, more often than others in the library. Usually, the newer and more popular songs are heard more often, while the older established hits are heard less frequently. In addition to scheduling rules, MusicMaster also offers an Optimum Goal Scheduling system that selects the best possible song based on a combination of weighted scores. These scores are a measure of the variance from the calculated ideal music rotation as indicated by the rotational mathematics of the music library and Format Clock assignments. The song closest to "perfect" is the one selected for airplay in each position on the playlist. Users can export the MusicMaster playlists to files that are compatible with most radio and television automation systems, such as WideOrbit's WO Automation for Radio, as well as many popular media players. The balance and rotation of non-music elements, such as jingles and sweepers, are also controlled by MusicMaster. Users can automatically match sweepers and jingles to specific song attributes such as artist, tempo, or genre.
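The rule-and-goal approach described above can be illustrated with a small, hypothetical sketch. The attribute names, the artist-separation rule and the scoring formula below are invented for the example and do not represent MusicMaster's actual data model or algorithms; they simply show how a rule pass can filter candidates before a goal score picks the song closest to its ideal rotation.

```python
# Toy illustration of rule-based music scheduling (not MusicMaster's algorithm).

from dataclasses import dataclass

@dataclass
class Song:
    title: str
    artist: str
    category: str            # e.g. "Current" or "Gold", as assigned by the user
    hours_since_played: float

ARTIST_SEPARATION_HOURS = 1.0                      # rule: same artist at least 1 h apart
IDEAL_TURNOVER = {"Current": 3.0, "Gold": 8.0}     # ideal hours between plays, per category

def schedule_next(category, library, hours_since_artist_played):
    # 1. Rule pass: keep songs in the requested clock category whose artist
    #    has not aired within the separation window.
    candidates = [
        s for s in library
        if s.category == category
        and hours_since_artist_played.get(s.artist, 999.0) >= ARTIST_SEPARATION_HOURS
    ]
    if not candidates:
        return None        # a real scheduler would relax rules or flag an unfilled slot
    # 2. Goal pass: prefer the song whose rest period most exceeds the ideal turnover.
    return max(candidates, key=lambda s: s.hours_since_played - IDEAL_TURNOVER[category])

library = [
    Song("Song A", "Artist 1", "Current", hours_since_played=2.5),
    Song("Song B", "Artist 2", "Current", hours_since_played=4.0),
    Song("Song C", "Artist 2", "Gold", hours_since_played=12.0),
]
recently_heard = {"Artist 1": 0.5}                 # Artist 1 aired 30 minutes ago
print(schedule_next("Current", library, recently_heard).title)   # -> Song B
```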
MusicMaster also offers a Nexus Server that allows third-party developers to directly access and update the MusicMaster scheduling intelligence, database, and playlists through their own software systems for real-time synchronization of data. Broadcast software products such as traffic and billing, research analysis and web services may use the Nexus Server to interact directly with a MusicMaster database. In April 2016, MusicMaster introduced MusicMaster CS, a client-server version of its music scheduling system. The complete product line also includes MusicMaster Solo Edition, designed for individual radio stations in small markets, and MusicMaster Personal Edition, aimed at hobby Internet broadcasters, retail background music, and personal entertainment. In September 2016, A-Ware Software relocated to Dallas, Texas. With that move, the corporate name was officially changed to MusicMaster, Inc. The company maintained the original name, A-Ware Software, as a Texas corporate alias. The corporate headquarters are located at 8330 Lyndon B Johnson Freeway, Suite B1050, Dallas, Texas, USA 75243. The main phone number is 469-717-0100. Clientele MusicMaster generates schedules for radio stations around the world, as well as for music television networks, satellite broadcasters, cable television channels, Internet streams, syndicated programs, restaurants and nightclubs. Clients include SiriusXM Satellite Radio, Saga Communications, Inc., MTV Networks / Viacom, MusicChoice, Univision, Entercom, Emmis Communications, E. W. Scripps Company, Cox Media Group, Spanish Broadcasting, Midwest Communications, and others. References External links Digital radio Broadcast engineering
24336835
https://en.wikipedia.org/wiki/Sipdroid
Sipdroid
Sipdroid is a voice over IP mobile app for the Android operating system using the Session Initiation Protocol. Sipdroid is free and open source software released under the GPL-3.0-or-later license. History The Sipdroid open-source project was started in SVN on March 12, 2009, by the project author pmerle71. It reached version 1.0 on July 12 of the same year. More recent major releases are: 1.5 (May 29, 2010) with improved video quality, 2.0 (November 18, 2010) with the ability to link to a Google Voice account, 2.2 (March 25, 2011) with the ability to send video SMS messages, 2.7 (May 21, 2012) with improved low-latency capability, and 3.0 (April 22, 2013) with support for TLS. In 2010-2011 it gained popularity partially because of its ability to work with Google's Google Voice service, making calls to traditional telephone numbers while only using the data network. However, after Google Voice removed the ability to connect over SIP on March 8, 2011, this functionality was no longer available. Since then, Sipdroid has been used with many different native SIP deployments and is provided as the default Android SIP client by some vendors. As of January 5, 2014, it is listed on the Google Play store as having 1,000,000-5,000,000 installations and has been reviewed by nearly 10,000 people. Fork Lumicall is a fork of Sipdroid by Daniel Pocock that has undergone significant extensions, adding support for encryption (Transport Layer Security, SRTP, ZRTP), Push-to-talk, ENUM dialing and other enhancements. First released on 5 February 2012, Lumicall is distributed via Google Play and F-Droid. In 2012, Lumicall's real-time monitoring service, based on gmetric4j, was featured in the book Monitoring with Ganglia. Lumicall was featured in the main track at FOSDEM 2013 during the event Free, Open, Secure and Convenient Communications, where the lead developer, Daniel Pocock, was part of a panel discussion on this topic for the free software community. Lumicall supports Interactive Connectivity Establishment, the IETF proposed successor to STUN for users behind NAT. Lumicall interfaces with Android's default dialer application and optionally prompts the user to make an outgoing call using VoIP or the GSM/3G network. Features Two SIP accounts can be used simultaneously Supports STUN for users behind Network address translation (NAT) Video calls (limited support) Sipdroid interfaces with Android's default dialer application and optionally prompts the user to make an outgoing call using Sipdroid or the GSM/3G network. See also Comparison of VoIP software List of SIP software Mobile VoIP References External links Lumicall at Google Play OpenTelecoms (general information about federated VoIP and building infrastructure for Lumicall users) Free and open-source Android software Free VoIP software
68318414
https://en.wikipedia.org/wiki/PRODAFT
PRODAFT
PRODAFT (Proactive Defense Against Future Threats) is a Swiss cyber threat intelligence company primarily known for its global fight against CSAM (Child Sexual Abuse Material) and cyber crime. It was founded in 2012 and currently has operational offices in Switzerland, the Netherlands, and Turkey. PRODAFT has published numerous threat intelligence reports publicly regarding high-end cybercrime groups such as Fin7/Carbanak, Silverfish, LockBit, FluBot, and others. As part of the "Top 100 Swiss Startups" event by VentureLab, PRODAFT was named the "Public's Choice" in the field of security in Switzerland. In 2020, PRODAFT launched a website to track down pedophiles on the web and show their locations in real time. Pedophiles were shown on a world map while they were downloading or watching CSAM online. In 2021, the map was removed from the site because it was not possible for authorized bodies and agencies to use any of the personal data (IP address and location) indicated by the PEDOMAP project for the purpose of locating or investigating a suspect, due to policy regulations and rules of engagement in the respective countries. References 2012 establishments in Switzerland Companies of Turkey Companies of Switzerland Companies of the Netherlands Cybercrime Child pornography crackdowns Software companies established in 2012 Information technology companies of Switzerland Information technology companies of Turkey Information technology companies of the Netherlands Computer security software companies Computer surveillance
2793460
https://en.wikipedia.org/wiki/SpeedTree
SpeedTree
SpeedTree is a group of vegetation programming and modeling software products developed and sold by Interactive Data Visualization, Inc. (IDV) that generate virtual foliage for animations and architecture, and in real time for video games and demanding real-time simulations. SpeedTree has been licensed to developers of a range of video games for Microsoft Windows and the Xbox and PlayStation console series since 2002. SpeedTree has been used in more than 40 major films since its release in 2009, including Iron Man 3, Star Trek Into Darkness, Life of Pi and Birdman, and was used to generate the lush vegetation of Pandora in Avatar. SpeedTree was awarded a Scientific and Technical Academy Award in 2015, presented to IDV founders Michael Sechrest and Chris King, and Senior Engineer Greg Croft. History SpeedTree was conceptualized at IDV around 2000, and originated due to the firm's lack of satisfaction with third-party tree-generation software on the market. The initial version of SpeedTreeCAD (CAD standing for "computer-aided design") was developed by IDV for a real-time golf simulation. Although backers pulled out of the golf project, IDV refined the CAD software as a 3D Studio Max plug-in for an animated architectural rendering, dubbing it SpeedTreeMAX. SpeedTreeMAX was released in February 2002, and toward the end of 2002, IDV released SpeedTreeRT, a real-time foliage/tree middleware SDK, which allowed automatic levels of foliage detail, real-time wind effects, and multiple lighting options. IDV eventually released plug-ins for Maya as well, appropriately named SpeedTreeMAYA. In early 2009, IDV discontinued the SpeedTreeMAX and SpeedTreeMAYA plugins, replacing them with the SpeedTree Modeler and Compiler products. IDV released SpeedTree 5 in July 2009, a version representing a "complete re-engineering" of the software and the first versions of SpeedTree enabling hand modeling and editing of vegetation models: SpeedTree Modeler (replacing SpeedTreeCAD), SpeedTreeSDK (replacing SpeedTreeRT) and SpeedTree Compiler, which prepares SpeedTree files for real-time rendering. SpeedTree Cinema was first released by IDV in 2009, based on version 5 technology. SpeedTree for Games (version 6) was released on November 7, 2011, and was essentially a re-branded version of SpeedTree 6 (Modeler + Compiler). The product was identified as SpeedTree for Games to distinguish it from other products not meant for gaming/real-time use. SpeedTree Architect was released on October 15, 2012, and is designed for architectural 3D CAD use and 3D fly-throughs. IDV released updated versions of SpeedTree Cinema, SpeedTree Studio and SpeedTree Architect in November 2013. IDV released SpeedTree v7 for Unreal Engine 4 in July 2014. IDV released SpeedTree v7 for Unity 5 on the new engine version's launch date, in March 2015. IDV released SpeedTree for Games v7 on April 16, 2015. IDV and three of its engineers received a Scientific and Technical Academy Award in 2015, for their SpeedTree Cinema product suite. IDV was acquired by Unity Technologies in July 2021. SpeedTree 9 was released on January 10, 2022. This version added new features such as freehand editing for branch bending; new branch generator options including zig-zag, jink and texture map skew correction; and a Mesh Converter to help turn 3D trunk and branch scans into full tree models.
Other additions include atlas control, HDRI lighting, USD export, and a material for the backside of leaf geometry. Products Suites SpeedTree Cinema was released by IDV in 2009, and saw its first major use in Avatar by James Cameron. SpeedTree Cinema is designed for use in the film industry, and generates high-resolution meshes and high-quality textures for Autodesk 3ds Max, Autodesk Maya and Cinema4D. The Cinema edition includes SpeedTree Modeler and the complete Tree Model Library designed by IDV, while with some other suites tree packs must be purchased separately. Several members of the SpeedTree line can simulate animated growth of trees and plants and seasonal changes, and can export data for animated wind effects. SpeedTree Studio was released by IDV in 2009 as a less expensive companion to SpeedTree Cinema. It does not include all Cinema features, nor the complete Tree Model Library. SpeedTree Architect was released in 2012 and is designed for use in 3D architectural CAD. It generates meshes compatible with typical architectural applications such as Autodesk 3ds Max, Autodesk Maya and Rhino. The Architect edition also exports normal maps and UV maps, for physically accurate rendering engines such as V-Ray and mental ray. SpeedTree for Games is the edition of SpeedTree for video game development, contrasting with the Subscription edition offered to users of the Unity game development engine and certain versions of the Unreal Engine 4 engine. The Games edition includes the Modeler, Compiler, and SDK. This edition permits game developers to integrate SpeedTree runtime technology into any game engine of their choice. Meshes generated with the system are low poly, with multiple levels of detail, use texture atlases, and are typically stored in an efficient binary format. SpeedTree Subscription Edition is a low-cost edition of SpeedTree Modeler and Runtime, targeted at independent game studios. The licensing fee is a US$19 monthly charge, as well as additional charges for tree packs. Subscribers get access to the SpeedTree editor and the ability to generate 3D models of trees and plants, which are exclusively usable with either Unreal Engine 4 or Unity, depending on the license. Subscribers can download additional tree model packs from the Model Library, and pricing varies between packs. Components SpeedTree Modeler is a Windows-based specialized modeling tool for designing foliage. The modeler features a combination of procedural tree generation and hand-editing tools to draw trees or transform individual tree parts. Procedural tree generation uses configuration parameters such as branch length, branching angles and bark texture to generate a tree in a variety of formats; a toy illustration of this parameter-driven approach is sketched below. Newer versions support a drag-and-drop interface that automatically blends branch intersections and handles branch collisions. SpeedTree Compiler is software that enables the creation of efficient tree models for use in real-time rendering or video games. It generates texture atlases and compiles and optimizes tree models for real-time use. SpeedTree SDK is a multi-platform C++ SDK that efficiently handles rendering of SpeedTree-generated trees and forests. The engine is designed to integrate and operate within a larger game engine, with ready-made support for Unreal Engine, Unity and OGRE. The engine contains optimized systems to cull off-screen trees, and to determine level of detail for on-screen trees. Full source code is available to licensees for use in video games and other real-time applications, and modification of the engine is supported.
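As the toy illustration of parameter-driven procedural branching promised above (this sketch is hypothetical and does not represent SpeedTree's actual algorithms or file formats), a handful of parameters such as branching angle, segment length and recursion depth are enough to grow a simple tree skeleton:

```python
# Toy recursive branching sketch (illustrative only; not SpeedTree code).

import math
import random

def grow(x, y, angle_deg, length, depth, spread_deg=25.0, shrink=0.7):
    """Return a list of 2D branch segments ((x1, y1), (x2, y2))."""
    if depth == 0 or length < 0.05:
        return []
    angle = math.radians(angle_deg)
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments = [((x, y), (x2, y2))]
    # Two child branches, angled off the parent and slightly randomized,
    # each shorter than the parent by the shrink factor.
    for sign in (-1, 1):
        child_angle = angle_deg + sign * (spread_deg + random.uniform(-5.0, 5.0))
        segments += grow(x2, y2, child_angle, length * shrink, depth - 1)
    return segments

random.seed(42)
tree = grow(0.0, 0.0, 90.0, length=1.0, depth=6)
print(f"{len(tree)} branch segments generated")
```

Changing the spread angle, shrink factor or recursion depth produces visibly different tree shapes, which is the essence of the parameter-driven workflow; a production tool layers texturing, level-of-detail generation and wind animation on top of such a skeleton.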
The engine is built to work with Microsoft Windows, Mac OS X, Xbox, PlayStation and PlayStation Vita. Partners IDV is a licensed middleware partner with PlayStation 3, PlayStation 4, Xbox 360 and Xbox One. IDV has partnered with Epic Games in order to integrate the software with Epic's Unreal Engine 4 and Unreal Engine 3 and the free UDK engine released in November 2009. Partnerships have also been formed between IDV and BigWorld Tech, the Vision Engine by Havok, Multiverse Network, the Gamebryo engine by Emergent Game Technologies and the OGRE open-source rendering engine by Torus Knot. Awards SpeedTree won a Scientific and Technical Academy Award in 2015. SpeedTree received a Primetime Emmy Engineering Award in October 2015. 2015 Develop Industry Excellence Award Winner, in the Design & Creativity Tool category. Develop, a UK-based magazine and website serving the game industry, first recognized industry achievements in a variety of categories in 2003. 2016 Develop Industry Excellence Awards Finalist, in the Design & Creativity Tool/Tech & Services category. 2008 Develop Industry Excellence Awards Finalist, in the Technology & Services, Tools Providers category. 2005 Frontline Award, Middleware category. This award program, sponsored by Game Developer magazine, recognizes exceptional game development tools. Frontline Award Finalist: 2003, 2004, 2006, 2009, 2012 MT2 Top 100: 2003, 2004, 2005, 2006, 2008, 2010, 2011, 2012, 2013. The MT2 Top 100 awards are sponsored by Kerrigan Media International and Military Training Technology to recognize companies and technologies that have made a significant impact in the military training industry. Applications Video game industry SpeedTree for Games was licensed for its first video games, including The Elder Scrolls IV: Oblivion, in December 2002. SpeedTree has been licensed for PC and next-generation console titles in a wide variety of genres. Studios that have used SpeedTree, or published games featuring the technology, include: Activision, Bethesda Softworks, BioWare, Blue Byte, Bungie, Capcom, CD Projekt Red, The Creative Assembly, DICE, Electronic Arts, Epic Games, Frontier Developments, Funcom Productions, Gearbox Software, Hello Games, Insomniac, Koei, Larian Studios, Massive Entertainment, Microsoft Game Studios, Namco Bandai, NCsoft, Ninja Theory, Obsidian Entertainment, Epic Games Poland, Piranha Bytes, Rockstar Games, Rocksteady Studios, SEGA, Sony Computer Entertainment, Sony Interactive Entertainment, Square Enix, Sucker Punch, Take-Two Interactive, Ubisoft, Volition, Warner Bros. and WEBZEN. Film and animation industry Following the release of SpeedTree Cinema in 2009, SpeedTree saw its first major cinematic use in 2009's Avatar, in which the technology provided the vegetation for the flyover of the planet Pandora in the first frames of the movie, as well as other scenes. Real-time applications SpeedTree is being used in the following real-time projects and offerings: America's Army project, both the America's Army game and in non-public applications used for training, simulation, education, virtual prototyping and outreach An optional foliage module with the Vega Prime visualization product line. Vega Prime is a 3D visual simulation software package used by the global military industry and in other game and non-game markets. An Apache attack helicopter FLIR simulation developed for the US Army by Camber Corp.
for pilot training under night flying conditions The Expresso Fitness Virtual Reality Bike, a cardio exercise system developed by Expresso Fitness and sold to gyms and home users A combat simulation developed by Emergent Game Technologies for the US Department of Defense A project under development by the Germany-based division of European Aeronautic Defence and Space (EADS) The Forest Fire project, developed by the Media Convergence Laboratory (MCL) at the University of Central Florida. The project is helping to determine if a virtual reality presentation of wildfires can influence local residents to invest in prescribed burns and other protective efforts. See also Game engine References External links SpeedTree official website Interactive Data Visualization, Inc. Official Website - the website of the developers of SpeedTree. Gamasutra feature article on SpeedTree 2002 software 3D graphics software Middleware for video games Video game engines Virtual reality
37110580
https://en.wikipedia.org/wiki/PlanGrid
PlanGrid
PlanGrid is construction productivity software. The platform provides real-time updates and file synchronization over Wi-Fi and cellular networks. PlanGrid replaces paper blueprints, brings the benefits of version control to construction teams, and is a collaborative platform for sharing construction information such as field markups, progress photos and issue tracking. PlanGrid is a venture capital-backed company based in San Francisco, California, that creates construction software for the iPad, Android tablets, iPhone, and Windows, allowing field workers to store, view, and communicate construction blueprints. Its capability to deal quickly with blueprint changes is aimed at helping the construction industry deal with the costs of paper building plans. The company is headquartered in San Francisco, and was founded in December 2011 by a group of construction engineers and software engineers. The initial product, PlanGrid for iPad, launched in March 2012; its iPhone app launched in September 2012, and its Android app launched in May 2014. PlanGrid's CEO, Tracy Young, was named to Fast Company's "Most Creative People in Business" in 2015. The company's board of directors includes former Salesforce COO George Hu, Sequoia Capital partner Doug Leone, and former Autodesk CEO Carol Bartz. Autodesk announced plans in November 2018 to acquire PlanGrid for US$875 million. The acquisition was completed on December 20, 2018. Fundraising history The company's seed-stage investors include Y Combinator, Sam Altman, Paul Buchheit and 500 Startups. In May 2015, PlanGrid raised an $18 million Series A round from Sequoia Capital. In November 2015, PlanGrid raised a $40 million Series B round led by Tenaya Capital. References External links Technology companies of the United States Software companies based in California Construction software Construction documents Autodesk acquisitions 2018 mergers and acquisitions Software companies of the United States 2012 establishments in California Software companies established in 2012 American companies established in 2012
53467437
https://en.wikipedia.org/wiki/Ying%20Zou
Ying Zou
Ying Zou is a Canadian computer scientist. She is a professor in the Department of Electrical and Computer Engineering at Queen's University and a Canada Research Chair (Tier I) in Software Evolution. She was named IBM CAS Research Faculty Fellow of the Year in 2014 and received the IBM Faculty Award in 2007 and 2008. Education Zou has a Bachelor of Engineering (B.Eng.) degree from Beijing Polytechnic University, a Master of Engineering (M.Eng.) degree from the Chinese Academy of Space Technology, and a Ph.D. in Electrical and Computer Engineering from the University of Waterloo. Career Zou is a professor and the lead of the Software Evolution and Analytic Lab (SEAL) at Queen's University. Her research interests include software engineering, software evolution, software analytics, empirical software studies and service-oriented architecture (SOA). References Living people Queen's University at Kingston faculty Canadian computer scientists University of Waterloo alumni Canadian people of Chinese descent Year of birth missing (living people)
40874736
https://en.wikipedia.org/wiki/The%20Linux%20Programming%20Interface
The Linux Programming Interface
The Linux Programming Interface: A Linux and UNIX System Programming Handbook is a book written by Michael Kerrisk, which documents the APIs of the Linux kernel and of the GNU C Library (glibc). It covers a wide array of topics dealing with the Linux operating system and operating systems in general, as well as providing a brief history of Unix and how it led to the creation of Linux. It provides many samples of code written in the C programming language, and provides learning exercises at the end of many chapters. Kerrisk is a former writer for the Linux Weekly News and the current maintainer for the Linux man pages project. The Linux Programming Interface is widely regarded as the definitive work on Linux system programming and has been translated into several languages. Jake Edge, writer for LWN.net, in his review of the book, said "I found it to be extremely useful and expect to return to it frequently. Anyone who has an interest in programming for Linux will likely feel the same way." Federico Lucifredi, the product manager for the SUSE Linux Enterprise and openSUSE distributions, also praised the book, saying that "The Linux Programming Encyclopedia would have been a perfectly adequate title for it in my opinion" and called the book "…a work of encyclopedic breadth and depth, spanning in great detail concepts usually spread in a multitude of medium-sized books…" Lennart Poettering, the software engineer best known for PulseAudio and systemd, advises people to "get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software". At FOSDEM 2016 Michael Kerrisk, the author of The Linux Programming Interface, explained some of the issues with the Linux kernel's user-space API he and others perceive. It is littered with design errors: APIs which are non-extensible, unmaintainable, overly complex, limited-purpose, violations of standards, and inconsistent. Most of those mistakes can't be fixed because doing so would break the ABI that the kernel presents to user-space binaries. See also Linux kernel interfaces Programming Linux Games References External links The Linux Programming Interface at the publisher's (No Starch Press) Website The Linux Programming Interface Description at Kerrisk's Website API changes The Linux Programming Interface Traditional Chinese Translation Computer programming books Books about Linux 2010 non-fiction books No Starch Press books Interfaces of the Linux kernel
65092839
https://en.wikipedia.org/wiki/AC%2000-69
AC 00-69
The Advisory Circular AC 00-69, Best Practices for Airborne Software Development Assurance Using EUROCAE ED-12( ) and RTCA DO-178( ), initially issued in 2017, supports application of the active revisions of ED-12C/DO-178C and AC 20-115. The AC does not state FAA guidance, but rather provides information in the form of complementary "best practices". Notably, the guidance of FAA Order 8110.49 regarding "Software Change Impact Analysis" was removed in Rev A of that notice in 2018. The best practices that AC 00-69 now describes for Software Change Impact Analysis are much reduced and less prescriptive than what was removed from 8110.49. This AC clarifies that Data Coupling Analysis and Control Coupling Analysis are distinct activities and that both are required for satisfying objective A-7 (8) of ED-12C/DO-178C and ED-12B/DO-178B, adding that data and control coupling analyses rely upon detailed design specification of interfaces and dependencies between components. The AC also recommends that error handling (how the software avoids, detects, and handles runtime error) should be defined in explicit, reviewed design specifications rather than implemented ad hoc in the source code. References External links AC 00-69', Best Practices for Airborne Software Development Assurance Using EUROCAE ED-12( ) and RTCA DO-178( )'' Avionics Safety Software requirements RTCA standards Computer standards
28242998
https://en.wikipedia.org/wiki/OpenChrom
OpenChrom
OpenChrom is open-source software for the analysis and visualization of mass spectrometric and chromatographic data. Its focus is the handling of native data files from several mass spectrometry systems (e.g. GC/MS, LC/MS, Py-GC/MS, HPLC-MS) from vendors such as Agilent Technologies, Varian, Shimadzu, Thermo Fisher, PerkinElmer and others. Data formats from other detector types have also been supported more recently. OpenChrom supports only the analysis and representation of chromatographic and mass spectrometric data; it has no capabilities for data acquisition or control of vendor hardware. OpenChrom is built on the Eclipse Rich Client Platform (RCP), hence it is available for various operating systems, e.g. Microsoft Windows, macOS and Linux. It is distributed under the Eclipse Public License 1.0 (EPL). Third-party libraries are separated into single bundles and are released under various OSI-compatible licenses. History OpenChrom was developed by Philip Wenig (SCJP, LPIC-1) as part of his PhD thesis at the University of Hamburg, Germany. The focus of the thesis was to apply pattern recognition techniques to datasets recorded by analytical pyrolysis coupled with chromatography and mass spectrometry (Py-GC/MS). OpenChrom won the Thomas Krenn Open Source Award 2010 as well as the Eclipse Community Award 2011. The developers are also founding members of the Eclipse Science Working Group. After successful commercialization of contract development and services around the OpenChrom project, vendor Lablicate reinforced the commitment to Free/Libre/Open-Source Software with the release of ChemClipse in October 2016, which serves as the base for all OpenChrom products. Supported data formats Each system vendor stores the recorded analysis data in its own proprietary format. That makes it difficult to compare data sets from different systems and vendors, and it is a significant drawback for interlaboratory tests. The aim of OpenChrom is to support a wide range of different mass spectrometry data formats natively. In accordance with good laboratory practice, OpenChrom ensures that raw data files cannot be modified. To help scientists, OpenChrom supports several open formats to import and export the analysis results. In addition, OpenChrom offers its own open source format (*.ocb) that makes it possible to save the edited chromatogram as well as the peaks and identification results. Mass selective detector Agilent ChemStation *.D (DATA.MS and MSD1.MS) AMDIS Library (*.msl) Bruker Flex MALDI-MS (*.fid) Chromtech (*.dat) CSV (*.csv) Finnigan (*.RAW) Finnigan MAT95 (*.dat) Finnigan ITDS (*.DAT) Finnigan ITS40 (*.MS) Finnigan Element II (*.dat) JCAMP-DX (*.JDX) Microsoft Excel (*.xlsx) mzXML (*.mzXML) mzData (*.mzData) NetCDF (*.CDF) NIST Text (*.msp) Open Chromatography Binary (*.ocb) Peak Loadings (*.mpl) PerkinElmer (*.raw) Varian SMS (*.SMS) Varian XMS (*.XMS) VG MassLab (*.DAT_001;1) Shimadzu (*.qgd) Shimadzu (*.spc) Waters (*.RAW) ZIP (*.zip) Agilent ICP-MS (*.icp) Finnigan ICIS (*.dat) mzML (*.mzML) mzMLb (*.mzMLb) mz5 (*.mz5) mzDB (*.mzDB) SVG (*.svg) MassHunter (*.D) Finnigan ICIS (*.dat) MassLynx (*.RAW) Galactic Grams (*.cgm) AnIML (*.animl) GAML (*.gaml) ... Flame ionization detector Agilent FID (*.D/*.ch) FID Text (*.xy) NetCDF (*.cdf) PerkinElmer (*.raw) Varian (*.run) Finnigan FID (*.dat) Finnigan FID (*.raw) Shimadzu (*.gcd) Arw (*.arw) AnIML (*.animl) GAML (*.gaml) ...
Diode-array detection Agilent DAD (*.UV/*.ch) ABSciex Chromulan Shimadzu (*.lcd) Waters Empower AnIML (*.animl) Fourier-transform infrared spectroscopy Thermo Galactics (*.spc) Thermo Fisher Nicolet (*.spa) GAML (*.gaml) Near-infrared spectroscopy Bruker OPUS (*.0) Other formats Peak Loadings (*.mpl) NIST-DB (*.msp) AMDIS (*.msl) AMDIS (*.cal) AMDIS (*.ELU) MassBank (*.txt) SIRIUS (*.ms) Major features OpenChrom offers a variety of features to analyze chromatographic data: Native handling of chromatographic data (MSD and FID) Batch processing support Baseline detector support Peak detector, integrator support Peaks and mass spectrum identifier support Quantitation support Filter support (e.g. Mass Fragment and Scan Removal, noise reduction, Savitzky-Golay smoothing, CODA, backfolding) Retention time shift support Retention index support Chromatogram overlay mode Support for principal component analysis (PCA) Do/undo/redo support Integration of OpenOffice/LibreOffice and Microsoft Office Extensible by plug-ins Chromatogram and peak database support Update support Subtract mass spectra support Releases The software was first released in 2010. Each release is named after a famous scientist. References External links OpenChrom Homepage Bioinformatics software Mass spectrometry software Chemistry software for Linux Science software for MacOS Science software for Windows Chromatography software Eclipse software
47327601
https://en.wikipedia.org/wiki/Electronic%20AppWrapper
Electronic AppWrapper
The Electronic AppWrapper (EAW) was an early commercial electronic software distribution catalog. Originally, the AppWrapper was a traditional printed catalog, which later developed into the Electronic AppWrapper, offering electronic distribution and software licensing for third-party developers on NeXT systems. The AppWrapper #4 App Store application ran on NeXT, HP PA-RISC, Intel and Sun SPARC hardware and was available via the World Wide Web at paget.com. It is considered to be the first app store. According to Richard Carey, an employee of Paget Press who was present in 1993, the Electronic AppWrapper was first demonstrated to Steve Jobs by Jesse Tayler at NeXTWorld Expo. The EAW went on to receive recognition from Robert Wyatt of Wired magazine and Simson Garfinkel of NeXTWorld magazine. An interview with Jesse Tayler, the lead engineer and inventor of the EAW, discussed the early days of the AppWrapper and the similarities between his program's development and the emergence of the World Wide Web. Some software developers with titles on the EAW have continued over the decades and transitioned into the modern Apple Inc. era. Andrew Stone is one example: he designed programs that were available on the EAW and still designs apps for the App Store today. History In the early 1990s, Paget Press, a Seattle-based software distribution company, developed the Electronic AppWrapper, the first electronic App Store, on NeXT. Critically, the application storefront itself is what provides a secure, uniform experience that automates the electronic purchase, decryption and installation of software applications or other digital media. The Electronic AppWrapper started as a paper catalog, which was released periodically. The AppWrapper was a combination of a catalog and a magazine, which listed the vast majority of software products available for the NeXT Computer. Within the first couple of publications, the AppWrapper gained a digital counterpart, with the introduction of CD-ROM discs in the back of later issues of what came to be called The Electronic AppWrapper, as well as a website at paget.com. The EAW is considered the first App Store not merely because of its demonstration to Steve Jobs, but because it was the first true application storefront built to search and review software titles. Critically, the storefront application itself provides a standard, secure way to electronically purchase, decrypt and install apps automatically, end to end. The Electronic AppWrapper was mostly apps with some music and other digital media; the iTunes Music Store was mostly music and some iPod apps. Apple's GarageBand even sells digital music lessons using the very same iTunes account used for the iOS App Store; they are all part of the same App Store ecosystem. Electronic bookstores such as Kindle, Barnes and Noble or Kobo are further examples of successful electronic distribution using the App Store concept. For the Electronic AppWrapper, distribution, encryption and the digital rights of the software were universally managed for all participating developers, much like stores participating in a shopping mall. Software has always been electronically transferred, and encryption has always been part of computing. The introduction of a unified commercial software distribution catalog with a true application storefront to collectively manage and provide encryption for apps and media was a seminal invention.
It was seminal because, by protecting the digital rights of artists online, the App Store model provided the first economically viable mechanism for instant distribution, which ultimately accelerated the pace of software adoption and created an economic boom. Compared with shipping boxes and printing user manuals, the pace and efficiency provided by the App Store are profound and have changed software distribution forever. During its early development, the Electronic AppWrapper became the first commercial software distribution catalog to allow digital data encryption and provide digital rights management for apps, music and data. This was a tremendous advance for independent developers, who could not possibly access the financial resources to publish boxed software across the country and the world in order to reach their audience. The NeXT Computer initially came without a floppy disk drive, which created an urgent need to invent a new form of software distribution. The AppWrapper contained many types of software, including general third-party applications, music and media. The invention was part of a movement to protect the rights of third-party developers and distribute software without the expense of printing manuals and delivering boxes, something that today is seen universally as the norm. Other advantages of the EAW included levelling the playing field for software distribution: it allowed independent or smaller software companies to distribute their apps quickly and compete with larger companies that had more established distribution channels. The EAW also provided ways for software updates to reach existing customers, something that was uncommon at the time. The product was first demonstrated to Steve Jobs at the NeXTWorld Expo in 1993. The Electronic AppWrapper received recognition later in the year, when Simson Garfinkel, a senior editor at NeXTWORLD magazine, rated it 4 3/4 Cubes (out of 5) in his formal review. Paget's Electronic AppWrapper was also named a finalist in the highly competitive InVision Multimedia '93 awards in January 1993 and won the Best of Breed award for Content and Information at NeXTWORLD Expo in May 1993. Following the AppWrapper's early use of the Internet, it was featured in Wired magazine, which stated that it was, at the time, the best way to distribute and license software. Mechanics The Electronic AppWrapper operated by taking a percentage of each sale of the software it listed. Due to the scale of the operation in the early days, the price was negotiated individually with each developer. References 1991 software Software distribution NeXT
216454
https://en.wikipedia.org/wiki/IPAQ
IPAQ
The iPAQ is a Pocket PC and personal digital assistant, first unveiled by Compaq in April 2000; the name was borrowed from Compaq's earlier iPAQ Desktop Personal Computers. Since Hewlett-Packard's acquisition of Compaq, the product has been marketed by HP. The devices use a Windows Mobile interface. In addition to this, there are several Linux distributions that will also operate on some of these devices. Earlier units were modular. "Sleeve" accessories, technically called jackets, which slide around the unit and add functionality such as a card reader, wireless networking, GPS, and even extra batteries were used. Later versions of iPAQs have most of these features integrated into the base device itself, some including GPRS mobile-telephony (sim-card slot and radio). HP's line-up of iPAQ devices includes PDA-devices, smartphones and GPS-navigators. A substantial number of current and past devices are outsourced from Taiwanese HTC corporation. History The iPAQ was developed by Compaq based on the SA-1110 "Assabet" and SA-1111 "Neponset" reference boards that were engineered by a StrongARM development group located at Digital Equipment Corporation's Hudson Massachusetts facility. At the time when these boards were in development, this facility was acquired by Intel. When the "Assabet" board is combined with the "Neponset" companion processor board they provide support for 32 megabytes of SDRAM in addition to CompactFlash and PCMCIA slots along with an I2S or AC-Link serial audio bus, PS/2 mouse and trackpad interfaces, a USB host controller and 18 additional GPIO pins. Software drivers for a CompactFlash ethernet device, IDE storage devices such as the IBM Microdrive and the Lucent WaveLAN/IEEE 802.11 Wifi device were also available. An earlier StrongARM SA-1100 based research handheld device call the "Itsy" had been developed at Digital Equipment Corporation's Western Research Laboratory (later to become the Compaq Western Research Laboratory). The first iPAQ Pocket PC was the H3600 series, released in 2000. It ran Microsoft's Pocket PC 2000 operating system, and featured a 240 x 320 pixel 4096-color LCD, 32 MB of RAM, and 16 MB of ROM. Compaq released a similarly-designed H3100 series Pocket PC in January, 2001. It was a lower-priced model with a 15-greyscale monochrome LCD, 16 MB of RAM, and a dark grey D-pad instead of the chrome D-pad of its predecessor. The H3600 series was succeeded by the H3800 and H3900 series, which retained the same form factor, but had a different button layout. Soon after HP's merger with Compaq in 2002, HP discontinued its Jornada line of Microsoft Windows powered Pocket PCs, and continued the iPAQ line that started under Compaq. In June 2003, HP retired the h3xxx line of iPAQs and introduced the h1xxx line of iPAQs targeted at price conscious buyers, the h2xxx consumer line, and the h5xxx line, targeted at business customers. They were sold pre-installed with the Windows Mobile for Pocket PC 2003 Operating System. The h63xx series of iPAQs running the Phone Edition of Windows Mobile 2003, the hx47xx series and the rz17xx series, both running the Second Edition of Windows Mobile 2003 were introduced in August 2004. In August 2004, HP released the rz17xx and rx3xxx series of Mobile Media Companions. These devices were aimed at consumers, rather than the traditional corporate audience. Emphasis was placed on media features, like NEVO TV Remote and Mobile Media. They ran on Windows Mobile 2003SE. 
In February 2005, the iPAQ Mobile Messenger hw6500 series was introduced to selected media at the 3GSM conference in Cannes, France. It was replaced a year later by the hw6900 series, running on Windows Mobile 5. In 2007, the iPAQ rx4000 Mobile Media Companion PDA/media devices and rx5000 Travel Companion PDA/GPS devices were released. Both series of iPAQs work on the Windows Mobile 5 Operating System (WM5), as do the hx2000 and hw6900 series. The first HP Windows Mobile 6 device, the iPAQ 500 Series Voice Messenger, with the Windows Mobile 6 Standard Operating System (WM6), and numeric pad, was released in the same year. The entire iPAQ line was completely revamped by the introduction of five new iPAQ series to complement the introduction of the iPAQ 500 Series Voice Messenger earlier in the year. The models announced were the 100 Series Classic Handheld, the 200 Series Enterprise Handheld, the 300 Series Travel Companion, the 600 Series Business Navigator and the 900 Series Business Messenger. The 100 and 200 Series are regular touchscreen PDAs without phone functionality running WM6. The 300 Series Travel Companion is not a PDA; marketed as a Personal Navigation Device, it is a handheld GPS unit operating on the Windows CE 5.0 core Operating System with a custom user interface. The 600 and 900 series are phones with integrated GPS and 3G capabilities, running the WM6 Professional. The 600 series possesses a numeric pad and the 900 series features a full QWERTY keyboard. Hewlett-Packard introduced a smartphone iPAQ Pocket PC that looks like a regular cell phone and has VoIP capability. The series is the HP iPAQ 500 Series Voice Messenger. In December 2009, HP released the iPAQ Glisten, running on Windows Mobile 6.5. As of April 2011, no new models have been announced. HP continues to advertise the 111 series and the Glisten on its website, however. As such, the status and fate of the iPAQ line is unclear. In mid-August 2011, HP announced that they are discontinuing all webOS devices, and possibly mobile devices. It is unclear if this move will affect the iPAQ line, although they are producing several new iPAQs for Nederlandse Spoorwegen as of November 2011 (11/11). Model list Jacket-compatible These older models are compatible with the iPAQ Jacket which can accept 1× CompactFlash, 1× PC Card or 2× PC Card slots. iPAQ jacket PN 173396-001 PCMCIA (PC port) 1× internal Li-ion battery PN 167648 3.7 V 1500 mAh (upgradable). Newer models SDIO can support up to 2GB. Alternative operating systems for the iPAQ OpenEmbedded The OpenEmbedded distribution is (as of 2016) the only actively maintained Linux distribution for the iPAQ models, by way of the meta-handheld layer. Familiar Linux An alternative Linux-based OS available for the iPAQ was Familiar. It stopped being actively maintained in 2007. It was available with the Opie or GPE GUI environment, or as a base Linux system with no GUI if preferred. Both Opie and GPE provided the usual PIM suite (calendar, contacts, to do list, and notes) as well as a long list of other applications. Support for handwriting recognition, on-screen keyboard, bluetooth, IrDA and add-on hardware such as keyboards are standard in both environments. The v0.8.4 (2006-08-20) version supports HP iPAQ H3xxx and H5xxx series of handhelds, and introduced initial support for the HP iPAQ H2200, Hx4700, and H6300 series. Intimate Linux On devices with added storage (primarily microdrives) there is a modified port of Debian called Intimate. 
In addition to a standard X11 desktop, Intimate also offered the Opie, GPE and Qtopia suites. (Qtopia was a QT-based PIM suite with an optional commercial license.) NetBSD NetBSD will install and run on iPAQ. Plan 9 from Bell Labs Plan 9 from Bell Labs runs on some iPAQs. The nickname of the architecture is "bitsy," after the name of the ARM-based chipsets used in many of the machines. The "Installation on Ipaq" part of the wiki states: "These instructions are for a Compaq Ipaq and have been tested only on models H3630 and H3650 with 32MB of RAM." In regards to iPAQs, the page on the wiki titled "Supported PDAs" only mentions that the "H3630 and H3650 are known to work." Ångström distribution See Ångström distribution Upgrades The hx2000 series and some later models are upgradeable to newer versions of Windows Mobile. These upgrades could be purchased from HP. Windows Mobile 2003 could be installed on the H3950, H3970, h5450 models and possibly other models of the H3xxx series with sufficient ROM capacity. Other "cooked" (ready to run) roms have been provided by the group known as the xda-developers and are available for the hx2000 series, the hx4700 and others. The upgradeable versions for the hx2000 and hx4700 include Windows Mobile 6.0, 6.1 and 6.5 which are the newest releases of the Windows Mobile platform. Internal Li-ion battery iPAQ models 3100–3700 are fitted with internal Li-ion battery PN 167648 3.7 V 1500 mAh which can be replaced with a 2200 mAh unit. The same battery is used in the iPAQ jacket PN 173396-001 PCMCIA (PC port), which may also be upgraded to a 2200 mAh unit. The 3800/3900 series are fitted with a 1700 mAh cell as standard, also upgradeable to 2200 mAh. Compaq presumably upgraded the battery to cope with the faster CPU's power requirements. RAM upgrades It is possible to have the internal RAM of an iPAQ H3970 and hx4700 upgraded to 128 MB by using a specialist service to replace the surface-mount BGA RAM chips. See also HP Touchpad HP Slate Personal digital assistant Windows Mobile Hewlett-Packard Jornada (PDA) – The predecessor of sorts to the iPAQ line. HTC HD2 SuperWaba – Free and open software development kit for Pocket PC and Linux iPAQs Pocket PC imageon References HP PDAs Windows Mobile Classic devices Windows Mobile Professional devices Windows Mobile Standard devices Mobile computers Embedded Linux Windows CE devices Compaq PDAs
551732
https://en.wikipedia.org/wiki/LeJOS
LeJOS
leJOS is a firmware replacement for Lego Mindstorms programmable bricks. Different variants of the software support the original Robotics Invention System, the NXT, and the EV3. It includes a Java virtual machine, which allows Lego Mindstorms robots to be programmed in the Java programming language. It also includes 'iCommand.jar' which allows you to communicate via bluetooth with the original firmware of the Mindstorm. It is often used for teaching Java to first-year computer science students . The leJOS-based robot Jitter flew around on the International Space Station in December 2001. Pronunciation According to the official website: In English, the word is similar to Legos, except there is a J for Java, so the correct pronunciation would be Ley-J-oss. If you are brave and want to pronounce the name in Spanish, there is a word "lejos" which means far, and it is pronounced Lay-hoss. The name leJOS was conceived by José Solórzano, based on the acronym for Java Operating System (JOS), the name of another operating system for the RCX, legOS, and the Spanish word "lejos." History leJOS was originally conceived as TinyVM and developed by José Solórzano in late 1999. It started out as a hobby open source project, which he later forked into what is known today as leJOS. Many contributors joined the project and provided important enhancements. Among them, Brian Bagnall, Jürgen Stuber and Paul Andrews, who later took over the project as José essentially retired from it. As of August 20, 2006, the original leJOS for the RCX has been discontinued with the 3.0 release. Soon afterwards, iCommand, a library to control the NXT from a Bluetooth-enabled computer via LCP, was released. This library made use of the standard Lego firmware. This library was later superseded by leJOS NXJ 0.8. In January 2007, a full port to the new Lego Mindstorms NXT was released as a firmware replacement. This is far faster (x15 or so) than the RCX version, has more memory available, a menu system, Bluetooth support using the Bluecove library, and allows access to many other NXT features. In 2008, versions 0.5, 0.6 and 0.7 were released. In addition to numerous improvements to the core classes, the Eclipse plugin was released along with a new version of the tutorial. In 2009, there were 2 more major releases: 0.8 and 0.85. In May 2011 0.9 was released. Broadly speaking, the releases have concentrated on improvements to navigation algorithms, as well as support for numerous 3rd party sensors and the Eclipse plug-in. In 2013, development began on a port to the Lego Mindstorms EV3 brick. In 2014, the 0.5 and 0.6 alpha versions were released. In 2015, beta versions 0.9 and 0.9.1 were released. Since November 2014 leJOS is used in a slightly adapted version also in the open-source project Open Roberta. Architecture leJOS NXJ provides support for access to the robot's I²C ports. This allows access to the standard sensors and motors (ultrasonic distance sensor, touch sensor, sound sensor and light sensor). Other companies, such as MindSensors and HiTechnic have extended this basic set by providing advanced sensors, actuators and multiplexers. leJOS NXJ includes Java APIs for these products. By taking advantage of the object-oriented structure of Java, the developers of LeJOS NXJ have been able to hide the implementation details of sensors and actuators behind multiple interfaces. 
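As an illustration of that interface-based design, the following short program is a minimal sketch, not taken from the project's documentation, that assumes the commonly documented leJOS NXJ classes Motor, Button, LCD, SensorPort and TouchSensor in the lejos.nxt package; exact class names and locations may differ between leJOS releases. It drives two motors until a touch sensor is pressed, without the program ever dealing with port-level addresses or protocols.

import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;

public class BumperDemo {
    public static void main(String[] args) {
        // The TouchSensor object hides the port-level details; the program
        // only asks whether the bumper is currently pressed.
        TouchSensor bumper = new TouchSensor(SensorPort.S1);

        LCD.drawString("Press ENTER", 0, 0);
        Button.waitForPress();

        // Drive forward on both motors until the bumper is hit, then stop.
        Motor.A.forward();
        Motor.B.forward();
        while (!bumper.isPressed()) {
            Thread.yield();
        }
        Motor.A.stop();
        Motor.B.stop();
    }
}

The same pattern applies to the third-party sensors mentioned above: each device is wrapped in a class that exposes a few high-level methods.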
Hiding these implementation details allows the robotics developer to work with high-level abstractions without having to worry about details like the hexadecimal addresses of hardware components. The project includes implementations of the commonly used feedback controller, the PID controller, and the Kalman filter noise-reduction algorithm. leJOS NXJ also provides libraries that support more abstract functions such as navigation, mapping and behavior-based robotics. Here is a simple leJOS program:

import lejos.nxt.Motor;
import lejos.nxt.Button;

public class Example {
    public static void main(String[] args) {
        Motor.A.forward();
        Button.waitForPress();
        Motor.A.backward();
        Button.waitForPress();
        System.exit(1);
    }
}

Community Since the first alpha release of leJOS NXJ in 2007, the project has had a consistently active following. Between January 2007 and October 2011 there were over 225,000 downloads. In 2011 the downloads averaged between 4,000 and 6,000 a month, and over 500 topics were discussed in the forums; each topic often generated several hundred posts. Between May 2012 and March 2013 there were over 36,000 downloads of release 0.91. The core development team has been a relatively small group. Contributions are accepted from other members of the community, and several of the interfaces to third-party sensors and actuators have been contributed by members outside the core team. The platform has been used in university robotics courses, undergraduate research projects and as a platform for robotics research. NXJ and the Java platform As leJOS NXJ is a Java project, it builds on the wealth of functionality inherent in the Java platform. There are leJOS NXJ plugins for the two leading Java IDEs: Eclipse and NetBeans. Robotics developers can take advantage of the standard functionality of an IDE (code completion, refactoring and testing frameworks) as well as point-and-click invocation of NXJ functions: compiling, linking and uploading. A wealth of Java open source projects (such as Apache Math) are likewise available to the NXJ robotics developer. See also List of Java virtual machines Lego Mindstorms Robotics Invention System URBI Robotics suite References Further reading Brian Bagnall (2011). Intelligence Unleashed: Creating LEGO NXT Robots with Java. Variant Press. Brian Bagnall (2002). Core LEGO Mindstorms Programming. Prentice Hall PTR. Giulio Ferrari et al. (2002). Programming LEGO Mindstorms with Java. Syngress. Max Schöebel et al. (2015). Roberta - EV3 Programmieren mit Java. Fraunhofer Verlag. External links Step-by-Step Instructions for installing and running leJOS Installing NXT and leJOS on 64 bit Windows Ebook: Develop leJOS programs step by step Lego Mindstorms Embedded operating systems Java virtual machine Robot programming languages 1999 software 1999 in robotics
66159172
https://en.wikipedia.org/wiki/Berserk%20Bear
Berserk Bear
Berserk Bear (aka Crouching Yeti, Dragonfly, Dragonfly 2.0, DYMALLOY, Energetic Bear, Havex, IRON LIBERTY, Koala, or TeamSpy) is a Russian cyber espionage group, sometimes known as an advanced persistent threat. According to the United States, the group is composed of "FSB hackers," either those directly employed by the FSB or Russian civilian, criminal hackers coerced into contracting as FSB hackers while still freelancing or moonlighting as criminal hackers. Activities Berserk Bear specializes in compromising utilities infrastructure, especially that belonging to companies responsible for water or energy distribution. It has performed these activities in at least Germany and the U.S. These operations are targeted towards surveillance and technical reconnaissance. Berserk Bear has also targeted many state, local, and tribal government and aviation networks in the U.S., and as of October 1, 2020, had exfiltrated data from at least two victim servers. In particular, Berserk Bear is believed to have infiltrated the computer network of the city of Austin, Texas, during 2020. The group is capable of producing its own advanced malware, although it sometimes seeks to mimic other hacking groups and conceal its activities. See also 2020 United States federal government data breach Cozy Bear Fancy Bear Russian FSB References Hacking in the 2020s Information technology in Russia Russian advanced persistent threat groups
63973
https://en.wikipedia.org/wiki/Wi-Fi
Wi-Fi
Wi-Fi is a family of wireless network protocols, based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks in the world, used globally in home and small office networks to link desktop and laptop computers, tablet computers, smartphones, smart TVs, printers, and smart speakers together and to a wireless router to connect them to the Internet, and in wireless access points in public places like coffee shops, hotels, libraries and airports to provide public Internet access for mobile devices. Wi-Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. The Wi-Fi Alliance consists of more than 800 member companies from around the world, and more than 3.05 billion Wi-Fi-enabled devices are shipped globally each year. Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to interwork seamlessly with its wired sibling, Ethernet. Compatible devices can network through wireless access points with each other as well as with wired devices and the Internet. The different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with the different radio technologies determining the radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the UHF and SHF radio bands; these bands are subdivided into multiple channels. Channels can be shared between networks, but only one transmitter can locally transmit on a channel at any moment in time. Wi-Fi's wavebands have relatively high absorption and work best for line-of-sight use. Many common obstructions such as walls, pillars and home appliances may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. An access point (or hotspot) often has only a modest range indoors, while some modern access points claim a considerably longer range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres (miles) using many overlapping access points with roaming permitted between them. Over time, the speed and spectral efficiency of Wi-Fi have increased; the newest versions, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabits per second). History A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens and are thus subject to interference. A Prototype Test Bed for a wireless local area network was developed in 1992 by researchers from the Radiophysics Division of CSIRO in Australia. At about the same time, in 1991 in the Netherlands, the NCR Corporation with AT&T Corporation invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for 10 years, along with Bell Labs engineer Bruce Tuch, approached the IEEE to create a standard and were involved in designing the initial 802.11b and 802.11a standards within the IEEE. They have both been subsequently inducted into the Wi-Fi NOW Hall of Fame. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds.
This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, and this proved popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold. The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. This was in collaboration with the same group that helped create the standard Vic Hayes, Bruce Tuch, Cees Links, Rich McGinn, and others from Lucent. Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay Australia's CSIRO $1 billion for infringements on CSIRO patents. Australia claims Wi-Fi is an Australian invention, at the time the subject of a little controversy. CSIRO won a further $220 million settlement for Wi-Fi patent-infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated at an additional $1 billion in royalties. In 2016, the CSIRO wireless local area network (WLAN) Prototype Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects held in the National Museum of Australia. Etymology and terminology The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance, has stated that the term Wi-Fi was chosen from a list of ten potential names invented by Interbrand. The name Wi-Fi has no further meaning, and was never officially a shortened form of "Wireless Fidelity". Nevertheless, the Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc" in some publications. The name is often written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance. IEEE is a separate, but related, organization and their website has stated "WiFi is a short name for Wireless Fidelity". Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are usually described as fixed wireless. Alternative wireless technologies include mobile phone standards, such as 2G, 3G, 4G, 5G and LTE. To connect to a Wi-Fi LAN, a computer must be equipped with a wireless network interface controller. The combination of a computer and an interface controller is called a station. Stations are identified by one or more MAC addresses. Wi-Fi nodes often operate in infrastructure mode where all communications go through a base station. Ad hoc mode refers to devices talking directly to each other without the need to first talk to an access point. A service set is the set of all the devices associated with a particular Wi-Fi network. Devices in a service set need not be on the same wavebands or channels. A service set can be local, independent, extended, or mesh or a combination. Each service set has an associated identifier, the 32-byte Service Set Identifier (SSID), which identifies the particular network. 
The SSID is configured within the devices that are considered part of the network. A Basic Service Set (BSS) is a group of stations that all share the same wireless channel, SSID, and other wireless settings that have wirelessly connected (usually to the same access point). Each BSS is identified by a MAC address which is called the BSSID. Certification The IEEE does not test equipment for compliance with their standards. The non-profit Wi-Fi Alliance was formed in 1999 to fill this void—to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local-area-network technology. , the Wi-Fi Alliance includes more than 800 companies. It includes 3Com (now owned by HPE/Hewlett-Packard Enterprise), Aironet (now owned by Cisco), Harris Semiconductor (now owned by Intersil), Lucent (now owned by Nokia), Nokia and Symbol Technologies (now owned by Zebra Technologies). The Wi-Fi Alliance enforces the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards from the IEEE. This includes wireless local area network (WLAN) connections, a device to device connectivity (such as Wi-Fi Peer to Peer aka Wi-Fi Direct), Personal area network (PAN), local area network (LAN), and even some limited wide area network (WAN) connections. Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, gain the right to mark those products with the Wi-Fi logo. Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular-phone technology in converged devices, and features relating to security set-up, multimedia, and power-saving. Not every Wi-Fi device is submitted for certification. The lack of Wi-Fi certification does not necessarily imply that a device is incompatible with other Wi-Fi devices. The Wi-Fi Alliance may or may not sanction derivative terms, such as Super Wi-Fi, coined by the US Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US. Versions and generations Equipment frequently supports multiple versions of Wi-Fi. To communicate, devices must use a common Wi-Fi version. The versions differ between the radio wavebands they operate on, the radio bandwidth they occupy, the maximum data rates they can support and other details. Some versions permit the use of multiple antennas, which permits greater speeds as well as reduced interference. Historically, the equipment has simply listed the versions of Wi-Fi using the name of the IEEE standard that it supports. In 2018, the Wi-Fi Alliance introduced simplified Wi-Fi generational numbering to indicate equipment that supports Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax). These generations have a high degree of backward compatibility with previous versions. The alliance has stated that the generational level 4, 5, or 6 can be indicated in the user interface when connected, along with the signal strength. The list of most important versions of Wi-Fi is: 802.11a, 802.11b, 802.11g, 802.11n (Wi-Fi 4), 802.11h, 802.11i, 802.11-2007, 802.11-2012, 802.11ac (Wi-Fi 5), 802.11ad, 802.11af, 802.11-2016, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax (Wi-Fi 6), 802.11ay. 
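As a small, hedged illustration of how a device's user interface might derive the generational label from the underlying standard name, the sketch below uses only the mapping given above (Wi-Fi 4, 5 and 6 for 802.11n, 802.11ac and 802.11ax); the class and method names are hypothetical and not part of any Wi-Fi Alliance API.

import java.util.Map;

public class WifiGenerationLabel {
    // Generational names introduced by the Wi-Fi Alliance in 2018, as
    // described above; older standards have no official generation label.
    private static final Map<String, String> GENERATIONS = Map.of(
            "802.11n", "Wi-Fi 4",
            "802.11ac", "Wi-Fi 5",
            "802.11ax", "Wi-Fi 6");

    // Returns the label a user interface might display next to the signal
    // strength, or the raw standard name when no label is defined.
    static String label(String ieeeStandard) {
        return GENERATIONS.getOrDefault(ieeeStandard, ieeeStandard);
    }

    public static void main(String[] args) {
        System.out.println(label("802.11ax")); // prints Wi-Fi 6
        System.out.println(label("802.11g"));  // no label defined, prints 802.11g
    }
}

Standards that predate the scheme, such as 802.11g, simply have no generation label and fall back to the standard name.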
Uses Internet Wi-Fi technology may be used to provide local network and Internet access to devices that are within Wi-Fi range of one or more routers that are connected to the Internet. The coverage of one or more interconnected access points (hotspots) can extend from an area as small as a few rooms to as large as many square kilometres (miles). Coverage in the larger area may require a group of access points with overlapping coverage. For example, public outdoor Wi-Fi technology has been used successfully in wireless mesh networks in London. An international example is Fon. Wi-Fi provides services in private homes, businesses, as well as in public spaces. Wi-Fi hotspots may be set up either free-of-charge or commercially, often using a captive portal webpage for access. Organizations, enthusiasts, authorities and businesses, such as airports, hotels, and restaurants, often provide free or paid-use hotspots to attract customers, to provide services to promote business in selected areas. Routers often incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, are frequently set up in homes and other buildings, to provide Internet access and internetworking for the structure. Similarly, battery-powered routers may include a cellular Internet radio modem and a Wi-Fi access point. When subscribed to a cellular data carrier, they allow nearby Wi-Fi stations to access the Internet over 2G, 3G, or 4G networks using the tethering technique. Many smartphones have a built-in capability of this sort, including those based on Android, BlackBerry, Bada, iOS, Windows Phone, and Symbian, though carriers often disable the feature, or charge a separate fee to enable it, especially for customers with unlimited data plans. "Internet packs" provide standalone facilities of this type as well, without the use of a smartphone; examples include the MiFi- and WiBro-branded devices. Some laptops that have a cellular modem card can also act as mobile Internet Wi-Fi access points. Many traditional university campuses in the developed world provide at least partial Wi-Fi coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993 before Wi-Fi branding originated. By February 1997, the CMU Wi-Fi zone was fully operational. Many universities collaborate in providing Wi-Fi access to students and staff through the Eduroam international authentication infrastructure. City-wide In the early 2000s, many cities around the world announced plans to construct citywide Wi-Fi networks. There are many successful examples; in 2004, Mysore (Mysuru) became India's first Wi-Fi-enabled city. A company called WiFiyNet has set up hotspots in Mysore, covering the whole city and a few nearby villages. In 2005, St. Cloud, Florida and Sunnyvale, California, became the first cities in the United States to offer citywide free Wi-Fi (from MetroFi). Minneapolis has generated $1.2 million in profit annually for its provider. In May 2010, the then London mayor Boris Johnson pledged to have London-wide Wi-Fi by 2012. Several boroughs including Westminster and Islington already had extensive outdoor Wi-Fi coverage at that point. New York City announced a city-wide campaign to convert old phone booths into digitized "kiosks" in 2014. The project, titled LinkNYC, has created a network of kiosks which serve as public WiFi hotspots, high-definition screens and landlines. Installation of the screens began in late 2015. 
The city government plans to implement more than seven thousand kiosks over time, eventually making LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. The UK has planned a similar project across major cities of the country, with the project's first implementation in the Camden borough of London. Officials in South Korea's capital Seoul are moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets, and densely populated residential areas. Seoul will grant leases to KT, LG Telecom, and SK Telecom. The companies will invest $44 million in the project, which was to be completed in 2015. Geolocation Wi-Fi positioning systems use the positions of Wi-Fi hotspots to identify a device's location. Motion detection Wi-Fi sensing is used in applications such as motion detection and gesture recognition. Operational principles Wi-Fi stations communicate by sending each other data packets: blocks of data individually sent and delivered over radio. As with all radio, this is done by the modulating and demodulation of carrier waves. Different versions of Wi-Fi use different techniques, 802.11b uses DSSS on a single carrier, whereas 802.11a, Wi-Fi 4, 5 and 6 use multiple carriers on slightly different frequencies within the channel (OFDM). As with other IEEE 802 LANs, stations come programmed with a globally unique 48-bit MAC address (often printed on the equipment) so that each Wi-Fi station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Wi-Fi establishes link-level connections, which can be defined using both the destination and source addresses. On the reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Wi-Fi stations. Due to the ubiquity of Wi-Fi and the ever-decreasing cost of the hardware needed to support it, many manufacturers now build Wi-Fi interfaces directly into PC motherboards, eliminating the need for installation of a separate wireless network card. Channels are used half duplex and can be time-shared by multiple networks. When communication happens on the same channel, any information sent by one computer is locally received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. The use of the same channel also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are actively transmitting. A scheme known as carrier sense multiple access with collision avoidance (CSMA/CA) governs the way stations share channels. With CSMA/CA stations attempt to avoid collisions by beginning transmission only after the channel is sensed to be "idle", but then transmit their packet data in its entirety. However, for geometric reasons, it cannot completely prevent collisions. A collision happens when a station receives multiple signals on a channel at the same time. This corrupts the transmitted data and can require stations to re-transmit. The lost data and re-transmission reduces throughput, in some cases severely. 
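The listen-before-talk idea behind CSMA/CA can be illustrated with a toy, slot-based simulation. This is only a sketch, not the actual 802.11 procedure (which involves inter-frame spaces, contention windows that grow after collisions, and acknowledgements); it merely shows why two stations that happen to pick the same random backoff can still collide even though both sensed the channel to be idle.

import java.util.Random;

public class CsmaCaToy {
    public static void main(String[] args) {
        Random rng = new Random();
        int collisions = 0;
        int cleanRounds = 0;

        // Two stations, A and B, each want to send one frame per round.
        // Before transmitting, each waits a random number of idle slots,
        // a crude stand-in for the 802.11 contention-window backoff.
        for (int round = 0; round < 10_000; round++) {
            int backoffA = rng.nextInt(16);
            int backoffB = rng.nextInt(16);

            if (backoffA == backoffB) {
                // Both counted down the same number of idle slots and start
                // transmitting at the same moment: the frames collide.
                collisions++;
            } else {
                // The station with the shorter backoff transmits first; the
                // other senses the channel busy and defers until it is idle.
                cleanRounds++;
            }
        }
        System.out.println("Rounds with a collision:    " + collisions);
        System.out.println("Rounds without a collision: " + cleanRounds);
    }
}

Running the sketch shows a small but persistent fraction of rounds ending in collision, which is the residual collision rate the preceding paragraph refers to.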
Waveband The 802.11 standard provides several distinct radio frequency ranges for use in Wi-Fi communications: 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 5.9 GHz and 60 GHz bands. Each range is divided into a multitude of channels. In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for channels to be bonded together to form wider channels for higher throughput. Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. The "ISM" band ranges are also often improperly used because some do not know the difference between Part 15 and Part 18 of the FCC rules. 802.11b/g/n can use the 2.4 GHz Part 15 band, operating in the United States under Part 15 Rules and Regulations. In this frequency band equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, and Bluetooth devices. Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12–14). In the US and other countries, 802.11a and 802.11g devices may be operated without a licence, as allowed in Part 15 of the FCC Rules and Regulations. 802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20  MHz channels rather than the 2.4 GHz frequency band, where the channels are only 5 MHz wide. In general, lower frequencies have better range but have less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range. As 802.11 specifications evolved to support higher throughput, the protocols have become much more efficient in their use of bandwidth. Additionally, they have gained the ability to aggregate (or 'bond') channels together to gain still more throughput where the bandwidth is available. 802.11n allows for double radio spectrum/bandwidth (40 MHz- 8 channels) compared to 802.11a or 802.11g (20 MHz). 802.11n can also be set to limit itself to 20 MHz bandwidth to prevent interference in dense communities. In the 5 GHz band, 20 MHz, 40 MHz, 80 MHz, and 160 MHz bandwidth signals are permitted with some restrictions, giving much faster connections. Communication stack Wi-Fi is part of the IEEE 802 protocol family. The data is organized into 802.11 frames that are very similar to Ethernet frames at the data link layer, but with extra address fields. MAC addresses are used as network addresses for routing over the LAN. Wi-Fi's MAC and physical layer (PHY) specifications are defined by IEEE 802.11 for modulating and receiving one or more carrier waves to transmit the data in the infrared, and 2.4, 3.6, 5, or 60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had many subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. 
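The channel numbering described above can be made concrete with a short calculation. In the 2.4 GHz band, channels 1 to 13 are spaced 5 MHz apart with channel 1 centred at 2412 MHz, and channel 14 (permitted only in Japan, and only for 802.11b) sits apart at 2484 MHz; a nominally 20 MHz-wide transmission therefore spills across several adjacent channel numbers. The sketch below is illustrative code, not part of any Wi-Fi specification or API; it computes centre frequencies and approximate occupied spans.

public class Channel24GHz {
    // Centre frequency in MHz for 2.4 GHz channels 1 to 14.
    static int centreMHz(int channel) {
        if (channel < 1 || channel > 14) {
            throw new IllegalArgumentException("2.4 GHz channels run from 1 to 14");
        }
        // Channels 1-13 are 5 MHz apart starting at 2412 MHz;
        // channel 14 is a special case centred at 2484 MHz.
        return channel == 14 ? 2484 : 2407 + 5 * channel;
    }

    // Approximate span occupied by a nominal 20 MHz-wide transmission.
    static String span20MHz(int channel) {
        int centre = centreMHz(channel);
        return (centre - 10) + "-" + (centre + 10) + " MHz";
    }

    public static void main(String[] args) {
        for (int ch : new int[] {1, 6, 11}) {
            System.out.println("Channel " + ch + ": centre " + centreMHz(ch)
                    + " MHz, approx. span " + span20MHz(ch));
        }
    }
}

Printing channels 1, 6 and 11 gives spans of roughly 2402-2422, 2427-2447 and 2452-2472 MHz, which is why that particular trio can coexist in the 2.4 GHz band without overlapping.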
While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote capabilities of their products. As a result, in the market place, each revision tends to become its own standard. In addition to 802.11 the IEEE 802 protocol family has specific provisions for Wi-Fi. These are required because Ethernet's cable-based media are not usually shared, whereas with wireless all transmissions are received by all stations within the range that employ that radio channel. While Ethernet has essentially negligible error rates, wireless communication media are subject to significant interference. Therefore, the accurate transmission is not guaranteed so delivery is, therefore, a best-effort delivery mechanism. Because of this, for Wi-Fi, the Logical Link Control (LLC) specified by IEEE 802.2 employs Wi-Fi's media access control (MAC) protocols to manage retries without relying on higher levels of the protocol stack. For internetworking purposes, Wi-Fi is usually layered as a link layer (equivalent to the physical and data link layers of the OSI model) below the internet layer of the Internet Protocol. This means that nodes have an associated internet address and, with suitable connectivity, this allows full Internet access. Modes Infrastructure In infrastructure mode, which is the most common mode used, all communications go through a base station. For communications within the network, this introduces an extra use of the airwaves but has the advantage that any two stations that can communicate with the base station can also communicate through the base station, which enormously simplifies the protocols. Ad hoc and Wi-Fi direct Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. Different types of ad hoc networks exist. In the simplest case network nodes must talk directly to each other. In more complex protocols nodes may forward packets, and nodes keep track of how to reach other nodes, even if they move around. Ad hoc mode was first described by Chai Keong Toh in his 1996 patent of wireless ad hoc routing, implemented on Lucent WaveLAN 802.11a wireless on IBM ThinkPads over a size nodes scenario spanning a region of over a mile. The success was recorded in Mobile Computing magazine (1999) and later published formally in IEEE Transactions on Wireless Communications, 2002 and ACM SIGMETRICS Performance Evaluation Review, 2001. This wireless ad hoc network mode has proven popular with multiplayer handheld game consoles, such as the Nintendo DS, PlayStation Portable, digital cameras, and other consumer electronics devices. Some devices can also share their Internet connection using ad hoc, becoming hotspots or "virtual routers". Similarly, the Wi-Fi Alliance promotes the specification Wi-Fi Direct for file transfers and media sharing through a new discovery- and security-methodology. Wi-Fi Direct launched in October 2010. Another mode of direct communication over Wi-Fi is Tunneled Direct-Link Setup (TDLS), which enables two devices on the same Wi-Fi network to communicate directly, instead of via the access point. Multiple access points An Extended Service Set may be formed by deploying multiple access points that are configured with the same SSID and security settings. 
Wi-Fi client devices typically connect to the access point that can provide the strongest signal within that service set. Increasing the number of Wi-Fi access points for a network provides redundancy, better range, support for fast roaming, and increased overall network-capacity by using more channels or by defining smaller cells. Except for the smallest implementations (such as home or small office networks), Wi-Fi implementations have moved toward "thin" access points, with more of the network intelligence housed in a centralized network appliance, relegating individual access points to the role of "dumb" transceivers. Outdoor applications may use mesh topologies. Performance Wi-Fi operational range depends on factors such as the frequency band, radio power output, receiver sensitivity, antenna gain, and antenna type as well as the modulation technique. Also, the propagation characteristics of the signals can have a big impact. At longer distances, and with greater signal absorption, speed is usually reduced. Transmitter power Compared to cell phones and similar technology, Wi-Fi transmitters are low-power devices. In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15 in the US. Equivalent isotropically radiated power (EIRP) in the European Union is limited to 20 dBm (100 mW). To reach requirements for wireless LAN applications, Wi-Fi has higher power consumption compared to some other standards designed to support wireless personal area network (PAN) applications. For example, Bluetooth provides a much shorter propagation range between 1 and 100 metres (1 and 100 yards) and so in general has a lower power consumption. Other low-power technologies such as ZigBee have fairly long range, but much lower data rate. The high power consumption of Wi-Fi makes battery life in some mobile devices a concern. Antenna An access point compliant with either 802.11b or 802.11g, using the stock omnidirectional antenna might have a range of . The same radio with an external semi parabolic antenna (15 dB gain) with a similarly equipped receiver at the far end might have a range over 20 miles. Higher gain rating (dBi) indicates further deviation (generally toward the horizontal) from a theoretical, perfect isotropic radiator, and therefore the antenna can project or accept a usable signal further in particular directions, as compared to a similar output power on a more isotropic antenna. For example, an 8 dBi antenna used with a 100 mW driver has a similar horizontal range to a 6 dBi antenna being driven at 500 mW. Note that this assumes that radiation in the vertical is lost; this may not be the case in some situations, especially in large buildings or within a waveguide. In the above example, a directional waveguide could cause the low-power 6 dBi antenna to project much further in a single direction than the 8 dBi antenna, which is not in a waveguide, even if they are both driven at 100 mW. On wireless routers with detachable antennas, it is possible to improve range by fitting upgraded antennas that provide a higher gain in particular directions. Outdoor ranges can be improved to many kilometres (miles) through the use of high gain directional antennas at the router and remote device(s). MIMO (multiple-input and multiple-output) Wi-Fi 4 and higher standards allow devices to have multiple antennas on transmitters and receivers. 
Multiple antennas enable the equipment to exploit multipath propagation on the same frequency bands giving much faster speeds and greater range. Wi-Fi 4 can more than double the range over previous standards. The Wi-Fi 5 standard uses the 5 GHz band exclusively, and is capable of multi-station WLAN throughput of at least 1 gigabit per second, and a single station throughput of at least 500 Mbit/s. As of the first quarter of 2016, The Wi-Fi Alliance certifies devices compliant with the 802.11ac standard as "Wi-Fi CERTIFIED ac". This standard uses several signal processing techniques such as multi-user MIMO and 4X4 Spatial Multiplexing streams, and wide channel bandwidth (160 MHz) to achieve its gigabit throughput. According to a study by IHS Technology, 70% of all access point sales revenue in the first quarter of 2016 came from 802.11ac devices. Radio propagation With Wi-Fi signals line-of-sight usually works best, but signals can transmit, absorb, reflect, refract, diffract and up and down fade through and around structures, both man-made and natural. Wi-Fi signals are very strongly affected by metallic structures (including rebar in concrete, low-e coatings in glazing) and water (such as found in vegetation.) Due to the complex nature of radio propagation at typical Wi-Fi frequencies, particularly around trees and buildings, algorithms can only approximately predict Wi-Fi signal strength for any given area in relation to a transmitter. This effect does not apply equally to long-range Wi-Fi, since longer links typically operate from towers that transmit above the surrounding foliage. Mobile use of Wi-Fi over wider ranges is limited, for instance, to uses such as in an automobile moving from one hotspot to another. Other wireless technologies are more suitable for communicating with moving vehicles. Distance records Distance records (using non-standard devices) include in June 2007, held by Ermanno Pietrosemoli and EsLaRed of Venezuela, transferring about 3 MB of data between the mountain-tops of El Águila and Platillon. The Swedish Space Agency transferred data , using 6 watt amplifiers to reach an overhead stratospheric balloon. Interference Wi-Fi connections can be blocked or the Internet speed lowered by having other devices in the same area. Wi-Fi protocols are designed to share the wavebands reasonably fairly, and this often works with little to no disruption. To minimize collisions with Wi-Fi and non-Wi-Fi devices, Wi-Fi employs Carrier-sense multiple access with collision avoidance (CSMA/CA), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non-Wi-Fi sources. Nevertheless, Wi-Fi networks are still susceptible to the hidden node and exposed node problem. A standard speed Wi-Fi signal occupies five channels in the 2.4 GHz band. Interference can be caused by overlapping channels. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap (no adjacent-channel interference). The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is, therefore, not accurate. Channels 1, 6, and 11 are the only group of three non-overlapping channels in North America. However, whether the overlap is significant depends on physical spacing. 
Channels that are four apart interfere a negligible amount—much less than reusing channels (which causes co-channel interference)—if transmitters are at least a few metres apart. In Europe and Japan where channel 13 is available, using Channels 1, 5, 9, and 13 for 802.11g and 802.11n is recommended. However, many 2.4 GHz 802.11b and 802.11g access-points default to the same channel on initial startup, contributing to congestion on certain channels. Wi-Fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices' use of other access points as well as with decreased signal-to-noise ratio (SNR) between access points. These issues can become a problem in high-density areas, such as large apartment complexes or office buildings with many Wi-Fi access points. Other devices use the 2.4 GHz band: microwave ovens, ISM band devices, security cameras, ZigBee devices, Bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. It is also an issue when municipalities or other large entities (such as universities) seek to provide large area coverage. On some 5 GHz bands interference from radar systems can occur in some places. For base stations that support those bands they employ Dynamic Frequency Selection which listens for radar, and if it is found, it will not permit a network on that band. These bands can be used by low power transmitters without a licence, and with few restrictions. However, while unintended interference is common, users that have been found to cause deliberate interference (particularly for attempting to locally monopolize these bands for commercial purposes) have been issued large fines. Throughput Various layer-2 variants of IEEE 802.11 have different characteristics. Across all flavours of 802.11, maximum achievable throughputs are either given based on measurements under ideal conditions or in the layer-2 data rates. This, however, does not apply to typical deployments in which data are transferred between two endpoints of which at least one is typically connected to a wired infrastructure, and the other is connected to an infrastructure via a wireless link. This means that typically data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application that uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. The same references apply to the attached throughput graphs, which show measurements of UDP throughput measurements. Each represents an average throughput of 25 measurements (the error bars are there, but barely visible due to the small variation), is with specific packet size (small or large), and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and measurements do not cover packet errors but information about this can be found at the above references. 
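The effect of per-frame overhead on goodput described above can be shown with a back-of-the-envelope calculation. The overhead figure in this sketch is purely illustrative rather than a measured 802.11 or Ethernet value; the point is only that small application packets, such as VoIP frames, spend a much larger fraction of the transfer on headers and protocol overhead than large bulk-transfer packets do.

public class GoodputSketch {
    // Fraction of the transferred bytes that is application payload, assuming
    // a fixed per-frame overhead (headers, preamble and acknowledgement time
    // expressed as equivalent bytes). The overhead value is illustrative only.
    static double efficiency(int payloadBytes, int overheadBytes) {
        return (double) payloadBytes / (payloadBytes + overheadBytes);
    }

    public static void main(String[] args) {
        int overhead = 100; // hypothetical per-frame cost in byte-equivalents

        // A small VoIP-style payload versus a large bulk-transfer payload.
        System.out.printf("160-byte frames:  %.0f%% of the transfer is payload%n",
                100 * efficiency(160, overhead));
        System.out.printf("1460-byte frames: %.0f%% of the transfer is payload%n",
                100 * efficiency(1460, overhead));
    }
}

With these assumed numbers, small frames deliver roughly 60 percent payload while large frames deliver well over 90 percent, matching the qualitative difference between the VoIP and bulk-transfer profiles discussed above.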
The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various WLAN (802.11) flavours. The measurement hosts have been 25 metres (yards) apart from each other; loss is again ignored. Hardware Wi-Fi allows wireless deployment of local area networks (LANs). Also, spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs. However, building walls of certain materials, such as stone with high metal content, can block Wi-Fi signals. A Wi-Fi device is a short-range wireless device. Wi-Fi devices are fabricated on RF CMOS integrated circuit (RF circuit) chips. Since the early 2000s, manufacturers are building wireless network adapters into most laptops. The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in ever more devices. Different competitive brands of access points and client network-interfaces can inter-operate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Unlike mobile phones, any standard Wi-Fi device works anywhere in the world. Access point A wireless access point (WAP) connects a group of wireless devices to an adjacent wired LAN. An access point resembles a network hub, relaying data between connected wireless devices in addition to a (usually) single connected wired device, most often an Ethernet hub or switch, allowing wireless devices to communicate with other wired devices. Wireless adapter Wireless adapters allow devices to connect to a wireless network. These adapters connect to devices using various external or internal interconnects such as PCI, miniPCI, USB, ExpressCard, Cardbus, and PC Card. As of 2010, most newer laptop computers come equipped with built-in internal adapters. Router Wireless routers integrate a Wireless Access Point, Ethernet switch, and internal router firmware application that provides IP routing, NAT, and DNS forwarding through an integrated WAN-interface. A wireless router allows wired and wireless Ethernet LAN devices to connect to a (usually) single WAN device such as a cable modem, DSL modem, or optical modem. A wireless router allows all three devices, mainly the access point and router, to be configured through one central utility. This utility is usually an integrated web server that is accessible to wired and wireless LAN clients and often optionally to WAN clients. This utility may also be an application that is run on a computer, as is the case with as Apple's AirPort, which is managed with the AirPort Utility on macOS and iOS. Bridge Wireless network bridges can act to connect two networks to form a single network at the data-link layer over Wi-Fi. The main standard is the wireless distribution system (WDS). Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. 
Two wireless bridge devices may be used to connect two wired networks over a wireless link, useful in situations where a wired connection may be unavailable, such as between two separate homes or for devices that have no wireless networking capability (but have wired networking capability), such as consumer entertainment devices; alternatively, a wireless bridge can be used to enable a device that supports a wired connection to operate at a wireless networking standard that is faster than supported by the wireless network connectivity feature (external dongle or inbuilt) supported by the device (e.g., enabling Wireless-N speeds (up to the maximum supported speed on the wired Ethernet port on both the bridge and connected devices including the wireless access point) for a device that only supports Wireless-G). A dual-band wireless bridge can also be used to enable 5 GHz wireless network operation on a device that only supports 2.4 GHz wireless and has a wired Ethernet port. Repeater Wireless range-extenders or wireless repeaters can extend the range of an existing wireless network. Strategically placed range-extenders can elongate a signal area or allow for the signal area to reach around barriers such as those pertaining in L-shaped corridors. Wireless devices connected through repeaters suffer from an increased latency for each hop, and there may be a reduction in the maximum available data throughput. Besides, the effect of additional users using a network employing wireless range-extenders is to consume the available bandwidth faster than would be the case whereby a single user migrates around a network employing extenders. For this reason, wireless range-extenders work best in networks supporting low traffic throughput requirements, such as for cases whereby a single user with a Wi-Fi-equipped tablet migrates around the combined extended and non-extended portions of the total connected network. Also, a wireless device connected to any of the repeaters in the chain has data throughput limited by the "weakest link" in the chain between the connection origin and connection end. Networks using wireless extenders are more prone to degradation from interference from neighbouring access points that border portions of the extended network and that happen to occupy the same channel as the extended network. Embedded systems The security standard, Wi-Fi Protected Setup, allows embedded devices with a limited graphical user interface to connect to the Internet with ease. Wi-Fi Protected Setup has 2 configurations: The Push Button configuration and the PIN configuration. These embedded devices are also called The Internet of Things and are low-power, battery-operated embedded systems. Several Wi-Fi manufacturers design chips and modules for embedded Wi-Fi, such as GainSpan. Increasingly in the last few years (particularly ), embedded Wi-Fi modules have become available that incorporate a real-time operating system and provide a simple means of wirelessly enabling any device that can communicate via a serial port. This allows the design of simple monitoring devices. An example is a portable ECG device monitoring a patient at home. This Wi-Fi-enabled device can communicate via the Internet. These Wi-Fi modules are designed by OEMs so that implementers need only minimal Wi-Fi knowledge to provide Wi-Fi connectivity for their products. In June 2014, Texas Instruments introduced the first ARM Cortex-M4 microcontroller with an onboard dedicated Wi-Fi MCU, the SimpleLink CC3200. 
It makes embedded systems with Wi-Fi connectivity possible to build as single-chip devices, which reduces their cost and minimum size, making it more practical to build wireless-networked controllers into inexpensive ordinary objects. Network security The main issue with wireless network security is its simplified access to the network compared to traditional wired networks such as Ethernet. With wired networking, one must either gain access to a building (physically connecting into the internal network), or break through an external firewall. To access Wi-Fi, one must merely be within the range of the Wi-Fi network. Most business networks protect sensitive data and systems by attempting to disallow external access. Enabling wireless connectivity reduces security if the network uses inadequate or no encryption. An attacker who has gained access to a Wi-Fi network router can initiate a DNS spoofing attack against any other user of the network by forging a response before the queried DNS server has a chance to reply. Securing methods A common measure to deter unauthorized users involves hiding the access point's name by disabling the SSID broadcast. While effective against the casual user, it is ineffective as a security method because the SSID is broadcast in the clear in response to a client SSID query. Another method is to only allow computers with known MAC addresses to join the network, but determined eavesdroppers may be able to join the network by spoofing an authorized address. Wired Equivalent Privacy (WEP) encryption was designed to protect against casual snooping but it is no longer considered secure. Tools such as AirSnort or Aircrack-ng can quickly recover WEP encryption keys. Because of WEP's weakness the Wi-Fi Alliance approved Wi-Fi Protected Access (WPA) which uses TKIP. WPA was specifically designed to work with older equipment usually through a firmware upgrade. Though more secure than WEP, WPA has known vulnerabilities. The more secure WPA2 using Advanced Encryption Standard was introduced in 2004 and is supported by most new Wi-Fi devices. WPA2 is fully compatible with WPA. In 2017, a flaw in the WPA2 protocol was discovered, allowing a key replay attack, known as KRACK. A flaw in a feature added to Wi-Fi in 2007, called Wi-Fi Protected Setup (WPS), let WPA and WPA2 security be bypassed, and effectively broken in many situations. The only remedy as of late 2011 was to turn off Wi-Fi Protected Setup, which is not always possible. Virtual Private Networks can be used to improve the confidentiality of data carried through Wi-Fi networks, especially public Wi-Fi networks. A URI using the WIFI scheme can specify the SSID, encryption type, password/passphrase, and if the SSID is hidden or not, so users can follow links from QR codes, for instance, to join networks without having to manually enter the data. A MECARD-like format is supported by Android and iOS 11+. Common format: WIFI:S:<SSID>;T:<WEP|WPA|blank>;P:<PASSWORD>;H:<true|false|blank>; Sample WIFI:S:MySSID;T:WPA;P:MyPassW0rd;; Data security risks The older wireless encryption-standard, Wired Equivalent Privacy (WEP), has been shown easily breakable even when correctly configured. Wi-Fi Protected Access (WPA and WPA2) encryption, which became available in devices in 2003, aimed to solve this problem. Wi-Fi access points typically default to an encryption-free (open) mode. 
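As a small illustration of the WIFI: scheme shown above, the sketch below assembles such a string in code. The backslash-escaping of the characters \ ; , : and " follows the commonly described MECARD-like convention rather than a normative specification, so it should be verified against the QR reader or platform actually targeted:

// wifi_uri.cc - build a WIFI: configuration string of the form described above.
// The escaping rules are an assumption based on common practice, not a quoted specification.
#include <iostream>
#include <string>

std::string escape(const std::string& in) {
    std::string out;
    for (char c : in) {
        if (c == '\\' || c == ';' || c == ',' || c == ':' || c == '"') out += '\\';
        out += c;
    }
    return out;
}

std::string wifi_uri(const std::string& ssid, const std::string& auth,   // "WEP", "WPA" or ""
                     const std::string& password, bool hidden) {
    std::string uri = "WIFI:S:" + escape(ssid) + ";T:" + auth + ";P:" + escape(password) + ";";
    if (hidden) uri += "H:true;";   // the H field is commonly omitted when the SSID is not hidden
    return uri + ";";
}

int main() {
    std::cout << wifi_uri("MySSID", "WPA", "MyPassW0rd", false) << '\n';
    // prints: WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;  (the sample string given above)
}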
Novice users benefit from a zero-configuration device that works out-of-the-box, but this default does not enable any wireless security, providing open wireless access to a LAN. To turn security on requires the user to configure the device, usually via a software graphical user interface (GUI). On unencrypted Wi-Fi networks connecting devices can monitor and record data (including personal information). Such networks can only be secured by using other means of protection, such as a VPN or secure Hypertext Transfer Protocol over Transport Layer Security (HTTPS). Wi-Fi Protected Access encryption (WPA2) is considered secure, provided a strong passphrase is used. In 2018, WPA3 was announced as a replacement for WPA2, increasing security; it rolled out on June 26. Piggybacking Piggybacking refers to access to a wireless Internet connection by bringing one's computer within the range of another's wireless connection, and using that service without the subscriber's explicit permission or knowledge. During the early popular adoption of 802.11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time. Recreational logging and mapping of other people's access points have become known as wardriving. Indeed, many access points are intentionally installed without security turned on so that they can be used as a free service. Providing access to one's Internet connection in this fashion may breach the Terms of Service or contract with the ISP. These activities do not result in sanctions in most jurisdictions; however, legislation and case law differ considerably across the world. A proposal to leave graffiti describing available services was called warchalking. Piggybacking often occurs unintentionally – a technically unfamiliar user might not change the default "unsecured" settings to their access point and operating systems can be configured to connect automatically to any available wireless network. A user who happens to start up a laptop in the vicinity of an access point may find the computer has joined the network without any visible indication. Moreover, a user intending to join one network may instead end up on another one if the latter has a stronger signal. In combination with automatic discovery of other network resources (see DHCP and Zeroconf) this could lead wireless users to send sensitive data to the wrong middle-man when seeking a destination (see man-in-the-middle attack). For example, a user could inadvertently use an unsecured network to log into a website, thereby making the login credentials available to anyone listening, if the website uses an insecure protocol such as plain HTTP without TLS. On an unsecured access point, an unauthorized user can obtain security information (factory preset passphrase and/or Wi-Fi Protected Setup PIN) from a label on a wireless access point and use this information (or connect by the Wi-Fi Protected Setup pushbutton method) to commit unauthorized and/or unlawful activities. Societal aspects Wireless internet access has become much more embedded in society. It has thus changed how the society functions in many ways. Influence on developing countries Over half the world does not have access to the internet, prominently rural areas in developing nations. Technology that has been implemented in more developed nations is often costly and low energy efficient. 
This has led to developing nations using more low-tech networks, frequently running on renewable power sources that can be maintained solely through solar power, creating networks that are resistant to disruptions such as power outages. For instance, in 2007 a 450 km (280 mile) network between Cabo Pantoja and Iquitos in Peru was erected in which all equipment is powered only by solar panels. These long-range Wi-Fi networks have two main uses: to offer internet access to populations in isolated villages, and to provide healthcare to isolated communities. In the case of the aforementioned example, it connects the central hospital in Iquitos to 15 medical outposts which are intended for remote diagnosis.

Work habits
Access to Wi-Fi in public spaces such as cafes or parks allows people, in particular freelancers, to work remotely. While the accessibility of Wi-Fi is the strongest factor when choosing a place to work (75% of people would choose a place that provides Wi-Fi over one that does not), other factors influence the choice of specific hotspots. These range from the accessibility of other resources, like books, to the location of the workplace and the social aspect of meeting other people in the same place. Moreover, the increase of people working from public places results in more customers for local businesses, thus providing an economic stimulus to the area. Additionally, the same study noted that a wireless connection provides more freedom of movement while working: whether at home or at the office, it allows movement between different rooms or areas. In some offices (notably Cisco offices in New York) the employees do not have assigned desks but can work from any office by connecting their laptops to a Wi-Fi hotspot.

Housing
The internet has become an integral part of living. 81.9% of American households have internet access. Additionally, 89% of American households with broadband connect via wireless technologies. 72.9% of American households have Wi-Fi. Wi-Fi networks have also affected how the interiors of homes and hotels are arranged. For instance, architects have reported that their clients no longer want only one room as their home office, but would like to work near the fireplace or have the possibility to work in different rooms. This contradicts architects' pre-existing ideas about the use of the rooms they designed. Additionally, some hotels have noted that guests prefer to stay in certain rooms because they receive a stronger Wi-Fi signal.

Health concerns
The World Health Organization (WHO) says, "no health effects are expected from exposure to RF fields from base stations and wireless networks", but notes that it promotes research into effects from other RF sources. The International Agency for Research on Cancer (IARC) has classified radiofrequency electromagnetic fields as possibly carcinogenic to humans, Group 2B (a category used when "a causal association is considered credible, but when chance, bias or confounding cannot be ruled out with reasonable confidence"); this classification was based on risks associated with wireless phone use rather than Wi-Fi networks. The United Kingdom's Health Protection Agency reported in 2007 that exposure to Wi-Fi for a year results in the "same amount of radiation from a 20-minute mobile phone call". A review of studies involving 725 people who claimed electromagnetic hypersensitivity "...suggests that 'electromagnetic hypersensitivity' is unrelated to the presence of an EMF, although more research into this phenomenon is required."
Alternatives
Several other wireless technologies provide alternatives to Wi-Fi for different use cases:
Bluetooth, a short-distance network
Bluetooth Low Energy, a low-power variant of Bluetooth
Zigbee, a low-power, low data rate, short-distance communication protocol
Cellular networks, used by smartphones
WiMax, for providing long range wireless internet connectivity
LoRa, for long range wireless with low data rate
Some alternatives are "no new wires", re-using existing cable:
G.hn, which uses existing home wiring, such as phone and power lines
Several wired technologies for computer networking, which provide viable alternatives to Wi-Fi:
Ethernet over twisted pair

See also
Gi-Fi—a term used by some trade press to refer to faster versions of the IEEE 802.11 standards
HiperLAN
Indoor positioning system
Li-Fi
List of WLAN channels
Operating system Wi-Fi support
Power-line communication
San Francisco Digital Inclusion Strategy
WiGig
Wireless Broadband Alliance
Wi-Fi Direct
Hotspot (Wi-Fi)
Bluetooth

Notes

References

Further reading

Australian inventions
Computer-related introductions in 1999
Networking standards
Wireless communication systems
2214548
https://en.wikipedia.org/wiki/Gtkmm
Gtkmm
gtkmm (formerly known as gtk-- or gtk minus minus) is the official C++ interface for the popular GUI library GTK. gtkmm is free software distributed under the GNU Lesser General Public License (LGPL). gtkmm allows the creation of user interfaces either in code or with the Glade Interface Designer, using the Gtk::Builder class. Other features include typesafe callbacks, a comprehensive set of graphical control elements, and the extensibility of widgets via inheritance.

Features
Because gtkmm is the official C++ interface of the GUI library GTK, C++ programmers can use the common OOP techniques such as inheritance, and C++-specific facilities such as STL (in fact, many of the gtkmm interfaces, especially those for widget containers, are designed to be similar to the Standard Template Library (STL)).

Main features of gtkmm are listed as follows:
Use inheritance to derive custom widgets.
Type-safe signal handlers, in standard C++.
Polymorphism.
Use of Standard C++ Library, including strings, containers, and iterators.
Full internationalization with UTF-8.
Complete C++ memory management.
Object composition.
Automatic de-allocation of dynamically allocated widgets.
Full use of C++ namespaces.
No macros.
Cross-platform: Linux (gcc, LLVM), FreeBSD (gcc, LLVM), NetBSD (gcc), Solaris (gcc, Forte), Win32 (gcc, MSVC++), macOS (gcc), others.

Hello World in gtkmm

//HelloWorldWindow.h
#ifndef HELLOWORLDWINDOW_H
#define HELLOWORLDWINDOW_H

#include <gtkmm/window.h>
#include <gtkmm/button.h>

// Derive a new window widget from an existing one.
// This window will only contain a button labelled "Hello World"
class HelloWorldWindow : public Gtk::Window
{
public:
    HelloWorldWindow();

protected:
    Gtk::Button hello_world;
};

#endif

//HelloWorldWindow.cc
#include <iostream>
#include "HelloWorldWindow.h"

HelloWorldWindow::HelloWorldWindow()
    : hello_world("Hello World")
{
    // Set the title of the window.
    set_title("Hello World");

    // Add the member button to the window.
    add(hello_world);

    // Handle the 'click' event.
    hello_world.signal_clicked().connect([] () {
        std::cout << "Hello world" << std::endl;
    });

    // Display all the child widgets of the window.
    show_all_children();
}

//main.cc
#include <gtkmm/main.h>
#include "HelloWorldWindow.h"

int main(int argc, char *argv[])
{
    // Initialization
    Gtk::Main kit(argc, argv);

    // Create a hello world window object
    HelloWorldWindow example;

    // gtkmm main loop
    Gtk::Main::run(example);
    return 0;
}

The above program will create a window with a button labeled "Hello World". The button sends "Hello world" to standard output when clicked. The program is run using the following commands:

$ g++ -std=c++11 *.cc -o example `pkg-config gtkmm-3.0 --cflags --libs`
$ ./example

This is usually done using a simple makefile.

Applications
Some notable applications that use gtkmm include:
Amsynth
Cadabra (computer program)
Inkscape – vector graphics drawing.
Horizon EDA – an Electronic Design Automation package for printed circuit board design.
PDF Slicer – a simple application to extract, merge, rotate and reorder pages of PDF documents.
Workrave – assists in recovery and prevention of RSI.
Gnome System Monitor
Gigedit
GParted – disk partitioning tool.
Nemiver – GUI for the GNU debugger gdb.
PulseAudio tools: pavucontrol, paman, paprefs, pavumeter
RawTherapee
GNOME Referencer – document organiser and bibliography manager
Seq24
Synfig Studio
Linthesia
MySQL Workbench – Administrator Database GUI.
Ardour – open source digital audio workstation (DAW) for Linux and MacOS.
Gnote – desktop notetaking application.
VisualBoyAdvance
VMware Workstation and VMware Player both use gtkmm for their Linux ports.

See also
GTK
wxWidgets
FLTK
FOX toolkit
Qt
VCF

References

External links

Articles with example C++ code
C++ libraries
Free computer libraries
Free software programmed in C++
GTK language bindings
Software using the LGPL license
551666
https://en.wikipedia.org/wiki/IBM%20Research
IBM Research
IBM Research is the research and development division for IBM, an American multinational information technology company headquartered in Armonk, New York, with operations in over 170 countries. IBM Research is the largest industrial research organization in the world and has twelve labs on six continents. IBM employees have garnered six Nobel Prizes, six Turing Awards, 20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology, five National Medals of Science and three Kavli Prizes. As of 2018, the company has generated more patents than any other business in each of 25 consecutive years, which is a record. History The roots of today's IBM Research began with the 1945 opening of the Watson Scientific Computing Laboratory at Columbia University. This was the first IBM laboratory devoted to pure science and later expanded into additional IBM Research locations in Westchester County, New York, starting in the 1950s, including the Thomas J. Watson Research Center in 1961. Notable company inventions include the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the Universal Product Code (UPC), the financial swap, the Fortran programming language, SABRE airline reservation system, DRAM, copper wiring in semiconductors, the smartphone, the portable computer, the Automated Teller Machine (ATM), the silicon-on-insulator (SOI) semiconductor manufacturing process, Watson artificial intelligence and the Quantum Experience. Advances in nanotechnology include IBM in atoms, where a scanning tunneling microscope was used to arrange 35 individual xenon atoms on a substrate of chilled crystal of nickel to spell out the three letter company acronym. It was the first time atoms had been precisely positioned on a flat surface. Major undertakings at IBM Research have included the invention of innovative materials and structures, high-performance microprocessors and computers, analytical methods and tools, algorithms, software architectures, methods for managing, searching and deriving meaning from data and in turning IBM's advanced services methodologies into reusable assets. IBM Research's numerous contributions to physical and computer sciences include the Scanning Tunneling Microscope and high-temperature superconductivity, both of which were awarded the Nobel Prize. IBM Research was behind the inventions of the SABRE travel reservation system, the technology of laser eye surgery, magnetic storage, the relational database, UPC barcodes and Watson, the question-answering computing system that won a match against human champions on the Jeopardy! television quiz show. The Watson technology is now being commercialized as part of a project with healthcare company Anthem Inc. Other notable developments include the Data Encryption Standard (DES), fast Fourier transform (FFT), Benoît Mandelbrot's introduction of fractals, magnetic disk storage (hard disks, the MELD-Plus risk score, the one-transistor dynamic random-access memory (DRAM), the reduced instruction set computer (RISC) architecture, relational databases, and Deep Blue (grandmaster-level chess-playing computer). Notable IBM researchers There are a number of computer scientists "who made IBM Research famous." These include Frances E. Allen, Marc Auslander, John Backus, Charles H. Bennett (computer scientist), Erich Bloch, Grady Booch, Fred Brooks (known for his book The Mythical Man-Month), Peter Brown, Larry Carter, Gregory Chaitin, John Cocke, Alan Cobham, Edgar F. 
Codd, Don Coppersmith, Wallace Eckert, Ronald Fagin, Horst Feistel, Jeanne Ferrante, Zvi Galil, Ralph E. Gomory, Jim Gray, Joseph Halpern, Kenneth E. Iverson, Frederick Jelinek, Reynold B. Johnson, Benoit Mandelbrot, Robert Mercer (businessman), C. Mohan, Michael O. Rabin, Arthur Samuel, Barbara Simons, Alfred Spector, Gardiner Tucker, Moshe Vardi, John Vlissides, Mark N. Wegman and Shmuel Winograd. Laboratories IBM currently has 19 research facilities spread across 12 laboratories on six continents: Africa (Nairobi, Kenya, and Johannesburg, South Africa) Almaden (San Jose) Australia (Melbourne) Brazil (Sao Paulo and Rio de Janeiro) Cambridge - IBM Research and MIT-IBM Watson AI Lab (Cambridge, US) China (Beijing) Israel (Haifa) Ireland (Dublin) India (Delhi and Bengaluru) Tokyo (Tokyo and Shin-kawasaki) Zurich (Zurich) IBM Thomas J. Watson Research Center (Yorktown Heights and Albany) Historic research centers for IBM also include IBM La Gaude (Nice), the Cambridge Scientific Center, the IBM New York Scientific Center, 330 North Wabash (Chicago), IBM Austin Research Laboratory, and IBM Laboratory Vienna. In 2017, IBM invested $240 million to create the MIT–IBM Watson AI Lab. Headquartered in Cambridge, MA the Lab is a unique joint research venture in artificial intelligence established by IBM and MIT, which brings together researchers in academia and industry to advance AI that has a real world impact for business, academic and society. The Lab funds approximately 50 projects per year that are co-led by principal investigators from MIT and IBM Research, with results published regularly at top peer-reviewed journals and conferences. Projects range from computer vision, natural language processing and reinforcement learning, to devising new ways to ensure that AI systems are fair, reliable and secure. Almaden in Silicon Valley IBM Research – Almaden is in Almaden Valley, San Jose, California. Its scientists perform basic and applied research in computer science, services, storage systems, physical sciences, and materials science and technology. Almaden occupies part of a site owned by IBM at 650 Harry Road on nearly of land in the hills above Silicon Valley. The site, built in 1985 for the research center, was chosen because of its close proximity to Stanford University, UC Santa Cruz, UC Berkeley and other collaborative academic institutions. Today, the research division is still the largest tenant of the site, but the majority of occupants work for other divisions of IBM. IBM opened its first West Coast research center, the San Jose Research Laboratory in 1952, managed by Reynold B. Johnson. Among its first developments was the IBM 350, the first commercial moving head hard disk drive. Launched in 1956, this saw use in the IBM 305 RAMAC computer system. Subdivisions included the Advanced Systems Development Division. Directors of the center include hard disc drive developer Jack Harker. Prompted by a need for additional space, the center moved to its present Almaden location in 1986. Scientists at IBM Almaden have contributed to several scientific discoveries such as the development of photoresists and the quantum mirage effect. The following are some of the famous scientists who have worked in the past or are currently working in this laboratory: Rakesh Agrawal, Rama Akkiraju, John Backus, Raymond F. Boyce, Donald D. Chamberlin, Ashok K. Chandra, Edgar F. Codd, Mark Dean, Cynthia Dwork, Don Eigler, Ronald Fagin, Jim Gray, Laura M. Haas, Joseph Halpern, Andreas J. Heinrich, Reynold B. 
Johnson, Maria Klawe, Jaishankar Menon, Dharmendra Modha, William E. Moerner, C. Mohan, Stuart Parkin, Nick Pippenger, Dan Russell, Patricia Selinger, Ted Selker, Barbara Simons, Malcolm Slaney, Ramakrishnan Srikant, Larry Stockmeyer, Moshe Vardi, Jennifer Widom, Shumin Zhai. Australia IBM Research – Australia is a research and development laboratory established by IBM Research in 2009 in Melbourne. It is involved in social media, interactive content, healthcare analytics and services research, multimedia analytics, and genomics. The lab is headed by Vice President and Lab Director Joanna Batstone. It was to be the company’s first laboratory combining research and development in a single organisation. The opening of the Melbourne lab in 2011 received an injection of $22 million in Australian Federal Government funding and an undisclosed amount provided by the State Government. Brazil IBM Research – Brazil is one of twelve research laboratories comprising IBM Research, its first in South America. It was established in June 2010, with locations in São Paulo and Rio de Janeiro. Research focuses on Industrial Technology and Science, Systems of Engagement and Insight, Social Data Analytics and Natural Resources Solutions. The new lab, IBM's ninth at the time of opening and first in 12 years, underscores the growing importance of emerging markets and the globalization of innovation. In collaboration with Brazil's government, it will help IBM to develop technology systems around natural resource development and large-scale events such as the 2016 Summer Olympics. Engineer and associate lab director Ulisses Mello explains that IBM has four priority areas in Brazil: "The main area is related to natural resources management, involving oil and gas, mining and agricultural sectors. The second is the social data analytics segment that comprises the analysis of data generated from social networking sites [such as Twitter or Facebook], which can be applied, for example, to financial analysis. The third strategic area is nanotechnology applied to the development of the smarter devices for the intermittent production industry. This technology can be applied to, for example, blood testing or recovering oil from existing fields. And the last one is smarter cities." Japan The IBM Research – Tokyo, which was called IBM Tokyo Research Laboratory (TRL) before January 2009, is one of IBM's twelve major worldwide research laboratories. It is a branch of IBM Research, and about 200 researchers work for TRL. Established in 1982 as the Japan Science Institute (JSI) in Tokyo, it was renamed to IBM Tokyo Research Laboratory in 1986, and moved to Yamato in 1992 and back to Tokyo in 2012. IBM Tokyo Research Laboratory was established in 1982 as the Japan Science Institute (JSI) in Sanbanchō, Tokyo. It was IBM's first research laboratory in Asia. Hisashi Kobayashi was appointed the founding director of TRL in 1982; he served as director until 1986. JSI was renamed to the IBM Tokyo Research Laboratory in 1986. In 1988, English-to-Japanese machine translation system called "System for Human-Assisted Language Translation" (SHALT) was developed at TRL. It was used to translate IBM manuals. History TRL was shifted from downtown Tokyo to the suburbs to share a building with IBM Yamato Facility in Yamato, Kanagawa Prefecture in 1993. In 1993, world record was accomplished for generation of continuous coherent Ultraviolet rays. In 1996, Java JIT compiler was developed at TRL, and it was released for major IBM platforms. 
Numerous other technological breakthroughs were made at TRL. The team led by Chieko Asakawa (:ja:浅川智恵子), IBM Fellow since 2009, provided basic technology for IBM's software programs for the visually handicapped, IBM Home Page Reader in 1997 and IBM aiBrowser (:ja:aiBrowser) in 2007. TRL moved back to Tokyo in 2012, this time at IBM Toyosu Facility. Research TRL researchers are responsible for numerous breakthroughs in sciences and engineering. The researchers have presented multiple papers at international conferences, and published numerous papers in international journals. They have also contributed to the products and services of IBM, and patent filings. TRL conducts research in microdevices, system software, security and privacy, analytics and optimization, human computer interaction, embedded systems, and services sciences. Other activities TRL collaborates with the Japanese universities, and support their research programs. IBM donates its equipment such as servers, storage systems, and so forth to the Japanese universities to support their research programs under the Shared University Research (SUR) program. In 1987, IBM Japan Science Prize was created to recognize researchers, who are not over 45 years old, working at Japanese universities or public research institutes. It is awarded in physics, chemistry, computer science, and electronics. Israel IBM Research – Haifa, previously known as the Haifa Research Lab (HRL) was founded as a small scientific center in 1972. Since then, it has grown into a major lab that leads the development of innovative technologies and solutions for the IBM corporation. The lab’s offices are situated in three locations across Israel: Haifa, Tel Aviv, and Beer Sheva. IBM Research – Haifa employs researchers in a range of areas. Research projects are being executed today in areas such as artificial intelligence, hybrid cloud, quantum computing, blockchain, IoT, quality, cybersecurity, and industry domains such as healthcare. Dr. Aya Soffer is IBM Vice President of AI Technology and serves as the Director of the IBM Research Lab in Haifa, Israel. History In its 30th year, the IBM Haifa Research Lab in Israel moved to a new home on the University of Haifa campus. The researchers at the Lab are involved in special projects with academic institutions across Israel, the United States, and Europe, and actively participate in numerous consortiums as part of the EU Horizon 2020 programme. Today in 2020, the Lab describes itself as having the highest number of employees in Israel's hi-tech industry who hold advanced degrees in science, electrical engineering, mathematics, or related fields. Researchers participate in international conferences and are published in professional publications. In 2014, IBM Research announced the Cybersecurity Center of Excellence (CCoE) in Beer Sheva in collaboration with Ben-Gurion University of the Negev. Switzerland IBM Research – Zurich (previously called IBM Zurich Research Laboratory, ZRL) is the European branch of IBM Research. It was opened in 1956 and is located in Rüschlikon, near Zurich, Switzerland. In 1956, IBM opened their first European research laboratory in Adliswil, Switzerland, near Zurich. The lab moved to its own campus in neighboring Rüschlikon in 1962. The Zurich lab is staffed by a multicultural and interdisciplinary team of a few hundred permanent research staff members, graduate students and post-doctoral fellows, representing about 45 nationalities. 
Collocated with the lab is a Client Center (formerly the Industry Solutions Lab), an executive briefing facility demonstrating technology prototypes and solutions. The Zurich lab is world-renowned for its scientific achievements—most notably Nobel Prizes in physics in 1986 and 1987 for the invention of the scanning tunneling microscope and the discovery of high-temperature superconductivity, respectively. Other key inventions include trellis modulation, which revolutionized data transmission over telephone lines; Token Ring, which became a standard for local area networks and a highly successful IBM product; the Secure Electronic Transaction (SET) standard used for highly secure payments; and the Java Card OpenPlatform (JCOP), a smart card operating system. Most recently the lab was involved in the development of SuperMUC, a supercomputer that is cooled using hot water. The Zurich lab focus areas are future chip technologies; nanotechnology; data storage; quantum computing, brain-inspired computing; security and privacy; risk and compliance; business optimization and transformation; server systems. The Zurich laboratory is involved in many joint projects with universities throughout Europe, in research programs established by the European Union and the Swiss government, and in cooperation agreements with research institutes of industrial partners. One of the lab's most high-profile projects is called DOME, which is based on developing an IT roadmap for the Square Kilometer Array. The research projects pursued at the IBM Zurich lab are organized into four scientific and technical departments: Science & Technology, Cloud and AI Systems Research, Cognitive Computing & Industry Solutions and Security Research. The lab is currently managed by Alessandro Curioni. On 17 May 2011, IBM and the Swiss Federal Institute of Technology (ETH) Zurich opened the Binnig and Rohrer Nanotechnology Center, which is located on the same campus in Rüschlikon. Publications IBM Journal of Research and Development References Further reading External links IBM Research Official Website Projects Research History Highlights (Top Innovations) Research history by year Oral history interview with Martin Schwarzschild head of Watson Scientific Computation Laboratory at Columbia University, Charles Babbage Institute, University of Minnesota IBM Research's technical journals IBM facilities Research Computer science organizations Research and development organizations
57338073
https://en.wikipedia.org/wiki/Ishfaq%20Ahmad%20%28computer%20scientist%29
Ishfaq Ahmad (computer scientist)
Ishfaq Ahmad is a computer scientist, IEEE Fellow and Professor of Computer Science and Engineering at the University of Texas at Arlington (UTA). He is the Director of Center For Advanced Computing Systems (CACS) and has previously directed IRIS (Institute of Research in Security) at UTA. He is widely recognized for his contributions to scheduling techniques in parallel and distributed computing systems, and video coding. Education He received his Ph.D. degree in Computer Science and an M.S. degree in Computer Engineering from Syracuse University College of Engineering and Computer Science in 1992 and 1987, respectively; and a B.Sc. degree in Electrical Engineering from the University of Engineering and Technology, Lahore in Pakistan, in 1985. Prior to joining the University of Texas, he was an associate professor of Computer Science Department at the Hong Kong University of Science and Technology (HKUST). At HKUST, he also directed the university's Multi-media Technology Research Center. Research His research focus is on the broader areas of parallel and distributed computing systems and their applications, optimization algorithms, multimedia systems, video compression, assistive technologies, smart power grid, and energy-aware sustainable computing. His research work is published in 250 articles in books, and peer-reviewed journals and conference proceedings. Professor Ishfaq Ahmad's current research is funded by the U.S. Department of Justice (DOJ), National Science Foundation (NSF), Department of Education (GAANN Project), Semiconductor Research Corporation (SRC), Adobe Inc. and Texas Instruments. He is leading several efforts in sustainable computing and computing for sustainability. This includes launching of a new journal with Elsevier, Sustainable Computing: Informatics and Systems (SUSCOM) of which he is the founding editor-in-chief, and launching of the International Green Computing Conference. Awards Professor Ishfaq Ahmad has received numerous international research awards, including several best paper awards at leading conferences proceedings and top ranked journals, such as IEEE Circuits and Systems Society – 2007 Circuits and Systems for Video Technology Transactions Best Paper Award, IEEE Service Appreciation Award, and 2008 Outstanding Area Editor Award from the IEEE Transactions on Circuits and Systems for Video Technology. His research work in high-performance computing and video compression is widely cited with over 17,000 citations to his papers. He is listed in Pride of Pakistan, Hall of Fame. He is a Fellow of IEEE. Other appointments In addition to being a full-time professor at University of Texas at Arlington, he is also Visiting scientist at the U.S. 
Air Force Research Laboratory in Rome, New York Visiting scientist at the Institute of Computing Technology (ICT) in Beijing, China Honorary professor at the University of Electronic Sciences at Chengdu, China Honorary professor at the Amity University in India and UAE Certified ABET (Accreditation Board for Engineering and Technology) evaluator Member of the advisory board for European Commission on Energy Efficiency in Information and Communication Technologies Fellow of the IEEE Distinguished Life Fellow of the IDES (Institute of Doctors, Engineers, and Scientists) of India Member of the supercomputing advisory board for Lifeboat Foundation Founding Editor-in-Chief of the Journal, Sustainable Computing: Informatics and Systems Co-founder of the International Green and Sustainable Computing (IGSC) Conference In addition, he has served as editor of IEEE Transactions on Parallel and Distributed Systems, IEEE Distributed Systems Online, Journal of Parallel and Distributed Computing, IEEE Transactions on Circuits and Systems for Video Technology, and IEEE Transactions on Multimedia. Notable publications [BOOK] Handbook of Energy-Aware and Green Computing. Ishfaq Ahmad and Sanjay Ranka, Chapman and Hall/CRC Press, Taylor and Francis Group LLC, Jan. 2012, two volumes, over 1200 pages. [BOOK] Handbook of Exascale Computing. Sanjay Ranka and Ishfaq Ahmad. Under publication, Chapman and Hall/CRC Press, expected mid-2018. Static scheduling algorithms for allocating directed task graphs to multiprocessors. YK Kwok, Ishfaq Ahmad, ACM Computing Surveys 31 (4), 406-471. Dynamic critical-path scheduling: An effective technique for allocating task graphs to multiprocessors. YK Kwok, Ishfaq Ahmad, IEEE transactions on parallel and distributed systems 7 (5), 506-521.     Benchmarking and comparison of the task graph scheduling algorithms. YK Kwok, Ishfaq Ahmad, Journal of Parallel and Distributed Computing 59 (3), 381-422.  Video transcoding: an overview of various techniques and research issues. Ishfaq  Ahmad, X Wei, Y Sun, YQ Zhang, IEEE Transactions on multimedia 7 (5), 793-804.            Power-rate-distortion analysis for wireless video communication under energy constraints. Z He, Y Liang, L Chen, Ishfaq Ahmad, D Wu, IEEE Transactions on Circuits and Systems for Video Technology 15 (5), 645-658.    On exploiting task duplication in parallel program scheduling. Ishfaq Ahmad, YK Kwok, IEEE Transactions on Parallel and Distributed Systems 9 (9), 872-892.  Optimal task assignment in heterogeneous distributed computing systems. M Kafil, Ishfaq Ahmad, IEEE concurrency 6 (3), 42-50. Efficient scheduling of arbitrary task graphs to multiprocessors using a parallel genetic algorithm. YK Kwok, Ishfaq Ahmad, Journal of Parallel and Distributed Computing 47 (1), 58-77.      A cooperative game theoretical technique for joint optimization of energy consumption and response time in computational grids. SU Khan, Ishfaq Ahmad, IEEE Transactions on Parallel and Distributed Systems 20 (3), 346-360.          An integrated technique for task matching and scheduling onto distributed heterogeneous computing systems. MK Dhodhi, I Ahmad, A Yatama, I Ahmad, Journal of parallel and distributed computing 62 (9), 1338-1361.    Semi-distributed load balancing for massively parallel multicomputer systems. Ishfaq Ahmad, A Ghafoor, IEEE Transactions on Software Engineering 17 (10), 987-1004.          A fast adaptive motion estimation algorithm. 
Ishfaq Ahmad, W Zheng, J Luo, M Liou, IEEE Transactions on circuits and systems for video technology 16 (3), 420-438. References Year of birth missing (living people) Living people Fellow Members of the IEEE American computer scientists American academics of Pakistani descent Pakistani academics Academic journal editors Pakistani electrical engineers University of Engineering and Technology, Lahore alumni Pakistani computer scientists University of Texas at Arlington faculty Syracuse University College of Engineering and Computer Science alumni Government College University, Lahore alumni Pakistani expatriate academics Hong Kong University of Science and Technology faculty Pakistani emigrants to the United States
45588368
https://en.wikipedia.org/wiki/Ring%20learning%20with%20errors
Ring learning with errors
In post-quantum cryptography, ring learning with errors (RLWE) is a computational problem which serves as the foundation of new cryptographic algorithms, such as NewHope, designed to protect against cryptanalysis by quantum computers and also to provide the basis for homomorphic encryption. Public-key cryptography relies on construction of mathematical problems that are believed to be hard to solve if no further information is available, but are easy to solve if some information used in the problem construction is known. Some problems of this sort that are currently used in cryptography are at risk of attack if sufficiently large quantum computers can ever be built, so resistant problems are sought. Homomorphic encryption is a form of encryption that allows computation on ciphertext, such as arithmetic on numeric values stored in an encrypted database. RLWE is more properly called learning with errors over rings and is simply the larger learning with errors (LWE) problem specialized to polynomial rings over finite fields. Because of the presumed difficulty of solving the RLWE problem even on a quantum computer, RLWE based cryptography may form the fundamental base for public-key cryptography in the future just as the integer factorization and discrete logarithm problem have served as the base for public key cryptography since the early 1980s. An important feature of basing cryptography on the ring learning with errors problem is the fact that the solution to the RLWE problem can be used to solve the NP-hard shortest vector problem (SVP) in a lattice (a polynomial-time reduction from the SVP problem to the RLWE problem has been presented).

Background
The security of modern cryptography, in particular public-key cryptography, is based on the assumed intractability of solving certain computational problems if the size of the problem is large enough and the instance of the problem to be solved is chosen randomly. The classic example that has been used since the 1970s is the integer factorization problem. It is believed that it is computationally intractable to factor the product of two prime numbers if those prime numbers are large enough and chosen at random. As of 2015 research has led to the factorization of the product of two 384-bit primes but not the product of two 512-bit primes. Integer factorization forms the basis of the widely used RSA cryptographic algorithm.

The ring learning with errors (RLWE) problem is built on the arithmetic of polynomials with coefficients from a finite field. A typical polynomial is expressed as:

a(x) = a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + ... + a_1 x + a_0

Polynomials can be added and multiplied in the usual fashion. In the RLWE context the coefficients of the polynomials and all operations involving those coefficients will be done in a finite field, typically the field F_q = Z/qZ for a prime integer q. The set of polynomials over a finite field with the operations of addition and multiplication forms an infinite polynomial ring (F_q[x]). The RLWE context works with a finite quotient ring of this infinite ring. The quotient ring is typically the finite quotient (factor) ring formed by reducing all of the polynomials in F_q[x] modulo an irreducible polynomial Φ(x). This finite quotient ring can be written as F_q[x]/Φ(x) though many authors write Z_q[x]/Φ(x). If the degree of the polynomial Φ(x) is n, the quotient ring becomes the ring of polynomials of degree less than n modulo Φ(x) with coefficients from F_q. The values n, q, together with the polynomial Φ(x), partially define the mathematical context for the RLWE problem.
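To make the quotient-ring arithmetic above concrete, the toy sketch below performs addition and multiplication in F_q[x]/(x^n + 1) and forms one product-plus-error value of the b(x) = a(x)·s(x) + e(x) shape used in the problem statement that follows. The parameters n = 8 and q = 97 are deliberately tiny and insecure, x^n + 1 is only one common choice of quotient polynomial, and the ternary coefficients of the "small" polynomials are a crude stand-in for the uniform or discrete Gaussian sampling described below:

// rlwe_toy.cc - toy arithmetic in R_q = F_q[x]/(x^n + 1) and one RLWE-style sample b = a*s + e.
// All parameters and the sampling are illustrative assumptions; nothing here is secure.
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

using Poly = std::vector<int64_t>;   // coefficient vector, degree < n

const int     n = 8;    // degree of the quotient polynomial x^n + 1 (toy-sized)
const int64_t q = 97;   // a small prime modulus (toy-sized)

int64_t mod_q(int64_t v) { return ((v % q) + q) % q; }

// Multiply in F_q[x]/(x^n + 1): x^n wraps around to -1.
Poly mul(const Poly& a, const Poly& b) {
    Poly c(n, 0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            int64_t term = a[i] * b[j];
            int k = i + j;
            if (k >= n) { k -= n; term = -term; }   // reduction modulo x^n + 1
            c[k] = mod_q(c[k] + term);
        }
    return c;
}

Poly add(const Poly& a, const Poly& b) {
    Poly c(n);
    for (int i = 0; i < n; ++i) c[i] = mod_q(a[i] + b[i]);
    return c;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int64_t> uniform(0, q - 1);  // coefficients of the public polynomial a(x)
    std::uniform_int_distribution<int>     ternary(-1, 1);     // crude stand-in for a "small" distribution

    Poly a(n), s(n), e(n);
    for (int i = 0; i < n; ++i) {
        a[i] = uniform(rng);
        s[i] = mod_q(ternary(rng));
        e[i] = mod_q(ternary(rng));
    }

    Poly b = add(mul(a, s), e);   // one RLWE-style pair (a(x), b(x))
    for (int i = 0; i < n; ++i) std::cout << b[i] << (i + 1 < n ? ' ' : '\n');
}

Real parameter sets use degrees in the hundreds or thousands and much larger moduli, and a power-of-two n makes x^n + 1 cyclotomic, which matters for the security reduction discussed later.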
Another concept necessary for the RLWE problem is the idea of "small" polynomials with respect to some norm. The typical norm used in the RLWE problem is known as the infinity norm (also called the uniform norm). The infinity norm of a polynomial is simply the largest coefficient of the polynomial when these coefficients are viewed as integers. Hence, ||a(x)||_∞ = b states that the infinity norm of the polynomial a(x) is b. Thus b is the largest coefficient of a(x).

The final concept necessary to understand the RLWE problem is the generation of random polynomials in F_q[x]/Φ(x) and the generation of "small" polynomials. A random polynomial is easily generated by simply randomly sampling the coefficients of the polynomial from F_q, where F_q is typically represented as the set {-(q-1)/2, ..., -1, 0, 1, ..., (q-1)/2}.

Randomly generating a "small" polynomial is done by generating the coefficients of the polynomial from F_q in a way that either guarantees or makes very likely small coefficients. When q is a prime integer, there are two common ways to do this:
Using Uniform Sampling – The coefficients of the small polynomial are uniformly sampled from a set of small coefficients. Let b be an integer that is much less than q. If we randomly choose coefficients from the set {-b, ..., -1, 0, 1, ..., b}, the polynomial will be small with respect to the bound (b).
Using discrete Gaussian sampling – For an odd value for q, the coefficients of the polynomial are randomly chosen by sampling from the set {-(q-1)/2, ..., (q-1)/2} according to a discrete Gaussian distribution with mean 0 and distribution parameter σ. The references describe in full detail how this can be accomplished. It is more complicated than uniform sampling but it allows for a proof of security of the algorithm. The paper "Sampling from Discrete Gaussians for Lattice-Based Cryptography on a Constrained Device" by Dwarakanath and Galbraith provides an overview of this problem.

The RLWE Problem
The RLWE problem can be stated in two different ways: a "search" version and a "decision" version. Both begin with the same construction. Let
a_i(x) be a set of random but known polynomials from F_q[x]/Φ(x) with coefficients from all of F_q.
e_i(x) be a set of small random and unknown polynomials relative to a bound b in the ring F_q[x]/Φ(x).
s(x) be a small unknown polynomial relative to a bound b in the ring F_q[x]/Φ(x).
b_i(x) = (a_i(x) · s(x)) + e_i(x).
The Search version entails finding the unknown polynomial s(x) given the list of polynomial pairs (a_i(x), b_i(x)). The Decision version of the problem can be stated as follows. Given a list of polynomial pairs (a_i(x), b_i(x)), determine whether the b_i(x) polynomials were constructed as b_i(x) = (a_i(x) · s(x)) + e_i(x) or were generated randomly from F_q[x]/Φ(x) with coefficients from all of F_q.

The difficulty of this problem is parameterized by the choice of the quotient polynomial (Φ(x)), its degree (n), the field (F_q), and the smallness bound (b). In many RLWE based public key algorithms the private key will be a pair of small polynomials s(x) and e(x). The corresponding public key will be a pair of polynomials a(x), selected randomly from F_q[x]/Φ(x), and the polynomial t(x) = a(x)·s(x) + e(x). Given a(x) and t(x), it should be computationally infeasible to recover the polynomial s(x).

Security Reduction
In cases where the polynomial Φ(x) is a cyclotomic polynomial, the difficulty of solving the search version of RLWE problem is equivalent to finding a short vector (but not necessarily the shortest vector) in an ideal lattice formed from elements of Z[x]/Φ(x) represented as integer vectors. This problem is commonly known as the Approximate Shortest Vector Problem (α-SVP) and it is the problem of finding a vector shorter than α times the shortest vector. The authors of the proof for this equivalence write: "...
we give a quantum reduction from approximate SVP (in the worst case) on ideal lattices in to the search version of ring-LWE, where the goal is to recover the secret (with high probability, for any ) from arbitrarily many noisy products." In that quote, The ring is and the ring is . The α-SVP in regular lattices is known to be NP-hard due to work by Daniele Micciancio in 2001, although not for values of α required for a reduction to general learning with errors problem. However, there is not yet a proof to show that the difficulty of the α-SVP for ideal lattices is equivalent to the average α-SVP. Rather we have a proof that if there are any α-SVP instances that are hard to solve in ideal lattices then the RLWE Problem will be hard in random instances. Regarding the difficulty of Shortest Vector Problems in Ideal Lattices, researcher Michael Schneider writes, "So far there is no SVP algorithm making use of the special structure of ideal lattices. It is widely believed that solving SVP (and all other lattice problems) in ideal lattices is as hard as in regular lattices." The difficulty of these problems on regular lattices is provably NP-hard. There are, however, a minority of researchers who do not believe that ideal lattices share the same security properties as regular lattices. Peikert believes that these security equivalences make the RLWE problem a good basis for future cryptography. He writes: "There is a mathematical proof that the only way to break the cryptosystem (within some formal attack model) on its random instances is by being able to solve the underlying lattice problem in the worst case" (emphasis in the original). RLWE Cryptography A major advantage that RLWE based cryptography has over the original learning with errors (LWE) based cryptography is found in the size of the public and private keys. RLWE keys are roughly the square root of keys in LWE. For 128 bits of security an RLWE cryptographic algorithm would use public keys around 7000 bits in length. The corresponding LWE scheme would require public keys of 49 million bits for the same level of security. On the other hand, RLWE keys are larger than the keys sizes for currently used public key algorithms like RSA and Elliptic Curve Diffie-Hellman which require public key sizes of 3072 bits and 256 bits, respectively, to achieve a 128-bit level of security. From a computational standpoint, however, RLWE algorithms have been shown to be the equal of or better than existing public key systems. Three groups of RLWE cryptographic algorithms exist: Ring learning with errors key exchanges (RLWE-KEX) The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The basic idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper appeared in 2012 after a provisional patent application was filed in 2012. In 2014, Peikert presented a key transport scheme following the same basic idea of Ding's, where the new idea of sending additional 1 bit signal for rounding in Ding's construction is also utilized. An RLWE version of the classic MQV variant of a Diffie-Hellman key exchange was later published by Zhang et al. The security of both key exchanges is directly related to the problem of finding approximate short vectors in an ideal lattice. 
Ring learning with errors signature (RLWE-SIG) A RLWE version of the classic Feige–Fiat–Shamir Identification protocol was created and converted to a digital signature in 2011 by Lyubashevsky. The details of this signature were extended in 2012 by Gunesyu, Lyubashevsky, and Popplemann in 2012 and published in their paper "Practical Lattice Based Cryptography – A Signature Scheme for Embedded Systems." These papers laid the groundwork for a variety of recent signature algorithms some based directly on the ring learning with errors problem and some which are not tied to the same hard RLWE problems. Ring learning with errors homomorphic encryption (RLWE-HOM) The purpose of homomorphic encryption is to allow the computations on sensitive data to occur on computing devices that should not be trusted with the data. These computing devices are allowed to process the ciphertext which is output from a homomorphic encryption. In 2011, Brakersky and Vaikuntanathan, published "Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages" which builds a homomorphic encryption scheme directly on the RLWE problem. References Computational problems Computational hardness assumptions Cryptography Post-quantum cryptography Lattice-based cryptography
15468271
https://en.wikipedia.org/wiki/PathWave%20Design
PathWave Design
PathWave Design is a division of Keysight Technologies that was formerly called EEsof ( ; electronic engineering software). It is a provider of electronic design automation (EDA) software that helps engineers design products such as cellular phones, wireless networks, radar, satellite communications systems, and high-speed digital wireline infrastructure. Applications include electronic system level (ESL), high-speed digital, RF-Mixed signal, device modeling, RF and Microwave design for commercial wireless, aerospace, and defense markets. History EEsof was founded in 1983 by an entrepreneur, Charles J. ("Chuck") Abronson, and a former Compact Software employee, Bill Childs. EEsof's first products included high-frequency circuit simulators such as Touchstone and Libra. Although the Touchstone simulator itself is obsolete, its eponymous file format lives on. EEsof was acquired by Hewlett-Packard in 1993 and later spun out first as part of Agilent Technologies in 1999 and then as part of Keysight Technologies. After the merger of HP and EEsof, the EEsof products were combined with the HP simulator, Microwave Design System (MDS). HP's entry, MDS, had been introduced in 1985. It was developed in-house and comprised a linear circuit simulator with integrated schematic capture and graphical layout with back-annotation, a first for RF EDA software. MDS was offered on UNIX workstations from HP, Sun, and Apollo as well on the PC. Before the introduction of MDS, HP had a marketing relationship with EEsof and sold Touchstone software on HP platforms such as the Series 200 (but not on the PC). The marketing relationship ended after the introduction of HP's MDS product. The HP and EEsof harmonic balance simulators also had parallel lives before the merger. HP funded an employee Ken Kundert to do a Ph.D. at UC Berkeley. For his thesis, he developed Spectre, the first harmonic balance prototype. Some sources argue that since Berkeley had an open policy to all of its research work, EEsof was able to learn about the project and released a product, Libra, before HP was able to commercialize it in MDS. (Libra was a play on the Latin word libra for balance or scales). However, other sources say that Libra was developed completely independently. In any case, Kundert left HP to join Cadence Design Systems shortly after receiving his Ph.D. There he developed Spectre and SpectreRF. In 1997, HP acquired Optimization Systems Associates (OSA), founded by John Bandler in 1983. OSA thereby became part of HP EEsof. OSA’s products included HarPE and OSA90/hope, featuring the world’s then most powerful harmonic balance optimizer, as well as Empipe, Empipe3D, EmpipeExpress, and empath. OSA's optimization technology and the OSA's Empipe family became the foundation of HFSS Designer and Momentum Optimization. This integration into HP’s electromagnetic product line consolidated a paradigm shift in HP's offering of their tools—a shift from analysis to design. Longer-term plans of the acquisition included leveraging OSA technology across HP's circuit- and device-simulation product lines. On January 7, 2014, Agilent announced a plan to spin off its electronic measurement divisions, including EEsof, as a separate company, Keysight Technologies. In 2019, Keysight started to phase out the EEsof brand in favor of their new PathWave Design branded TestOps toolchain, although the old brand is still used in some places. 
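As an aside on the Touchstone file format mentioned above: it is a simple, line-oriented plain-text format for frequency-dependent network parameters and remains a de facto interchange format for S-parameter data. A hypothetical two-port file (.s2p) might look like the sketch below, where the option line (# GHz S MA R 50) and the S11/S21/S12/S22 column order follow the usual conventions but the numeric values are invented purely for illustration:

! Hypothetical 2-port S-parameter data (illustrative values only)
# GHz S MA R 50
! freq    |S11|  ang(S11)   |S21|  ang(S21)   |S12|  ang(S12)   |S22|  ang(S22)
1.0000    0.95   -12.5      3.50   165.0      0.02   55.0       0.60   -35.0
2.0000    0.92   -25.0      3.10   150.0      0.03   48.0       0.58   -48.0

Lines beginning with ! are comments; the option line specifies the frequency unit, the parameter type (S), the data format (here magnitude/angle), and the reference impedance.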
Products Platforms: PathWave Advanced Design System (formerly EEsof ADS) – RF, microwave and high speed digital EDA software for wireless communications and networking, aerospace and defense, and signal integrity applications PathWave EM Design (formerly EEsof EMPro or Electromagnetic Professional) - 3D EM platform that integrates 3D EM simulation and circuit simulation PathWave RF Synthesis (formerly EEsof/Eagleware Genesys) - RF and microwave design for circuit board and subsystem designers PathWave System Design (formerly EEsof/Eagleware/Elanix SystemVue) - Electronic system-level design tool for system architects and algorithm developers to change the physical layer (PHY) of wireless and aerospace/defense communications systems PathWave RFIC Design (formerly EEsof GoldenGate and RF Design Environment) - RFIC/RF mixed-signal simulator PathWave Device Modeling (formerly EEsof Integrated Circuit Characterization and Analysis Program (IC-CAP)), PathWave Model Builder (formerly MBP), PathWave Model QA (MQA), PathWave WaferPro - Device modeling, characterization, and validation EM solvers: Momentum – 3D planar EM simulator that uses frequency domain Method of Moments (MoM) technology. Available with the PathWave ADS, RF Synthesis, and RFIC Design platforms. FEM Element (formerly Electromagnetic Design System) – full-wave 3D simulator, frequency domain. Available with the PathWave ADS and EM Design platforms. FDTD Element – full 3D, time-domain simulator to analyze 3D structures. Available with the PathWave EM Design platform. Mergers and acquisitions The GoldenGate product was added with the Xpedion acquisition. The SystemVue and Genesys products were added as a result of the acquisition of Eagleware-Elanix in 2005., In turn, Eagleware-Elanix was a result of the merger of Eagleware and Elanix. Eagleware itself was founded in 1985 by Randy Rhea. The MBP and MQA platforms were added with EEsof's acquisition of Accelicon Technologies. See also List of EDA companies Comparison of EDA Software Optimization Systems Associates Notes Electronic design automation companies Software companies based in California Hewlett-Packard acquisitions Software companies of the United States
31091235
https://en.wikipedia.org/wiki/Arista%20Records%20LLC%20v.%20Lime%20Group%20LLC
Arista Records LLC v. Lime Group LLC
Arista Records LLC v. Lime Group LLC, 715 F. Supp. 2d 481 (S.D.N.Y. 2010), is a United States district court case in which the Southern District of New York held that Lime Group LLC, the defendant, induced copyright infringement with its peer-to-peer file sharing software, LimeWire. The court issued a permanent injunction to shut it down. The lawsuit is a part of a larger campaign against piracy by the Recording Industry Association of America (RIAA). Background LimeWire LLC was founded in June 2000 and released its software program, LimeWire, the following August. LimeWire was widely used; in 2006, when the lawsuit was filed, it had almost 4 million users per day. LimeWire is a program that uses peer-to-peer (P2P) file sharing technology, which permits users to share digital files via an Internet-based network known as Gnutella; most of these were MP3 files containing copyrighted audio recordings. An expert report presented during the trial found, in a random sample of files available on LimeWire, that 93% were protected by copyright. These files were distributed, published, and copied by LimeWire users without authorization from the copyright owners, potentially competing with the recording companies' own sale of the music. Thirteen major recording companies led by Arista Records (when sued owned by Sony BMG, now by Sony Music) sued LimeWire LLC, Lime Group LLC, Mark Gorton, Greg Bildson, and M.J.G. LimeWire Family Limited Partnership for copyright infringement. LimeWire filed antitrust counterclaims against the plaintiffs and ancillary counterclaims for conspiracy in restraint of trade, deceptive trade practices, and tortious interference with prospective business relations, all dismissed by the court in 2007. The recording companies alleged that the software is used to obtain and share unauthorized copies, and that LimeWire facilitated this copyright infringement by distributing and maintaining the software. They claimed that LimeWire was liable for: inducement of copyright infringement contributory copyright infringement vicarious copyright infringement violations of state common law prohibiting copyright infringement and unfair competition Opinion of the court On 11 May 2010, Judge Kimba Wood granted the RIAA's motion for summary judgment, finding LimeWire liable for inducement of copyright infringement, common law copyright infringement and unfair competition as to the plaintiffs' pre-1978 copyrighted works. The court amended its opinion and court order on 25 May 2010. Because all persons and corporations who participate in, exercise control over or benefit from an infringement are jointly and severally liable as copyright infringers, all claims of copyright infringement were equally applicable against Lime Group LLC and Mark Gorton, as sole executive director. The claims against Bildson, a former employee, were dropped in exchange for providing factual information about LimeWire and payment of a settlement. The court did not settle the fraudulent conveyance claim against M.J.G. LimeWire Family Limited Partnership under summary judgment, due to a genuine issue of fact. Secondary liability The RIAA based their infringement claims on theories of secondary liability, which may be imposed on a party that has not directly infringed a copyright, but has nonetheless played a significant role. A party that distributes infringement-enabling products or services may facilitate direct infringement on a massive scale, making effective enforcement for the copyright owner illusory. 
The court established the prerequisite of direct infringement, supported by expert testimony which estimated that 98.8% of the files requested for download through LimeWire were copyright-protected and not authorized for free distribution. Inducement To establish a claim for inducement, the RIAA had to show that LimeWire, by distributing and maintaining LimeWire, engaged in purposeful conduct that encouraged copyright infringement with intent to do so. Applying the inducement doctrine as announced by the Supreme Court in 2005 in MGM Studios, Inc. v. Grokster, Ltd., the court found overwhelming evidence of LimeWire's purposeful conduct fostered infringement. It established intent to encourage infringement by distributing LimeWire through the following factors: LimeWire's awareness of substantial infringement by users; its efforts to attract infringing users; its efforts to enable and assist users to commit infringement; its dependence on infringing use for the success of its business; and its failure to mitigate infringing activities. The court said that LimeWire's electronic notice asking users to affirm that they were not using it for copyright infringement, did not constitute any meaningful effort to mitigate infringement. In 2006, LimeWire had implemented an optional hash-based filter capable of identifying a digital file with copyrighted content and blocking a user from downloading the file, but the court did not consider this a sufficient barrier. It found inducement, noting that failure to utilize existing technology to create meaningful barriers against infringement is a strong indicator of intent. Contributory copyright infringement The parties also cross-moved for summary judgment on the claim that LimeWire was secondarily liable for contributory copyright infringement because it materially contributed to the infringement committed by users. Unlike an inducement claim, a claim for contributory infringement does not require a show of intent, rather it must show that the defendant 1) had actual or constructive knowledge of the activity, and 2) encouraged or assisted others' infringement or provided machinery or goods that facilitated infringement. A joint amicus curiae brief was submitted by the Electronic Frontier Foundation and a coalition of consumers and industry urging the court to apply the law in a manner that would not chill technological innovation. In particular, the brief urged the court to preserve the "Sony Betamax" doctrine developed in Sony Corp. of America v. Universal City Studios, Inc., which protects developers of technologies capable of substantial noninfringing uses from contributory infringement liability. The court declined to rule on this point of law because the case lacked enough facts to determine whether or not the software was actually capable of substantial noninfringing uses. Vicarious copyright infringement LimeWire also moved for summary judgment on the claim that it was vicariously liable for copyright infringement. Vicarious liability occurs when a defendant profits from direct infringement, yet declines to stop it. The court found substantial evidence that LimeWire had the right and ability to limit the use of its product for infringing purposes, including by implementing filtering, denying access, and by supervising and regulating users, none of which were exercised. 
Furthermore, the court found that LimeWire possessed a direct financial interest in the infringing activity, that its revenue was based on advertising and increased sales of LimeWire Pro, both consequences of its ability to attract infringing users. The court denied LimeWire's motion. Common law copyright infringement and unfair competition The parties cross-moved for summary judgment on the claim of common law copyright infringement and unfair competition. The claim was included because federal copyright law does not cover sound recordings made prior to 1972. The elements for finding inducement for copyright infringement are, as under federal law: direct infringement, purposeful conduct, and intent. The court found these established on the previously introduced evidence and granted summary judgment to the RIAA. The unfair competition claim was also granted because they had had to compete with LimeWire's free and unauthorized reproduction and distribution of plaintiffs' copyrighted recordings. Evidentiary motions LimeWire filed a number of motions challenging the admissibility of evidence submitted by the RIAA. The court found all the evidentiary objections without merit and denied the motions; it did place certain conditions on plaintiffs' future interaction with a specific former LimeWire employee. Subsequent developments Permanent injunction As the litigation continued, the parties consented to a permanent injunction on 26 October 2010 shutting down the LimeWire file-sharing service. The permanent injunction prohibits LimeWire from copying, reproducing, downloading, or distributing a sound recording, as well as directly or indirectly enabling or assisting any user to use the LimeWire system to copy, reproduce or distribute any sound recording, or make available any of the copyrighted works. LimeWire was also required to disable the file trading and distribution functionality for current and legacy users, to provide all users with a tool to uninstall the software, to obtain permission from the plaintiffs before offering any new version of the software, to implement a copyrighted content filter in any new versions developed, and to encourage all legacy users to upgrade if a new version was approved. The court order also required that if LimeWire sells or licenses any of its assets, as a condition of the transfer, it must require the purchaser or licensee to submit to the court's jurisdiction and agree to be bound by the permanent injunction. Following the permanent injunction, the website www.limewire.com was effectively shut down and displays a notice to that effect. LimeWire also shut down its online store. Soon after the injunction was ordered, a report appeared on TorrentFreak about the availability of LimeWire Pirate Edition (LPE), a new, improved LimeWire client released by "a secret dev team." The RIAA quickly complained to the judge that LimeWire was not complying with the injunction, and alleged that the LPE developer was a current or former LimeWire employee. The court ordered the LPE website shut down and allowed limited discovery to obtain the identity of the primary developer. LimeWire denied affiliation with the developer, who likewise denied being affiliated with Lime Wire LLC. The developer, who initially said his motivation was for working on the software was "to make RIAA lawyers cry into their breakfast cereal," voluntarily shuttered the LPE website rather than lose anonymity by contesting the court order. 
Limitations on damages The court maintained jurisdiction in order to provide a final ruling on LimeWire's liability and damages to determine the appropriate level necessary to compensate the record companies. Citing a hypothetical argument from Nimmer on Copyright and the U.S. Supreme Court case Feltner v. Columbia Pictures Television, Inc., the plaintiffs proposed one award for each infringement by individual LimeWire users. The plaintiffs estimated, using a consultant's statistical analysis, that there were more than 500 million downloads of post-1972 works using the LimeWire system; however, they offered no suggestions on how to determine a precise number of direct infringers of each work. The defendants cited McClatchey v. The Associated Press and related case law which rejected the Feltner precedent and the Nimmer hypothetical for situations involving statutory damages for infringement committed on a massive scale. In their pleadings, the defendants pointed out that the 500 million direct infringements estimated by the plaintiffs could lead to a maximum damage award of $75 trillion ($75,000,000,000,000). On 11 March 2011, the court ruled that McClatchey and related case law did indeed trump Feltner and the Nimmer hypothetical, and held that the per-infringement proposal produces "an absurd result" potentially in the "trillions" of dollars, given the large number of uploads and downloads by LimeWire users over a period of several years. The court noted that despite its claims to the contrary, the RIAA hadn't ever argued for a per-infringement award until this case, and even then not until September 2010, more than three years after filing the lawsuit. The court added that the plaintiffs were "suggesting an award that is more money than the entire music recording industry has made since Edison's invention of the phonograph in 1877." Accordingly, the court ruled that the labels were entitled to an award only on a per-work, rather than per-infringement basis, limiting the award to between $7.5 million and $1.5 billion of statutory damages. The court calculated the damages at $750 to $150,000 for each of approximately 10,000 post-1972 recordings infringed via LimeWire, plus "actual" damages for infringement of about 1,000 earlier works (statutory damages are not an option for the earlier works). Inapplicable direct infringement actions The defendants then asked for a partial summary judgment exempting 104 works which had been directly infringed by LimeWire users, and for which the plaintiffs had already recovered damages in separate actions. The defendants claimed the individual defendants in those cases are "jointly and severally liable" with the defendants in this case, and, citing Bouchat v. Champion Prods., Inc., argued that the language of 17 USC 504 meant to group together all actions relating to infringement of a given set of works. The defense further contended that since only those 104 works had been proven to be directly infringed, they shouldn't be held liable for inducing infringement of any other works. The court denied the request, rejecting the notion that the other actions had any bearing on the inducement case, except to the extent that already-recovered damage amounts might be taken into account when calculating the inducement damages, and further holding that Bouchat doesn't apply; 17 USC 504 doesn't preclude finding inducement of infringement separately from finding direct infringement of the same works. Damages phase The damages phase of the trial began on 2 May 2011. 
LimeWire founder and CEO Mark Gorton admitted on the stand that he was aware of widespread copyright infringement by LimeWire users and that he chose not to make use of available filtering technology; and he said that until this trial, he believed that his company couldn't be held liable for inducing copyright infringement. Citing a 2001 statement he made to investors about the risk of being sued, and a 2005 notice sent by the RIAA making him aware that the decision in MGM Studios, Inc. v. Grokster, Ltd. meant that LimeWire was liable, the plaintiffs contended that he didn't misread the law, but rather knew all along that he was violating it. The RIAA further asserted that Gorton made efforts to hide LimeWire's profits in personal investments, in anticipation of a lawsuit. Settlement On 12 May 2011, a settlement was reached, and the case dismissed. LimeWire CEO Mark Gorton paid $105 million to the four largest record labels, which at that time were Universal Music Group, Sony BMG, Warner Music Group, and EMI. Press Reactions to the injunction One law and technology blog called the injunction a "smackdown" for LimeWire, a second, "Judge slaps Lime Wire with permanent injunction", while The New York Times brought quotes from both parties disagreeing on the impact of the decision. The RIAA issued a press release urging previous LimeWire users to begin using one of the available legitimate options. The New York Times also ran a story about Gorton. Finally, there was interest in what previous LimeWire users would do and where they would go—and if BitTorrent would be their next move. Reporting of estimated damages Although the plaintiffs never explicitly asked for a specific amount of damages—rather, the exact calculation was left as an exercise for the court—the press circulated several multitrillion-dollar estimates. Some press reports initially stated, accurately, that the largest estimate, $75 trillion, was no longer being sought, but later reports were written as if the RIAA was still seeking that amount. Many of these reports surfaced in mid-2012, over a year since the case had been settled. 2010 In mid-May 2010—which was after the summary judgment had been made finding liability, but well before damages had been capped and before the RIAA sought per-infringement damages—blogger Miles Harrison, on his slashparty blog (now defunct), reviewed court documents and estimated the potential damages being sought at $15 trillion. Harrison based his figure on a reported 200 million downloads of the LimeWire client software from Download.com alone, as cited in a filing by the plaintiff's lawyer. Harrison cut the 200 million in half to account for re-downloads and multiple installations by the same user, then assumed every user infringed an average of 1 work at issue in the case. He then multiplied that 100 million infringements by the upper limit for statutory damages, $150,000, to arrive at $15 trillion. This story was modestly promoted on Reddit, but wasn't widely reported. A few weeks later, on 8 June 2010, blogger Jon Newton, in an article on his technology news blog p2pnet, arrived at his own estimate of $1.5 trillion, based on the same report of 200 million client downloads. Apparently conflating LimeWire software downloads with infringements of sound recordings, Newton computed his figure by multiplying 200 million by $750, the minimum statutory damage amount the filing said was being sought for each work infringed. 
The following day, Newton published a revised estimate of $15 trillion, quoting Harrison's calculations. Newton's first article was promoted on Reddit, but apparently wasn't picked up by the mainstream press. 2011 In March 2011, shortly after the court issued the ruling capping potential damages at $1.5 billion, an even higher estimate was reported: Corporate Counsel magazine and its affiliated Law.com website reported on 15 March that when seeking damages on a per-direct-infringement, rather than per-infringed-work basis, the record companies had "demanded damages ranging from $400 billion to $75 trillion"—figures taken from the defendants' pleadings. The report went on to say that Judge Wood had called the plaintiffs' request "absurd". The Law.com article was popularized via Reddit on 22 March, and the $75 trillion figure was repeated in a PC Magazine article the next day. Two days later, citing the $75 trillion figure as if it were still being actively sought, Anonymous launched a DDoS attack on the RIAA website under the Operation Payback banner. The following week, in several Australian APN News & Media outlets, an op-ed piece repeated the $75 trillion figure, erroneously calculated from $150,000 × 11,000 infringed works (which actually comes to $1.65 billion). 2012 In May 2012, a year after the case was dismissed, an op-ed piece on a New Zealand news site reported on the case as if it were ongoing. Although it mentioned the March 2011 damages ruling, the piece repeated the $75 trillion estimate and added that the amount was in excess of the 2011 global GDP. The revived story was then reported as news by the online edition of NME, with the amount dropped slightly to $72 trillion, and with the erroneous statement that the RIAA was still actively seeking that much in damages. The NME story linked back to an accurate, 2011 report in ComputerWorld, but made no mention of the court's ruling that limited the damages to a maximum of $1.5 billion. This embellished version of the story "went viral" and was picked up as a current news report, without fact-checking, by numerous organizations, including Business Insider, PC World, a Forbes blog, a CBC business program, a Los Angeles CBS affiliate, and many entertainment and technology news blogs. Some reported the figure as $75 trillion, others as $72 trillion. After it was pointed out that the case had been settled a year earlier and that the RIAA was not still seeking damages, some outlets pulled the story and some issued retractions, but many websites left the unedited story online. The Forbes retraction included a statement from the RIAA pointing out that no specific amount had ever been asked for, but the author countered that this is splitting hairs; a multitrillion-dollar amount was still implied. NMPA lawsuit The case resulted in a separate lawsuit from National Music Publishers Association (NMPA) in order for them to be included in any future settlement negotiations and damages. This case ended with a settlement before the damages phase of the trial at hand. See also Legal aspects of file sharing Arista Records, LLC v. Launch Media, Inc Footnotes References External links Court case on docket Justia.com LimeWire information official website Rule 56. Summary Judgment Cornell University Law School. Retrieved 10 March 2011 United States District Court for the Southern District of New York cases United States copyright case law 2010 in United States case law United States file sharing case law Arista Records
57518694
https://en.wikipedia.org/wiki/Contributor%20Covenant
Contributor Covenant
The Contributor Covenant is a code of conduct for contributors to free/open source software projects, created by Coraline Ada Ehmke. Its stated purpose is to reduce harassment of minority, LGBT and otherwise underrepresented open source software developers. The Contributor Covenant is used in prominent projects including Linux, Ruby on Rails, Swift, Go, and JRuby. Relevant signers include Google, Apple, Microsoft, Intel, Eclipse and GitLab. Since its initial release as an open source document in 2014, its creator has claimed it has been adopted by over 100,000 open source projects. In 2016, GitHub added a feature to streamline the addition of the Contributor Covenant to an open source project, and the Ruby library manager Bundler also has an option to add the Contributor Covenant to software programs that its users create. In 2016, Ehmke received a Ruby Hero award in recognition of her work on the Contributor Covenant. Following the adoption of the Contributor Covenant v1.4 by Linux in 2018, the Linux community reacted, with some applauding the change and some speaking against it. In 2021, the Contributor Covenant was folded into the Organization for Ethical Source, which promotes the idea that "software freedom must always be in service of human freedom". See also Ada Initiative Inclusive language Outreachy Women in computing References External links Contributor Covenant main page Open-source movement Codes of conduct
31861002
https://en.wikipedia.org/wiki/Open%20Virtualization%20Alliance
Open Virtualization Alliance
The Open Virtualization Alliance (OVA) was a Linux Foundation Collaborative Project committed to fostering the adoption of free and open-source software virtualization solutions, including KVM, as well as software to manage them, e.g. oVirt. The consortium promoted examples of customer successes, encouraged interoperability and accelerated the expansion of the ecosystem of third party solutions around KVM. The OVA provided education, best practices and technical advice to help businesses understand and evaluate their virtualization options. The consortium complemented the existing open source communities managing the development of the KVM hypervisor and associated management capabilities, which were rapidly driving technology innovation for customers virtualizing both Linux and Windows applications. The OVA was not a formal standards body and did not influence upstream development, but encouraged interoperability and the development of common interfaces and application programming interfaces (APIs) to ease the adoption of KVM for users. On 21 October 2013, it was announced that the Open Virtualization Alliance would become a Linux Foundation Collaborative Project. Having achieved its original purpose, the OVA was officially dissolved on 1 December 2016. Software promoted by OVA Kernel-based Virtual Machine (KVM) – software that turns the Linux kernel into a hypervisor libvirt – API and its implementation to manage KVM and other virtualization solutions oVirt – web application for managing KVM libguestfs – API and its implementation for modifying virtual disk images Membership Initially formed by Red Hat and IBM, the Alliance had over 200 members involved with enterprise virtualization. Participation was open and the OVA encouraged new participants to become members. Membership was tiered, with governing memberships requiring higher dues than general memberships. One of the criteria for joining the Alliance was to produce or use a product or service based on KVM. References Linux Foundation projects Virtualization software
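To illustrate the kind of management interface the alliance promoted, the libvirt API listed among the promoted software can be driven from short scripts. The sketch below uses the libvirt-python bindings and assumes a local QEMU/KVM hypervisor reachable at the qemu:///system URI; it simply lists defined guests and whether they are running.

# Minimal sketch: enumerate KVM guests through libvirt's Python bindings.
# Assumes the libvirt-python package is installed and that a local QEMU/KVM
# hypervisor is reachable at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():      # running and defined-but-stopped guests
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():24s} {state}")
finally:
    conn.close()

The same information is available interactively through virsh list --all; the point is only that KVM management in this ecosystem is exposed through a stable, scriptable API rather than through a single vendor tool.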
2805320
https://en.wikipedia.org/wiki/Lsof
Lsof
lsof is a command meaning "list open files", which is used in many Unix-like systems to report a list of all open files and the processes that opened them. This open source utility was developed and supported by Victor A. Abell, the retired Associate Director of the Purdue University Computing Center. It runs on and supports several Unix flavors. A replacement for Linux is being written, to be included in a future util-linux release. Examples Open files in the system include disk files, named pipes, network sockets and devices opened by all processes. One use for this command is when a disk cannot be unmounted because (unspecified) files are in use. The listing of open files can be consulted (suitably filtered if necessary) to identify the process that is using the files.

# lsof /var
COMMAND  PID USER  FD TYPE DEVICE SIZE/OFF   NODE NAME
syslogd  350 root  5w VREG  222,5        0 440818 /var/adm/messages
syslogd  350 root  6w VREG  222,5   339098   6248 /var/log/syslog
cron     353 root cwd VDIR  222,5      512 254550 /var -- atjobs

To view the port associated with a daemon:

# lsof -i -n -P | grep sendmail
sendmail 31649 root 4u IPv4 521738 TCP *:25 (LISTEN)

From the above one can see that "sendmail" is listening on its standard port of "25". The options used are:
-i Lists IP sockets.
-n Do not resolve hostnames (no DNS).
-P Do not resolve port names (list port number instead of its name).
One can also list Unix sockets by using lsof -U. Lsof output The lsof output describes: the identification number of the process (PID) that has opened the file; the process group identification number (PGID) of the process (optional); the process identification number of the parent process (PPID) (optional); the command the process is executing; the owner of the process; for all files in use by the process, including the executing text file and the shared libraries it is using: the file descriptor number of the file, if applicable; the file's access mode; the file's lock status; the file's device numbers; the file's inode number; the file's size or offset; the name of the file system containing the file; any available components of the file's path name; the names of the file's stream components; the file's local and remote network addresses; the TLI network (typically UDP) state of the file; the TCP state, read queue length, and write queue length of the file; the file's TCP window read and write lengths (Solaris only); and other file or dialect-specific values. For a complete list of options, see the lsof(8) Linux manual page See also fuser (Unix) stat (Unix) netstat strace List of Unix commands References External links Old site lsof-l mailing list mirror of legacy sources Using lsof Lsof FAQ Sam Nelson's PCP script, an alternative to "lsof -i" for Solaris. Glsof is two separate utilities (Queries and Filemonitor) based on lsof. Sloth is a macOS graphical interface for lsof Manpage of LSOF Unix file system-related software
25213924
https://en.wikipedia.org/wiki/Social%20learning%20tools
Social learning tools
Social learning tools are tools used for pedagogical and andragogical purposes that utilize social software and/or social media in order to facilitate learning through interactions between individuals and systems. The idea of setting up "social learning tools" is to make education more convenient and widespread. It also allows an interaction between users and/or the software which can bring a different aspect to learning. People can acquire knowledge by distance learning tools, for instance, Facebook, Twitter, Khan Academy and so on. Social learning tools may mediate in formal or informal learning environments to help create connections between learners, instructors and information. These connections form dynamic knowledge networks. Social learning tools are used in schools for teaching/learning and in businesses for training. Within a school environment, the use of social learning tools can affect not only the user (student) but his/her caretaker as well as his/her instructor. It brings a different approach to the traditional way of learning which affects the student and his/her support circle. Companies also use social learning tools. They used them to improve knowledge transfer within departments and across teams. Businesses use a variety of these tools to create a social learning environment. They are also used in company settings to help improve team work, problem solving, and performance in stressful situations. Social learning tools are used for people who are willing to share their good ideas/thoughts with someone else. The ideas can be related to either the academic studies or any other daily skills that we want to share with others. Social learning tools connect learning to our daily lives. It creates a learning environment more truthful to today's society. There are a couple of common elements that should be present in a social learning tool. Technology should be involved to allow physical and cognitive learning. There should be interactions between the people who use the tool and interactions with the software. Another element is trust. Users should trust the software and what other people have created. Frequently Used Social Learning Tools Blogger Blogger is a site in which users can create their own blog. Blogger can be used in an educational context. A teacher could create a blog in which students could interact on. This would allow students to improve their writing and planning skills. Students would have to think about how they would write out their information on the blog in order to convey their message properly. Blogger provides opportunities for students to share what they wrote with their peers and with their caretakers. The blogs could also be a form of electronic portfolios that could show what the students created during the school year. Blogger has settings in which only people with permission can see or write on the blogs themselves. Facebook Facebook is an online communication tool that allows individuals to interact with each other within a virtual community. It has become the most popular social networking site since its beginnings in 2004. Facebook was co-founded in 2004 by Mark Zuckerberg with his roommates and colleagues from Harvard University. Zuckerberg is presently the CEO of Facebook. This online communication tool can be used for personal and for professional purposes. Technology plays an integral part of students' daily lives and teachers will need to find ways of implementing these technologies into their classrooms. 
Research is currently being conducted on the benefits of implementing Facebook in educational settings and how it could potentially be used as a tool for teaching and learning, even if it is known for social networking in the first place. For example, Facebook can be used to create discussion groups for group projects to divide the work and stay up to date with each other. It can also be used to share articles. Educators can also create professional Facebook accounts for their classrooms, which could be potentially used as a dashboard to upload course material and assignments. It can also be used to create polls to get some feedback and suggestions on the course that the teacher would like to implement. The objective is to improve the methods of teaching as well as the learning experience of students. With the use of technologies, students are more engaged in their learning. Google Hangouts Google Hangouts is a communication software platform that was created by Google. Some features of this social learning tool include starting a chat conversation or video call, phone calls through WI-FI or data and sending messages. All you need is a Google Account. Google Hangouts has become an increasingly popular tool to participate and communicate with people around the world. It can be seen as a better version of Skype in the sense that it has the potential to record or have a group chat without the occasional availability issues that may be seen in the free version of Skype. This makes it easily accessible and efficient in many ways. There are also many other useful tools from Google like Google Drive that allow users to take part in editing and sharing different content, assignments, and sources even when not together. It only takes an Internet connection and everything is saved in your own personal drive. Instagram Instagram, founded in 2010, is a social networking app that was created for sharing photos and videos. Used very frequently in today's society, Instagram can be seen as a great social learning tool through reaching the majority of the populations that have access to the Internet. It has become one of the most popular apps in the social networking world and is now used as a way to campaign companies or organizations. It may be used for educational purposes by posting photos or 1-minute videos of information or subject content. It may also be used for photo essays or creating an organization or lesson. Khan Academy Khan Academy, founded in 2006 by Salman “Sal” Khan, is a non-profit organization providing free online educational resources used by millions of people worldwide. It is best known for its strong mathematics content provided through video tutorials. The mathematics content is designed to help students from kindergarten to grade 12. In 2009, Khan Academy began expanding its educational learning platform to include other subjects, including art history and computer science. The mathematics section is also much more accessible since its recent translation into French. Khan Academy has also diverged to offering other means of learning beyond the online videos, including self-paced training exercises, quizzes, and dashboards for teachers to keep track of student progress. There is also the coach resource section which provides guidelines to parents or teachers for example, who want to learn how to use Khan Academy. Khan Academy is used in majority by individuals outside of school for study purposes. However, in recent years, its use in educational settings has been increasing. 
Some teachers have implemented the use of Khan Academy in their classrooms, believing it would simply be an adjunct to their instruction. However, it became more important than this. Some teachers are now thinking of "flipping" the normal functioning of classroom settings and including video tutorials from Khan Academy in their teaching. The idea is to have students watch the videos or read the lectures provided by the teacher at home and to do the problem sets during class time. It has been shown that students have the most difficulty when they are doing their homework. Pinterest Pinterest is a web-based software platform that allows its users to share and discover new information through the use of pictures, descriptions, and short videos. It can also be seen as an online pin board where users can post on their board, collect from other boards, and privately share posted pins. This can be useful in the classroom through having students post their works on their own board or a class board and share (re-pin) information with their peers. It is easily accessible and is popular for its application that can be used on any mobile or smart device. Skype Skype is a free telecommunications tool that gives its users the opportunity to voice chat or video call between various devices such as laptops and cellphones. Although this application requires the Internet, it has grown to be one of the world's most popular communication tools. Skype is an excellent resource to use when collaborating with participants who are in remote areas because discussions and communication can still occur. Skype can be applied to classrooms in many ways. For example, it can be used as a way to include students who are at home due to illness in collaborative discussions held in class. Twitter Twitter is an established micro-blogging service available to individuals with Internet access world-wide and provides unlimited access to social networks for its users, to send and receive messages. It was founded in 2006 by Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams. Media outlets are among the site's most common users, broadcasting current news updates. For this reason, people use Twitter as a daily source of information to inform themselves on local and world news. Another way that Twitter is utilized is in the classroom, when a teacher builds a classroom account which can be viewed by their students. This account would allow teachers to post reminders for test revisions, assignment due dates, upcoming field trips and conferences, and to review past lessons and answer homework questions. For future lesson plans, teachers could have students tweet what they learned from the previous class and reflect on the content and then engage in classroom discussions on Twitter. YouTube YouTube is a video-sharing website founded in 2005 that was created for users to share, like, upload, and comment on videos. Nowadays, YouTube can be used on many devices through applications like its YouTube app. Although it is mainly used for music, product reviews, and how-to videos, it may also be used for educational purposes. There are many accounts just for specific educational subjects like Crash Course, an account that focuses on subjects like anatomy, biology, chemistry and physics.
YouTube can be a great social learning tool in education in many different ways: showing videos on different subjects in class or having students research how something works to answer questions, for homework or even to seek further knowledge. There are many platforms of YouTube that lead to other useful social learning tools for educational purposes. TED Talks created its account on YouTube, which provides videos of the best ideas from trusted voices discussing different issues, sharing knowledge or facts and opinions, sharing poetry and spoken word. For education, TED has created its own platform called TED-Ed, which is essentially short TED Talks videos that may be used for information or insight purposes on many different school subjects. There is even such thing as TeacherTube, which is another type of social learning tool similar to YouTube though initially created for education teachers. It may be used for teachers, students or parents through the use of published educational videos with a library of different content filled with videos, audio and pictures. Types of Environments The different environments in which social learning tools can be used may influence the connections between learners, educators and the knowledge or information being acquired. These environments can be formal, non-formal, informal or virtual. Formal learning, is the planned process of acquiring knowledge that happens within a structured educational or institute setting. Lecturers in the university setting may use Moodle to post class lectures and the latest information to students, e.g. courseware used during lectures, coursework, quizzes, and forums. Students can both prepare and review for lectures based on what professors have posted at any given time. Non-formal learning, is the planned process learning that is voluntary and not in a structured environment. Most of planned learning that is not directly associated with school or the education system is done in a non-formal learning environment. For example, taking swimming lessons at the town pool. The swimming lessons take place in an unstructured environment though are planned, organized, and taught by an instructor of prior knowledge on the subject. Social learning tools can be used in this particular environment to facilitate learning like showing a video on YouTube on how to swim. Informal learning, is when information is gained inadvertently or unstructured, which usually happens on a day-to-day basis. Informal learning happens outside of the formal classroom during conversations among people, through exploration and personal experiences. Many circumstances can be considered to be part of an informal learning environment, e.g. the casual watching of videos on Instagram, Facebook or YouTube feed. Virtual learning, is when learning is done via the use of web-based platforms in and out of school. This learning environment is almost always used for any social learning tool. The virtual learning environment (VLE) encourages students to explore, discover and exchange information quickly, creatively and independently. There are many social learning tools created for the sole purpose of education. The education system has adopted technology as a way to educate students in school and outside of school settings, where students have shown to work well in the VLE. Social learning tools including Moodle, Khan Academy, TED-Ed and TeacherTube can all be used both in school and at home through the use of the Internet. 
This goes to show that social learning tools can be both used within all learning environments while being used for educational purposes. However, it is important for these social learning tools to first be introduced in the formal learning environment, e.g. in the classroom, in order to be established as first an educational tool and second a social media platform that may be accessible outside of school. The majority of informal learning environments are directly related to the use of social media, e.g. Facebook, Instagram, Twitter, Television Channels and so on. For instance, there are many Food TV or YouTube Channels that allow people to self-teach themselves how to cook. This is done through the informal and VLE where a person can simply learn from the comfort of their own home and according to their own pace. Not only does social software or media allow society to gain new knowledge or information; it is easily accessible and can even happen without the desire or intent of learning. Effectiveness: Education and Business The student/employee learning experience is expanded and advanced through the usage of technology. Convenience Social learning tools are convenient because through the various platforms that are now available, communication between students and teachers is fluid and the exchange of knowledge and ideas can be achieved remotely. This means that both parties are no longer required to travel in order to attend school and to study content based subjects. Students can communicate with their teachers over the Internet and attend and actively participate in lectures. Social learning tools provide both students and teachers with a media platform that enables distance education. Cost-Efficient Social learning tools give access, as a hub, to sources of information and resources online, either for free or at low costs through the use of the Internet. Most of these tools can easily be downloaded and used on a smartphone, tablet or computer. For example, there are online applications (apps) like EasyBib, which generates bibliographic citations in the necessary styles as needed for research, essays and other assignments and does this at no cost. School-Wide Application Social learning tools are applicable to elementary, high school, CEGEP, and university level learning. The social media platforms such as Facebook, Skype or Twitter could enhance the communication and interaction between students and their teachers. For example, teachers could post assignments, schedules, lesson summaries or notices regarding class updates for the students to view. For professional development, it provides teachers with the networking to connect with other professionals, share lesson plans, and receive up to date research regarding how education is taught. Additionally, students could post their questions online for the teacher to answer in the group section, have class discussions and exchange ideas, and incorporate other forms of communication. As well, enable students to find real world applications and associate it with their topic. The platforms mentioned above, allow students to look beyond the classroom for answers to questions and to obtain outside input. Learning Tools Social networking permits students to research and gather information on topics beyond the classroom setting. For example, Twitter is one of the social learning tools implemented into classrooms for language learning. 
Tweets are a method of improving short essays and supports grammatically correct writing and reading skills of a language. It is a form of active learning which further engages the students. After posting a tweet, students would get feedback from other site users and have a second opinion on their writing piece. Another advantage would be to use Twitter in large lecture halls, so students could engage in the topic of discussion and share their interests and thoughts with their colleagues and professors in an ongoing basis. Commercial Advantages There are several advantages to the field of commerce, when it comes to implementing social learning tools in the workplace. If businesses create a forum for customers to write their feedbacks, they could have more ideas about how to improve their products, which enables the company to create better marketing strategies. Furthermore, employees could use the Internet or other program systems, in order to work from home. They could be informed about the current issues and concerns of the workplace. It allows people to concentrate without any distractions at the office, it offers the flexibility of scheduling, time engagement and committing. Employment Opportunities Social media platforms has made searching for job employment and connecting with potential employers more available for students entering the workforce for both part-time and full-time work. For example, Facebook and Twitter can be utilized to follow the companies’ postings and what is going on in the field. LinkedIn is a website designed for students or other individuals to create professional profiles, post their curriculum vitae, receive current notifications on available employment and network with companies, businesses, school boards and industries to apply for a wider variety of job offers. Drawbacks in a Classroom Setting Inappropriate or Offensive Use A major concern among educators regarding social learning tools is the lack of control and monitoring. When students are given access to the Internet, they may post inappropriate content or use these sites in a way that can be disrespectful or damaging to others. Educators do not always have the means to be able to monitor every student's usage of these social learning tools, therefore students may be able to take advantage of this privilege for their own personal benefit and amusement. Posting pornography or using foul language are just a few examples that highlight the offensive and inappropriate behaviour that can be exhibited by students through social learning tools in classrooms. Distraction From Learning Social learning tools give students access to the Internet, where they can take advantage of other irrelevant websites and interfaces. It is very easy for students to get distracted by other social platforms that can divert their attention away from information being presented in class. This ultimately results in a disruption to the student's learning process which is a common concern for the majority of faculty members. Some research has shown that disruption to the learning process can significantly affect a student's ability to retain information into their long-term memory. Educators should be made aware of the potential risk for technology to be a distraction and set clear guidelines for their students to follow. Cyberbullying As the use of technology increases, cyberbullying is becoming more prevalent among students. 
Although social learning tools give students the ability to interact with teachers and peers, it also provides them with the opportunity to bully others online. This type of malicious behaviour can be seen at every level of education, including college campuses. Educators should plan to intervene on any incidents of cyberbullying that they witness in order to prevent more serious offences. They should also encourage respectful and positive behaviours towards peers when in the classroom and online. Lack of Face-to-Face Communication There is much speculation about whether or not social learning tools can provide enough opportunity for the development of real-life social skills. Some educators are concerned that students will not fully develop the ability to communicate effectively or command attention if they are constantly learning through a screen. Although social learning tools create opportunities for students to interact with others via web interfaces, it does not offer them authentic real-life interactions where students are forced to express themselves verbally or connect with others face-to-face. Protection of Students’ Privacy Social learning tools such as Facebook and Twitter are online networking websites that are public and can be accessed by anyone. Parents and educators alike have expressed their concerns about students using these platforms, as they could expose students’ identities. Protecting students’ personal privacy should be a top priority for educators, however this can be a challenge when using social learning tools. To prevent these issues from arising, schools should develop a system which ensures that students stay anonymous when posting content online. Stress for Teachers Social learning tools give students the ability to communicate efficiently with teachers outside of regular classroom hours. Although this may be a benefit to student learning, excessive stress is placed on teachers to respond quickly and provide feedback. Many faculty members have expressed that technology has harmed their work environment, as digital communication increases their stress levels. Furthermore, they have reported that digital communication has increased the number of hours they work in a day. Educators can also experience added pressure to learn how to teach with social learning tools. Many teachers are skilled with using technology, however they do not possess the knowledge on how to implement social learning tools in teaching. Educators need to be taught how and when to use technology, so they can effectively implement it within their curriculum. Plagiarism and Student Integrity Academic integrity is an important part of learning in all educational settings. Plagiarism is a concern for teachers that can be amplified when students are using social learning tools. The Internet is a database that holds a multitude of resources available to online users. Unfortunately, direct access to all of this information when using social learning tools makes it easy for students to copy the work of others and use it as their own. Cost of Equipment Social learning tools would not exist without the technological devices that harbor them. In order to implement social learning tools into the classroom, the school or school board must invest a large sum of money in order to purchase technological devices for its students. Depending on the budget available, there may not be enough money to buy devices for every student to use. 
Furthermore, personal cell phones are popular devices that can be used as a vehicle for teaching with social learning tools. However, not every student can afford a cell phone. See also Social Learning (Social Pedagogy) Social Learning Theory Learning Tools Social Networking Site (SNS) Social Networking Pedagogy References Educational technology
16165661
https://en.wikipedia.org/wiki/Tape%20management%20system
Tape management system
A tape management system (TMS) is computer software that manages the usage and retention of computer backup tapes. This may be done as a stand-alone function or as part of a broader backup software package. The role of a tape management system A modern tape management system (TMS) is usually used in conjunction with backup applications and is generally used to manage magnetic tape media that contain backup information and other electronically stored information. Tape management systems are used by organizations to locate, track, and rotate media according to an organization's internal policies as well as government regulations. Categories of tape management systems Stand-alone tape management systems Stand-alone tape management systems are predominant on mainframe platforms, where tape is used as both a backup and base-load storage medium. Mainframe systems such as IBM's z/OS provide some basic support for tape inventory control via the OS Catalog, but because cataloging files is optional, an additional software package is usually required to do the following: Ensure that live tape volumes are not over-written. Keep a list of tape volumes that are eligible to be over-written (known as scratch tapes). Maintain an online catalog of the location of files written to tape and a list of what files reside on each tape volume (an illustrative sketch of such a catalog record is shown below). These operations are usually achieved by using operating system "hooks" to intercept file open and close operations. Backup applications Robotic control systems Off-line tape management systems Commercially available tape management systems Mainframe Stand-alone tape management systems BrightStor CA-1 (previously known as UCC-1 when Computer Associates acquired Uccel in '87) CA Dynam/TLMS (previously known as TLMS II when CA acquired Capex Corporation in '82) IBM RMM BMC Control-M/Tape (previously known as Control-T when BMC Software acquired New Dimension Software in '99) ASG Zara Lascon Storage's GFS/AFM Robotic control/management systems StorageTek HSC StorageTek ExLM Distributed systems AES Webscan Tape Management System TapeTrack Tape Management Framework Vertices Tape Management System VaultLedger Tape Management System See also List of backup software Backup software Storage software
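As an illustration of the catalog and scratch-pool bookkeeping described above, the following C sketch shows a hypothetical volume record and a scratch-eligibility check. The field names and the simple retention rule are assumptions made for this example only; they do not reflect the data formats or policies of any product listed in the article.

/* Hypothetical sketch of a tape-volume catalog record and a scratch-eligibility
 * check, illustrating the bookkeeping a tape management system performs.
 * All names and the retention rule are illustrative assumptions. */
#include <stdio.h>
#include <time.h>

struct volume_record {
    char   volser[7];      /* volume serial number, e.g. "A00123" */
    time_t last_written;   /* when the volume was last written */
    time_t expires;        /* retention expiry; 0 means "keep forever" */
    int    file_count;     /* number of data sets recorded on the volume */
};

/* A volume is eligible for the scratch pool (i.e. may be over-written)
 * only when its retention period has passed. */
int is_scratch(const struct volume_record *v, time_t now)
{
    return v->expires != 0 && v->expires <= now;
}

int main(void)
{
    struct volume_record v = { "A00123", 0, 0, 12 };
    v.last_written = time(NULL) - 90 * 24 * 3600;      /* written 90 days ago */
    v.expires      = v.last_written + 30 * 24 * 3600;  /* 30-day retention */

    printf("%s is %s\n", v.volser,
           is_scratch(&v, time(NULL)) ? "scratch (re-usable)" : "live (protected)");
    return 0;
}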
858121
https://en.wikipedia.org/wiki/Dld%20%28software%29
Dld (software)
Dld was a library package for the C programming language that performed dynamic link editing. Programs that used dld could add or remove compiled object code from a process at any time during its execution. Loading modules, searching libraries, resolving external references, and allocating storage for global and static data structures were all performed at run time. Dld supported various Unix platforms, having originally been developed for the VAX, Sun-3 and SPARCstation architectures. Its authors contrasted its functionality with the dynamic linking that was, at the time of its construction, available in operating systems such as SunOS 4, System V.4, HP-UX and VMS: all of these operating systems had shared libraries, but did not allow programs to load additional libraries after startup. Dld offered this functionality without requiring changes to the OS or toolchain. Dld was a GNU package, but has been withdrawn because its functionality is available (through the dlopen API) in modern Unix-like operating systems; an illustrative dlopen example is shown below. References External links DLD C (programming language) libraries
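For comparison with dld's run-time loading model, the following is a minimal C sketch of the dlopen API mentioned above. The choice of libm.so.6 and the cos symbol is purely illustrative; any shared object and exported symbol would do.

/* Minimal sketch of run-time loading with the dlopen API that superseded dld.
 * "libm.so.6" and the symbol "cos" are used purely as a convenient example. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load a shared object at run time */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve a symbol from the newly loaded object, much as dld resolved
     * external references at run time. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);   /* unload, mirroring dld's ability to remove code */
    return 0;
}

On older GNU systems the program must be linked with -ldl; recent glibc versions include these functions in the main C library.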
3645346
https://en.wikipedia.org/wiki/List%20of%20companies%20with%20Denver%20area%20operations
List of companies with Denver area operations
This is a list of notable companies based or having major operations in the Denver-Aurora Metropolitan Area. Headquarters in Denver area American Medical Response - emergency services, corporate headquarters in Greenwood Village, Colorado Antero Resources - natural gas exploration Arrow Electronics - corporate headquarters Ball Aerospace & Technologies Corp. - aerospace, corporate headquarters in Broomfield, Colorado Bremner Biscuit Company CH2M - environmental engineering, corporate headquarters in Englewood, Colorado Chipotle Mexican Grill - dining Coors Brewing Company - brewing and dining Crispin Porter + Bogusky - advertising DaVita Inc. - kidney dialysis and other healthcare services DigitalGlobe - digital satellite imagery DISH Network - pay-TV distributor Ebags.com - custom apparel EchoStar - satellite communication solutions Einstein Bros. Bagels - dining Frontier Airlines - commercial airline Gaiam Gates Rubber Company (Gates Corporation) Gray Line Worldwide - transportation Ibotta - e-commerce The Integer Group - promotional, retail, and shopper marketing Janus Capital Group - financial services JD Edwards, now part of Oracle Corporation - financial services Jones Intercable - television and content distribution King Soopers - a division of Kroger - retail consumer goods LaMar's Donuts - dining Leprino Foods - food manufacturing Level 3 Communications - telecommunications Liberty Media - television and content origination and distribution ManiaTV.com MediaNews Group - news and media distribution Mrs. Fields - Snack food franchise with corporate headquarters in Broomfield, Colorado Name.com - Domain name registration and hosting National CineMedia - digital content service provider Never Summer - snowboard and skateboard manufacturer Newmont Mining - mining and oil exploration Noodles & Company - dining oVertone Haircare - hair care Palantir Technologies - software development PostNet - Internet postal service provider Qdoba Mexican Grill - dining Quark, Inc. - software development Quizno's - dining Red Lion Hotels Corporation Red Robin - dining RE/MAX - real estate Samsonite - specialty luggage manufacturer Smashburger - dining System76 - computer sales and manufacturing TCBY - Frozen yogurt franchise with headquarters in Broomfield, Colorado TeleTech - outsourced callcenters TransMontaigne - energy and oil refinement and distribution Vail Resorts - travel and skiing VF Corporation - Apparel Western Union - Financial services Woody's Chicago Style - dining Xanterra Parks & Resorts - tourism and resorts Branch operations in Denver area Brown Brothers Harriman & Co. - investment operations CenturyLink - telecommunications CH2M Hill - engineering services Charles Schwab - financial services Charter Communications - television and content distribution Cisco systems - networking and security Comcast - television and content distribution Conoco - fuel refining DaVita Inc. - renal care, corporate headquarters DirecTV - television and content origination/distribution First Data Corp. - financial services GoDaddy.com - domain name registration Gymshark - Clothing Manufacturer Halliburton - fuel and energy refinement and distribution Hospital Corporation of America, dba HealthONE Colorado - healthcare Intuit, Inc. - website products K N Energy Inc., part of Kinder Morgan Inc. 
- engineering services Kiewit Western Co., a Kiewit Corporation company - construction Kroenke Sports & Entertainment - sports and entertainment Lockheed-Martin - space and aerospace technologies Medtronic - surgical device manufacturing Ovintiv - hydrocarbon exploration Owens & Minor - medical device distribution PCL Construction - commercial construction Raytheon - defense and aerospace Regal Entertainment Group (regional headquarters) - entertainment RE/MAX International - real estate Rocket Software - U2 software division Safeway Inc. - consumer goods (district headquarters) The Shaw Group - construction and consulting StorageTek - now part of Oracle Corporation; Internet and software development Sun Microsystems - now part of Oracle Corporation; Internet and software development Suncor Energy - energy and oil refinement and distribution Towers Watson - HR consulting Triangle Pest Control - Denver pest control branch United Airlines - commercial airlines Vertafore - Insurance Software VF Corporation - Clothing Manufacturer Visa Inc. - Payment processor Washington Group International - part of URS Corporation; engineering, construction and management services Western Union - financial services Xcel Energy - electrical energy Branch restaurants Blackjack Pizza - dining Boston Market - dining Einstein Bros. Bagels - dining Red Robin - dining Rock Bottom Restaurants - dining Village Inn - dining Mellow Mushroom - dining References Denver metropolitan area
51769994
https://en.wikipedia.org/wiki/CAPE-OPEN%20Interface%20Standard
CAPE-OPEN Interface Standard
The CAPE-OPEN Interface Standard consists of a series of specifications to expand the range of application of process simulation technologies. The CAPE-OPEN specifications define a set of software interfaces that allow plug and play inter-operability between a given Process Modelling Environment and a third-party Process Modelling Component. Origins The CAPE-OPEN project, funded by the European Union, was established in 1997. The project involved participants from a number of companies from the process industries (Bayer, BASF, BP, DuPont, French Institute of Petroleum (IFP), Elf Aquitaine, and Imperial Chemical Industries (ICI)) together with 15 partners including software vendors (Aspen Technology, Hyprotech Ltd, QuantiSci and SimSci) and academics (Imperial College London, National Polytechnic Institute of Toulouse (INPT), and RWTH Aachen University). The objective of the project was to demonstrate the feasibility of a set of specification interfaces to allow plug and play interoperability between modelling environments and third party modelling components. Following the completion of the CAPE-OPEN project in 2001, and the successful proof-of-concept of plug and play interoperability, a second project, Global CAPE-OPEN, was formed to turn the interface specifications into products that could be widely used by industry. This project had a number of key elements including: An interoperability task force to check on the implementation of CAPE-OPEN in commercial simulation tools The subsidy of small simulation vendors to implement CAPE-OPEN interfaces The formation of the not-for-profit organisation, The CAPE-OPEN Laboratories Network (CO-LaN), to assure the maintenance and further development of the CAPE-OPEN interfaces. Purpose Operating companies in the process industries typically make a significant financial investment in commercial simulation technologies. However, all simulation tools have strengths and weaknesses. Typically, these reflect a focus on the particular process industry for which the simulation package was originally developed. For example, simulation packages developed for the oil industry may have a weakness for the modelling of certain speciality chemical systems; modelling environments focussed on gas and oil systems may not have the capabilities to handle multiple liquid phases and/or solids formation. Although, over time, simulation vendors improve and enhance the capabilities of their modelling technologies, capability gaps generally remain. An operating company can address these capability gaps by replacing the relevant components in their tool of choice with improved components from elsewhere. Often these improved components originate from within the operating company itself and contain significant intellectual property relating to a specific process which is not readily available to the commercial modelling vendors. Alternatively, the improved components may come from a company specialising in niche areas of modelling, for example the rigorous modelling of heat exchangers or thermodynamics and physical properties. Historically the integration of third-party components into a commercial simulation environment involved the writing of proprietary software interfaces that "wrapped" around the new components and allowed them to communicate with the host modelling environment. 
The degree of difficulty in developing such interfaces varied significantly depending on how "open" the host modelling environment was and how well documented the associated communication protocols were. Inevitably bespoke component interfaces were difficult to maintain as new versions of the modelling environment were adopted. Additionally, a component wrapper for one environment would not work with an alternative environment from a different simulation vendor. User-added subroutines, for both unit operations and thermodynamic models, are an alternative approach to component integration, but one that suffers from similar difficulties in moving the subroutines from one simulator to another. The development of a standardised plug and play capability hence had the potential to deliver a number of significant business benefits: Lower maintenance costs for operating companies and software vendors due to the standardisation of the interfaces. Continuous capture of lessons learned across the membership community and the associated improvements to the interfaces. The ability to apply a consistent set of simulation components across all CAPE-OPEN compliant simulation environments and other modelling tools such as MATLAB and Microsoft Excel. The ability to choose and incorporate the technically most appropriate model for a particular modelling task with the level of fidelity needed. Concepts A number of commercial simulation programmes are available to support process modelling. Generally one or more of these commercial tools will be used by a given operating company to underpin its modelling activity. In addition, many operating companies also maintain their own in-house software to allow for the modelling of niche applications not fully addressed by the commercial tools. Each simulation programme provides an environment which allows a process flow-sheet to be constructed and the process fluid thermodynamics to be incorporated. The CAPE-OPEN project formally identified such a modelling programme as a Process Modelling Environment (PME) with the requirement that users of a PME should be able to easily connect the PME with other modelling tools without the need to develop bespoke interfaces. To do this a PME would be provided with a CAPE-OPEN "plug" that would allow any CAPE-OPEN component to be added to the modelling environment. All PMEs come with a library of unit operations (vapor-liquid separators, valves, heat exchangers, distillation columns etc.) and a range of thermodynamic methods (equation of state, activity coefficient models, etc.). These library components are normally restricted to usage within the native PME. However, users of a given PME often need to substitute a third-party unit operation or thermodynamic model for the one provided by the native environment. The CAPE-OPEN project formally identified a unit operation or a thermodynamic engine as a Process Modelling Component (PMC) with the requirement that a PMC could be "wrapped" with standard interfaces that would allow it to be placed in a CAPE-OPEN compliant PME without the need for additional interfacing software to be developed – no programming would be required either for the modelling environment or for the core of the modelling component. In order to organise its work programmes the CAPE-OPEN project classified the main elements of a simulation system, namely: Unit operations; the modelling of specific process units, e.g. reactors, distillation columns, heat exchangers. 
A unit operation has ports defining the locations of material stream inputs and outputs and acquires physical properties from Material Objects. Material Objects; these represent process fluid, energy or information streams connecting two or more unit operations. A material object is associated with a thermodynamic package which returns physical properties such as density, viscosity, thermal conductivity, etc. Numerical solvers; efficient iterative numerical methods for solving the highly non-linear equation set formed by a process flow-sheet. Iterative methods are used both to solve the equations of a single unit operation module and to solve the overall flow-sheet containing a number of inter-connected unit operations. Any modelling environment with a CAPE-OPEN interface, for a unit operation or a thermodynamics package, would be able to communicate with any CAPE-OPEN modelling component without the need for additional interfacing software to be written. The CAPE-OPEN specifications define software interfaces for process simulation environments in terms of both the Microsoft standard COM/DCOM and the Common Object Request Broker Architecture (CORBA). Hence both COM and CORBA based simulators are supported by the CAPE-OPEN specifications. The specifications follow an object-oriented approach and are developed and specified using the Unified Modelling Language (UML). Formal Use Cases are developed to define end-user requirements. The Use Cases summarise the activities and interactions involved with the installation and application of a CAPE-OPEN component within a CAPE-OPEN modelling environment. Once developed, the Use Cases provide an effective procedure for testing new CAPE-OPEN components and environments. Support The Global CAPE-OPEN project ended in 2002 and delivered interface specifications for unit operations (in steady-state) and thermodynamic components. A non-profit organisation, CO-LaN, was subsequently established to maintain and support the existing specifications and to continue the development of additional CAPE-OPEN interface specifications. CAPE-OPEN specifications Currently three main CAPE-OPEN specifications have found wide use within the process industries: The unit operation specification, version 1.0, which applies to steady-state modelling; Thermodynamic and physical property interface 1.0; and Thermodynamic and physical property interface version 1.1. This interface is a complete revision of Thermodynamic and physical property interface 1.0 with some extended functionality together with simplifications and increased flexibility designed to make it easier for the CAPE-OPEN implementation to be carried out. Unfortunately this version of the interface is not backwards compatible with version 1.0. The development and support of new CAPE-OPEN components has been actively encouraged and supported by CO-LaN, with attention focussed on new unit operations not readily available in commercial simulators and on the interfacing of proprietary thermodynamic and physical property models to commercial simulation environments while protecting the inherent intellectual property. Currently all of the major commercial process modelling environments are CAPE-OPEN compliant and there are many CAPE-OPEN process modelling components available. A full list of the available PMEs and PMCs is available on the CO-LaN website. Software tools There is no licensing required from CO-LaN or another organization in order to make use of the CAPE-OPEN specifications. 
However, CO-LaN has developed a number of tools to assist with the implementation of CAPE-OPEN interfaces: Software Wizards to assist with the development of the CAPE-OPEN interface for modelling components. Software code examples for thermodynamic components and unit operations to provide templates for new implementations. A CAPE-OPEN testing environment into which components can be plugged and tested for conformity against the CAPE-OPEN specifications. A logging tool to capture all communications between a CAPE-OPEN modelling component and a CAPE-OPEN modelling environment More information on the CO-LaN software tools together with available downloads can be found on the CO-LaN website. In addition, CAPE-OPEN is implemented in freeware such as COCO simulator, in openware such as DWSIM, and in many of the leading commercial simulation tools. Future developments Specifications under development by the CO-LaN include: Dynamic unit operations. This extension to the steady-state unit operation specification will allow third party dynamic unit operation models to be used in a CAPE-OPEN compliant dynamic simulation environment. Chemical reactions which will be issued as an extension to the Thermodynamic interface A flow-sheet monitoring specification A Petroleum fractions interface specification References Simulation software
255133
https://en.wikipedia.org/wiki/Multitrack%20recording
Multitrack recording
Multitrack recording (MTR), also known as multitracking or tracking, is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed. A "track" was simply a different channel recorded to its own discrete area on the tape whereby their relative sequence of recorded events would be preserved, and playback would be simultaneous or synchronized. Prior to the development of multitracking, the sound recording process required all of the singers, band instrumentalists, and/or orchestra accompanists to perform at the same time in the same space. Multitrack recording was a significant technical improvement as it allowed studio engineers to record all of the instruments and vocals for a piece of music separately. Multitracking allowed the engineer to adjust the levels and tone of each individual track, and if necessary, redo certain tracks or overdub parts of the track to correct errors or get a better "take". As well, different electronic effects such as reverb could be applied to specific tracks, such as the lead vocals, while not being applied to other tracks where this effect would not be desirable (e.g., on the electric bass). Multitrack recording was much more than a technical innovation; it also enabled record producers and artists to create new sounds that would be impossible to create outside of the studio, such as a lead singer adding many harmony vocals with their own voice to their own lead vocal part, an electric guitar player playing many harmony parts along with their own guitar solo, or even recording the drums and replaying the track backwards for an unusual effect. In the 1980s and 1990s, computers provided means by which both sound recording and reproduction could be digitized, revolutionizing audio recording and distribution. In the 2000s, multitracking hardware and software for computers was of sufficient quality to be widely used for high-end audio recordings by both professional sound engineers and by bands recording without studios using widely available programs, which can be used on a high-end laptop computer. Though magnetic tape has not been replaced as a recording medium, the advantages of non-linear editing (NLE) and recording have resulted in digital systems largely superseding tape. Even in the 2010s, with digital multitracking being the dominant technology, the original word "track" is still used by audio engineers. Process Multi-tracking can be achieved with analogue recording, tape-based equipment (from simple, late-1970s cassette-based four-track Portastudios, to eight-track cassette machines, to 2" reel-to-reel 24-track machines), digital equipment that relies on tape storage of recorded digital data (such as ADAT eight-track machines) and hard disk-based systems often employing a computer and audio recording software. Multi-track recording devices vary in their specifications, such as the number of simultaneous tracks available for recording at any one time; in the case of tape-based systems this is limited by, among other factors, the physical size of the tape employed. With the introduction of SMPTE timecode in the early 1970s, engineers began to use computers to perfectly synchronize separate audio and video playback, or multiple audio tape machines. 
In this system, one track of each machine carried the timecode signal, while the remaining tracks were available for sound recording. Some large studios were able to link multiple 24-track machines together. An extreme example of this occurred in 1982, when the rock group Toto recorded parts of Toto IV on three synchronized 24-track machines. This setup theoretically provided for up to 69 audio tracks, which is far more than necessary for most recording projects. For computer-based systems, the trend in the 2000s is towards unlimited numbers of record/playback tracks, although available RAM and CPU power do limit this from machine to machine. Moreover, on computer-based systems, the number of simultaneously available recording tracks is limited by the number of discrete analog or digital inputs on the sound card. When recording, audio engineers can select which track (or tracks) on the device will be used for each instrument, voice, or other input and can even blend two instruments onto one track to vary the music and sound options available. At any given point on the tape, any of the tracks on the recording device can be recording or playing back using sel-sync or Selective Synchronous recording. This allows an artist to record onto track 2 and, simultaneously, listen to tracks 1, 3 and 7, allowing them to sing or to play an accompaniment to the performance already recorded on these tracks. They might then record an alternate version on track 4 while listening to the other tracks. All the tracks can then be played back in perfect synchrony, as if they had originally been played and recorded together. This can be repeated until all of the available tracks have been used, or in some cases, reused. During mixdown, a separate set of playback heads with higher fidelity is used. Before all tracks are filled, any number of existing tracks can be "bounced" into one or two tracks, and the original tracks erased, making more room for more tracks to be reused for fresh recording. In 1963, the Beatles were using twin track for Please Please Me. The Beatles' producer George Martin used this technique extensively to achieve multiple-track results, while still being limited to using only multiple four-track machines, until an eight-track machine became available during the recording of the Beatles' self-titled ninth album. The Beach Boys' Pet Sounds also made innovative use of multitracking with eight-track machines of the day (circa 1965). Motown also began recording with eight-track machines in 1965, before moving to 16-track machines in mid-1969. Multitrack recording also allows any recording artist to record multiple "takes" of any given section of their performance, allowing them to refine their performance to virtual perfection by making additional "takes" of songs or instrumental tracks. A recording engineer can record only the section being worked on, without erasing any other section of that track. This process of turning the recording mechanism on and off is called "punching in" and "punching out". (See "Punch in / out".) When recording is completed, the many tracks are "mixed down" through a mixing console to a two-track stereo recorder in a format which can then be duplicated and distributed. (Movie and DVD soundtracks can be mixed down to four or more tracks, as needed, the most common being five tracks, with an additional Low Frequency Effects track, hence the "5.1" surround sound most commonly available on DVDs.) 
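As a rough illustration of the digital mixdown step described above, the following C sketch sums a few hypothetical mono tracks, each with its own gain and pan setting, into a stereo pair and clips the result to the legal sample range. It is a deliberately simplified assumption-based example, not the algorithm of any particular console or workstation.

/* Illustrative sketch of a digital mixdown: several mono tracks, each with its
 * own gain and pan, are summed into a stereo pair and clipped to [-1, 1].
 * Real digital audio workstations are far more elaborate. */
#include <stdio.h>

#define TRACKS  3
#define FRAMES  4

static float clip(float s)            /* keep the mixed sample within [-1, 1] */
{
    if (s > 1.0f)  return 1.0f;
    if (s < -1.0f) return -1.0f;
    return s;
}

int main(void)
{
    /* Hypothetical recorded tracks (normalized samples). */
    float track[TRACKS][FRAMES] = {
        { 0.20f,  0.40f, -0.10f,  0.00f },   /* e.g. bass   */
        { 0.50f, -0.30f,  0.25f,  0.10f },   /* e.g. guitar */
        { 0.10f,  0.10f,  0.60f, -0.20f },   /* e.g. vocal  */
    };
    float gain[TRACKS] = { 0.8f, 0.6f, 1.0f };   /* fader levels        */
    float pan[TRACKS]  = { 0.5f, 0.2f, 0.8f };   /* 0 = left, 1 = right */

    for (int f = 0; f < FRAMES; f++) {
        float left = 0.0f, right = 0.0f;
        for (int t = 0; t < TRACKS; t++) {
            float s = track[t][f] * gain[t];
            left  += s * (1.0f - pan[t]);
            right += s * pan[t];
        }
        printf("frame %d: L=%.3f R=%.3f\n", f, clip(left), clip(right));
    }
    return 0;
}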
Most of the records, CDs and cassettes commercially available in a music store are recordings that were originally recorded on multiple tracks, and then mixed down to stereo. In some rare cases, as when an older song is technically "updated", these stereo (or mono) mixes can in turn be recorded (as if it were a "submix") onto two (or one) tracks of a multitrack recorder, allowing additional sound (tracks) to be layered on the remaining tracks. Flexibility During multitracking, multiple musical instruments (and vocals) can be recorded, either one at a time or simultaneously, onto individual tracks, so that the sounds thus recorded can be accessed, processed and manipulated individually to produce the desired results. In the 2010s, many rock and pop bands record each part of the song one after the other. First, the bass and drums are often recorded, followed by the chordal rhythm section instruments. Then the lead vocals and guitar solos are added. As a last step, the harmony vocals are added. On the other hand, orchestras are always recorded with all 70 to 100 instrumentalists playing their parts simultaneously. If each group of instrument has its own microphone, and each instrument with a solo melody has its own microphone, the different microphones can record on multiple tracks simultaneously. After recording the orchestra, the record producer and conductor can adjust the balance and tone of the different instrument sections and solo instruments, because each section and solo instrument was recorded to its own track. With the rock or pop band example, after recording some parts of a song, an artist might listen to only the guitar part, by 'muting' all the tracks except the one on which the guitar was recorded. If one then wanted to listen to the lead vocals in isolation, one would do so by muting all the tracks apart from the lead vocals track. If one wanted to listen to the entire song, one could do so by un-muting all the tracks. If one did not like the guitar part, or found a mistake in it, and wanted to replace it, one could do so by re-recording only the guitar part (i.e., re-recording only the track on which the guitar was recorded), rather than re-recording the entire song. If all the voices and instruments in a recording are individually recorded on distinct tracks, then the artist is able to retain complete control over the final sculpting of the song, during the mix-down (re-recording to two stereo tracks for mass distribution) phase. For example, if an artist wanted to apply one effects unit to a synthesizer part, a different effect to a guitar part, a 'chorused reverb' effect to the lead vocals, and different effects to all the drums and percussion instruments, they could not do so if they had all been originally recorded together onto the same track. However, if they had been recorded onto separate tracks, then the artist could blend and alter all of the instrument and vocal sounds with complete freedom. Multitracking a song also leaves open the possibilities of remixes by the same or future artists, such as DJs. If the song was not available in a multitrack format recording, the job of the remixing artist was very difficult, or impossible, because, once the tracks had been re-recorded together onto a single track ('mixed down'), they were previously considered inseparable. More recent software allows sound source separation, whereby individual instruments, voices and effects can be 'upmixed' — isolated from a single-track source — in high quality. 
This has permitted the production of stereophonic or surround sound mixes of recordings that were originally mastered and released in mono. History The process was conceived and developed by Ross Snyder at Ampex in 1955 resulting in the first Sel-Sync machine, an 8-track machine which used one-inch tape. This 8-track recorder was sold to the American guitarist, songwriter, luthier, and inventor Les Paul for $10,000. It became known as the "Octopus". Les Paul, Mary Ford and Patti Page used the technology in the late 1950s to enhance vocals and instruments. From these beginnings, it evolved in subsequent decades into a mainstream recording technique. With computers Since the early 1990s, many performers have recorded music using only a Mac or PC equipped with multitrack recording software as a tracking machine. The computer must have a sound card or other type of audio interface with one or more Analog-to-digital converters. Microphones are needed to record the sounds of vocalists or acoustic instruments. Depending on the capabilities of the system, some instruments, such as a synthesizer or electric guitar, can also be sent to an interface directly using Line level or MIDI inputs. Direct inputs eliminate the need for microphones and can provide another range of sound control options. There are tremendous differences in computer audio interfaces. Such units vary widely in price, sound quality, and flexibility. The most basic interfaces use audio circuitry that is built into the computer motherboard. The most sophisticated audio interfaces are external units of professional studio quality which can cost thousands of dollars. Professional interfaces usually use one or more IEEE 1394 (commonly known as FireWire) connections. Other types of interfaces may use internal PCI cards, or external USB connections. Popular manufacturers of high-quality interfaces include Apogee Electronics, Avid Audio (formerly Digidesign), Echo Digital Audio, Focusrite, MOTU, RME Audio, M-Audio and PreSonus. Microphones are often designed for highly specific applications and have a major effect on recording quality. A single studio-quality microphone can cost $5,000 or more, while consumer-quality recording microphones can be bought for less than $50 each. Microphones also need some type of microphone preamplifier to prepare the signal for use by other equipment. These preamplifiers can also have a major effect on the sound and come in different price ranges, physical configurations, and capability levels. Microphone preamplifiers may be external units or a built in feature of other audio equipment. Software Software for multitrack recording can record multiple tracks at once. It generally uses graphic notation for an interface and offers a number of views of the music. Most multitrackers also provide audio playback capability. Some multitrack software also provides MIDI playback functions not just for audio; during playback the MIDI data is sent to a softsynth or virtual instrument (e.g., VSTi) which converts the data to audio sound. Multitrack software may also provide other features that qualify it being called a digital audio workstation (DAW). These features may include various displays including showing the score of the music, as well as editing capability. There is often overlap between many of the categories of musical software. In this case scorewriters and full featured multitrackers such as DAWs have similar features for playback, but may have less similarity for editing and recording. 
Multitrack recording software varies widely in price and capability. Popular multitrack recording software programs include: Reason, Ableton Live, FL Studio, Adobe Audition, Pro Tools, Digital Performer, Cakewalk Sonar, Samplitude, Nuendo, Cubase and Logic. Lower-cost alternatives include Mixcraft, REAPER and n-Track Studio. Open-source and free software programs are also available for multitrack recording. These range from very basic programs such as Jokosher to Ardour and Audacity, which are capable of performing many functions of the most sophisticated programs. Instruments and voices are usually recorded as individual files on a computer hard drive. These function as tracks which can be added, removed or processed in many ways. Effects such as reverb, chorus, and delays can be applied by electronic devices or by computer software. Such effects are used to shape the sound as desired by the producer. When the producer is satisfied with the recorded sound finished tracks can be mixed into a new stereo pair of tracks within the multitrack recording software. Finally, the final stereo recording can be written to a CD, which can be copied and distributed. Order of recording In modern popular songs, drums, percussion instruments and electric bass are often among the first instruments to be recorded. These are the core instruments of the rhythm section. Musicians recording later tracks use the precise attack of the drum sounds as a rhythmic guide. In some styles, the drums may be recorded for a few bars and then looped. Click (metronome) tracks are also often used as the first sound to be recorded, especially when the drummer is not available for the initial recording, and/or the final mix will be synchronized with motion picture and/or video images. One reason that a band may start with just the drums is because this allows the band to pick the song's key later on. The producer and the musicians can experiment with the song's key and arrangement against the basic rhythm track. Also, though the drums might eventually be mixed down to a couple of tracks, each individual drum and percussion instrument might be initially recorded to its own individual track. The drums and percussion combined can occupy a large number of tracks utilized in a recording. This is done so that each percussion instrument can be processed individually for maximum effect. Equalization (or EQ) is often used on individual drums, to bring out each one's characteristic sound. The last tracks recorded are often the vocals (though a temporary vocal track may be recorded early on either as a reference or to guide subsequent musicians; this is sometimes called a "guide vocal", "ghost vocal" or "scratch vocal"). One reason for this is that singers will often temper their vocal expression in accordance with the accompaniment. Producers and songwriters can also use the guide/scratch vocal when they have not quite ironed out all the lyrics or for flexibility based on who sings the lead vocal (as The Alan Parsons Project's Eric Woolfson often did). Concert music For classical and jazz recordings, particularly instrumentals where multitracking is chosen as the recording method (as opposed to direct to stereo, for example), a different arrangement is used; all tracks are recorded simultaneously. Sound barriers are often placed between different groups within the orchestra, e.g. pianists, violinists, percussionists, etc. When barriers are used, these groups listen to each other via headphones. 
See also Click track Comparison of digital audio editors Digital audio workstation Dynamic range compression Fostex List of musical works released in a stem format MIDI mockup Module file Overdubbing Portastudio Surround sound Quadraphonic sound Remix Reverb Sound effects#Techniques TASCAM Revox References External links n-Track Studio - Multitrack recording software "All You Need is Ears" by George Martin, p. 148-157 The History of Magnetic Recording Recording Technology History Der Bingle Technology "Both Sides Now" webpage on Ampex Records AES Historical Committee: Ampex History Project Sound recording Tape recording ja:マルチトラック・レコーダー
1347955
https://en.wikipedia.org/wiki/File-system%20permissions
File-system permissions
Most file systems include attributes of files and directories that control the ability of users to read, change, navigate, and execute the contents of the file system. In some cases, menu options or functions may be made visible or hidden depending on a user's permission level; this kind of user interface is referred to as permission-driven. Two types of permissions are very widely available: traditional Unix permissions and Access Control Lists (ACLs), which are capable of more specific control. File system variations The original File Allocation Table (FAT) file system, designed for single-user systems, has a read-only attribute which is not actually a permission. NTFS, implemented in Microsoft Windows NT and its derivatives, uses ACLs to provide a complex set of permissions. OpenVMS uses a permission scheme similar to that of Unix. There are four categories (System, Owner, Group, and World) and four types of access permissions (Read, Write, Execute and Delete). The categories are not mutually disjoint: World includes Group, which in turn includes Owner. The System category independently includes system users. HFS, implemented in Classic Mac OS operating systems, does not support permissions. Mac OS X versions 10.3 ("Panther") and prior use POSIX-compliant permissions. Mac OS X, beginning with version 10.4 ("Tiger"), also supports the use of NFSv4 ACLs. They support "traditional Unix permissions" as used in previous versions of Mac OS X, and the Apple Mac OS X Server version 10.4+ File Services Administration Manual recommends using only traditional Unix permissions if possible. It also still supports the Classic Mac OS "Protected" attribute. Solaris ACL support depends on the filesystem being used; the older UFS filesystem supports POSIX.1e ACLs, while ZFS supports only NFSv4 ACLs. Linux supports ext2, ext3, ext4, Btrfs and other file systems, many of which include POSIX.1e ACLs. There is experimental support for NFSv4 ACLs for ext3 and ext4 filesystems. FreeBSD supports POSIX.1e ACLs on UFS, and NFSv4 ACLs on UFS and ZFS. IBM z/OS implements file security using RACF (Resource Access Control Facility). The AmigaOS filesystem, AmigaDOS, supports a permissions system relatively advanced for a single-user OS. In AmigaOS 1.x, files had Archive, Read, Write, Execute and Delete (collectively known as ARWED) permissions/flags. In AmigaOS 2.x and higher, additional Hold, Script, and Pure permissions/flags were added. Traditional Unix permissions Permissions on Unix-like file systems are managed in three scopes or classes known as user, group, and others. When a file is created its permissions are restricted by the umask of the process that created it. Classes Files and directories are owned by a user. The owner determines the file's user class. Distinct permissions apply to the owner. Files and directories are assigned a group, which defines the file's group class. Distinct permissions apply to members of the file's group. The owner may be a member of the file's group. Users who are neither the owner nor a member of the group comprise a file's others class. Distinct permissions apply to others. The effective permissions are determined based on the first class the user falls within, in the order of user, group, then others. For example, the user who is the owner of the file will have the permissions given to the user class regardless of the permissions assigned to the group class or others class. 
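The "first matching class wins" rule described above can be illustrated with a short C sketch using stat(2). It deliberately ignores supplementary groups and the superuser exception, which real systems also take into account, so it is an assumption-laden simplification rather than an exact reproduction of kernel behaviour.

/* Sketch of class selection: a process is checked against the user class if it
 * owns the file, otherwise the group class, otherwise the others class.
 * Supplementary groups and root's override are omitted for brevity. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return the three permission bits (rwx) that apply to the calling process. */
static mode_t effective_bits(const struct stat *st)
{
    if (st->st_uid == getuid())
        return (st->st_mode >> 6) & 7;   /* user class  */
    if (st->st_gid == getgid())
        return (st->st_mode >> 3) & 7;   /* group class */
    return st->st_mode & 7;              /* others class */
}

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = argc > 1 ? argv[1] : ".";

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    mode_t bits = effective_bits(&st);
    printf("%s: read=%d write=%d execute=%d\n", path,
           !!(bits & 4), !!(bits & 2), !!(bits & 1));
    return 0;
}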
Permissions Unix-like systems implement three specific permissions that apply to each class: The read permission grants the ability to read a file. When set for a directory, this permission grants the ability to read the names of files in the directory, but not to find out any further information about them such as contents, file type, size, ownership, permissions. The write permission grants the ability to modify a file. When set for a directory, this permission grants the ability to modify entries in the directory, which includes creating files, deleting files, and renaming files. Note that this requires that execute is also set; without it, the write permission is meaningless for directories. The execute permission grants the ability to execute a file. This permission must be set for executable programs, in order to allow the operating system to run them. When set for a directory, the execute permission is interpreted as the search permission: it grants the ability to access file contents and meta-information if its name is known, but not list files inside the directory, unless read is set also. The effect of setting the permissions on a directory, rather than a file, is "one of the most frequently misunderstood file permission issues". When a permission is not set, the corresponding rights are denied. Unlike ACL-based systems, permissions on Unix-like systems are not inherited. Files created within a directory do not necessarily have the same permissions as that directory. Changing permission behavior with setuid, setgid, and sticky bits Unix-like systems typically employ three additional modes. These are actually attributes but are referred to as permissions or modes. These special modes are for a file or directory overall, not by a class, though in the symbolic notation (see below) the setuid bit is set in the triad for the user, the setgid bit is set in the triad for the group and the sticky bit is set in the triad for others. The set user ID, setuid, or SUID mode. When a file with setuid is executed, the resulting process will assume the effective user ID given to the owner class. This enables users to be treated temporarily as root (or another user). The set group ID, setgid, or SGID permission. When a file with setgid is executed, the resulting process will assume the group ID given to the group class. When setgid is applied to a directory, new files and directories created under that directory will inherit their group from that directory. (Default behaviour is to use the primary group of the effective user when setting the group of new files and directories, except on BSD-derived systems which behave as though the setgid bit is always set on all directories (see Setuid).) The sticky mode (also known as the Text mode). The classical behaviour of the sticky bit on executable files has been to encourage the kernel to retain the resulting process image in memory beyond termination; however, such use of the sticky bit is now restricted to only a minority of unix-like operating systems (HP-UX and UnixWare). On a directory, the sticky permission prevents users from renaming, moving or deleting contained files owned by users other than themselves, even if they have write permission to the directory. Only the directory owner and superuser are exempt from this. These additional modes are also referred to as setuid bit, setgid bit, and sticky bit, due to the fact that they each occupy only one bit. 
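The special bits described above can be inspected programmatically with the POSIX masks S_ISUID, S_ISGID and S_ISVTX, as in the following minimal sketch; the choice of /usr/bin/passwd as the default path is only an assumption of a file that is commonly installed setuid.

/* Sketch of testing the setuid, setgid and sticky bits with the POSIX masks
 * S_ISUID, S_ISGID and S_ISVTX after a stat(2) call. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = argc > 1 ? argv[1] : "/usr/bin/passwd"; /* commonly setuid */

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("%s: setuid=%d setgid=%d sticky=%d\n", path,
           !!(st.st_mode & S_ISUID),
           !!(st.st_mode & S_ISGID),
           !!(st.st_mode & S_ISVTX));
    return 0;
}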
Notation of traditional Unix permissions Symbolic notation Unix permissions are represented either in symbolic notation or in octal notation. The most common form, as used by the command ls -l, is symbolic notation. The first character of the ls display indicates the file type and is not related to permissions. The remaining nine characters are in three sets, each representing a class of permissions as three characters. The first set represents the user class. The second set represents the group class. The third set represents the others class. Each of the three characters represent the read, write, and execute permissions: r if reading is permitted, - if it is not. w if writing is permitted, - if it is not. x if execution is permitted, - if it is not. The following are some examples of symbolic notation: -rwxr-xr-x: a regular file whose user class has full permissions and whose group and others classes have only the read and execute permissions. crw-rw-r--: a character special file whose user and group classes have the read and write permissions and whose others class has only the read permission. dr-x------: a directory whose user class has read and execute permissions and whose group and others classes have no permissions. In some permission systems additional symbols in the ls -l display represent additional permission features: + (plus) suffix indicates an access control list that can control additional permissions. . (dot) suffix indicates an SELinux context is present. Details may be listed with the command ls -Z. @ suffix indicates extended file attributes are present. To represent the setuid, setgid and sticky or text attributes, the executable character (x or -) is modified. Though these attributes affect the overall file, not only users in one class, the setuid attribute modifies the executable character in the triad for the user, the setgid attribute modifies the executable character in the triad for the group and the sticky or text attribute modifies the executable character in the triad for others. For the setuid or setgid attributes, in the first or second triad, the x becomes s and the - becomes S. For the sticky or text attribute, in the third triad, the x becomes t and the - becomes T. Here is an example: -rwsr-Sr-t: a file whose user class has read, write and execute permissions; whose group class has read permission; whose others class has read and execute permissions; and which has setuid, setgid and sticky attributes set. Numeric notation Another method for representing Unix permissions is an octal (base-8) notation as shown by stat -c %a. This notation consists of at least three digits. Each of the three rightmost digits represents a different component of the permissions: owner, group, and others. (If a fourth digit is present, the leftmost (high-order) digit addresses three additional attributes, the setuid bit, the setgid bit and the sticky bit.) Each of these digits is the sum of its component bits in the binary numeral system. As a result, specific bits add to the sum as it is represented by a numeral: The read bit adds 4 to its total (in binary 100), The write bit adds 2 to its total (in binary 010), and The execute bit adds 1 to its total (in binary 001). These values never produce ambiguous combinations; each sum represents a specific set of permissions. 
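The correspondence between the two notations can be illustrated with a short C sketch that renders a numeric mode as the nine-character string printed by ls -l, including the s/S and t/T substitutions described above. The example modes are taken from the symbolic examples already given; the helper name is purely illustrative.

/* Sketch that renders a numeric mode as the nine-character symbolic string
 * used by ls -l, including the s/S and t/T substitutions for the setuid,
 * setgid and sticky bits. */
#include <stdio.h>
#include <sys/stat.h>

static void mode_to_symbolic(mode_t m, char out[10])
{
    const char *rwx = "rwxrwxrwx";
    for (int i = 0; i < 9; i++)
        out[i] = (m & (0400 >> i)) ? rwx[i] : '-';

    if (m & S_ISUID) out[2] = (m & S_IXUSR) ? 's' : 'S';
    if (m & S_ISGID) out[5] = (m & S_IXGRP) ? 's' : 'S';
    if (m & S_ISVTX) out[8] = (m & S_IXOTH) ? 't' : 'T';
    out[9] = '\0';
}

int main(void)
{
    char buf[10];
    mode_t examples[] = { 0755, 0664, 0500, 07745 };  /* 07745 renders as rwsr-Sr-t */

    for (unsigned i = 0; i < sizeof examples / sizeof examples[0]; i++) {
        mode_to_symbolic(examples[i], buf);
        printf("%04o -> %s\n", (unsigned)examples[i], buf);
    }
    return 0;
}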
More technically, this is an octal representation of a bit field – each bit references a separate permission, and grouping 3 bits at a time in octal corresponds to grouping these permissions by user, group, and others. These are the examples from the symbolic notation section given in octal notation: -rwxr-xr-x is 755, crw-rw-r-- is 664, and dr-x------ is 500. User private group Some systems diverge from the traditional POSIX model of users and groups by creating a new group – a "user private group" – for each user. Assuming that each user is the only member of its user private group, this scheme allows an umask of 002 to be used without allowing other users to write to newly created files in normal directories, because such files are assigned to the creating user's private group. However, when sharing files is desirable, the administrator can create a group containing the desired users, create a group-writable directory assigned to the new group, and, most importantly, make the directory setgid. Making it setgid will cause files created in it to be assigned to the same group as the directory, and the 002 umask (enabled by using user private groups) will ensure that other members of the group will be able to write to those files. See also Comparison of file systems § Metadata chmod: change mode (permissions) on Unix-like file systems chattr or chflags: change attributes or flags including those which restrict access. lsattr: list attributes POSIX umask User identifier (Unix) Group identifier (Unix) References External links The Linux Cookbook: Groups and How to Work in Them by Michael Stutz 2004
18932622
https://en.wikipedia.org/wiki/Berkeley%20Software%20Distribution
Berkeley Software Distribution
The Berkeley Software Distribution or Berkeley Standard Distribution (BSD) is a discontinued operating system based on Research Unix, developed and distributed by the Computer Systems Research Group (CSRG) at the University of California, Berkeley. The term "BSD" commonly refers to its descendants, including FreeBSD, OpenBSD, NetBSD, and DragonFly BSD. BSD was initially called Berkeley Unix because it was based on the source code of the original Unix developed at Bell Labs. In the 1980s, BSD was widely adopted by workstation vendors in the form of proprietary Unix variants such as DEC Ultrix and Sun Microsystems SunOS due to its permissive licensing and familiarity to many technology company founders and engineers. Although these proprietary BSD derivatives were largely superseded in the 1990s by UNIX SVR4 and OSF/1, later releases provided the basis for several open-source operating systems including FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Darwin, and TrueOS. These, in turn, have been used by proprietary operating systems, including Apple's macOS and iOS, which derived from them, and Microsoft Windows, which used (at least) part of its TCP/IP code, which was legal. Code from FreeBSD was also used to create the operating system for the PlayStation 4 and Nintendo Switch. History The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons, this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so that Unix only ran on the machine eight hours per day (sometimes during the day, sometimes during the night). A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. Understanding BSD requires delving far back into the history of Unix, the operating system first released by AT&T Bell Labs in 1969. BSD began life as a variant of Unix that programmers at the University of California at Berkeley, initially led by Bill Joy, began developing in the late 1970s. At first, BSD was not a clone of Unix, or even a substantially different version of it. It just included some extra features, which were intertwined with code owned by AT&T. In 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out. The second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex) and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. 
A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was largely rewritten to include Berkeley graduate student Ozalp Babaoglu's virtual memory implementation, and a complete operating system including the new kernel, ports of the 2BSD utilities to the VAX, and the utilities from 32V was released as 3BSD at the end of 1979. 3BSD was also alternatively called Virtual VAX/UNIX or VMUNIX (for Virtual Memory Unix), and BSD kernel images were normally called /vmunix until 4.4BSD. After 4.3BSD was released in June 1986, it was determined that BSD would move away from the aging VAX platform. The Power 6/32 platform (codenamed "Tahoe") developed by Computer Consoles Inc. seemed promising at the time, but was abandoned by its developers shortly thereafter. Nonetheless, the 4.3BSD-Tahoe port (June 1988) proved valuable, as it led to a separation of machine-dependent and machine-independent code in BSD which would improve the system's future portability. In addition to portability, the CSRG worked on an implementation of the OSI network protocol stack, improvements to the kernel virtual memory system and (with Van Jacobson of LBL) new TCP/IP algorithms to accommodate the growth of the Internet. Until then, all versions of BSD used proprietary AT&T Unix code, and were therefore subject to an AT&T software license. Source code licenses had become very expensive and several outside parties had expressed interest in a separate release of the networking code, which had been developed entirely outside AT&T and would not be subject to the licensing requirement. This led to Networking Release 1 (Net/1), which was made available to non-licensees of AT&T code and was freely redistributable under the terms of the BSD license. It was released in June 1989. After Net/1, BSD developer Keith Bostic proposed that more non-AT&T sections of the BSD system be released under the same license as Net/1. To this end, he started a project to reimplement most of the standard Unix utilities without using the AT&T code. Within eighteen months, all of the AT&T utilities had been replaced, and it was determined that only a few AT&T files remained in the kernel. These files were removed, and the result was the June 1991 release of Networking Release 2 (Net/2), a nearly complete operating system that was freely distributable. Net/2 was the basis for two separate ports of BSD to the Intel 80386 architecture: the free 386BSD by William Jolitz and the proprietary BSD/386 (later renamed BSD/OS) by Berkeley Software Design (BSDi). 386BSD itself was short-lived, but became the initial code base of the NetBSD and FreeBSD projects that were started shortly thereafter. BSDi soon found itself in legal trouble with AT&T's Unix System Laboratories (USL) subsidiary, then the owners of the System V copyright and the Unix trademark. The USL v. BSDi lawsuit was filed in 1992 and led to an injunction on the distribution of Net/2 until the validity of USL's copyright claims on the source could be determined. The lawsuit slowed development of the free-software descendants of BSD for nearly two years while their legal status was in question, and as a result systems based on the Linux kernel, which did not have such legal ambiguity, gained greater support. The lawsuit was settled in January 1994, largely in Berkeley's favor. 
Of the 18,000 files in the Berkeley distribution, only three had to be removed and 70 modified to show USL copyright notices. A further condition of the settlement was that USL would not file further lawsuits against users and distributors of the Berkeley-owned code in the upcoming 4.4BSD release. The final release from Berkeley was 1995's 4.4BSD-Lite Release 2, after which the CSRG was dissolved and development of BSD at Berkeley ceased. Since then, several variants based directly or indirectly on 4.4BSD-Lite (such as FreeBSD, NetBSD, OpenBSD and DragonFly BSD) have been maintained. The permissive nature of the BSD license has allowed many other operating systems, both open-source and proprietary, to incorporate BSD source code. For example, Microsoft Windows used BSD code in its implementation of TCP/IP and bundles recompiled versions of BSD's command-line networking tools since Windows 2000. Darwin, the basis for Apple's macOS and iOS, is based on 4.4BSD-Lite2 and FreeBSD. Various commercial Unix operating systems, such as Solaris, also incorporate BSD code. Relationship to Research Unix Starting with the 8th Edition, versions of Research Unix at Bell Labs had a close relationship to BSD. This began when 4.1cBSD for the VAX was used as the basis for Research Unix 8th Edition. This continued in subsequent versions, such as the 9th Edition, which incorporated source code and improvements from 4.3BSD. The result was that these later versions of Research Unix were closer to BSD than they were to System V. In a Usenet posting from 2000, Dennis Ritchie described this relationship between BSD and Research Unix: Relationship to System V Eric S. Raymond summarizes the longstanding relationship between System V and BSD, stating, "The divide was roughly between longhairs and shorthairs; programmers and technical people tended to line up with Berkeley and BSD, more business-oriented types with AT&T and System V." In 1989, David A. Curry wrote about the differences between BSD and System V. He characterized System V as being often regarded as the "standard Unix." However, he described BSD as more popular among university and government computer centers, due to its advanced features and performance: Technology Berkeley sockets Berkeley's Unix was the first Unix to include libraries supporting the Internet Protocol stacks: Berkeley sockets. A Unix implementation of IP's predecessor, the ARPAnet's NCP, with FTP and Telnet clients, had been produced at the University of Illinois in 1975, and was available at Berkeley. However, the memory scarcity on the PDP-11 forced a complicated design and performance problems. By integrating sockets with the Unix operating system's file descriptors, it became almost as easy to read and write data across a network as it was to access a disk. The AT&T laboratory eventually released their own STREAMS library, which incorporated much of the same functionality in a software stack with a different architecture, but the wide distribution of the existing sockets library reduced the impact of the new API. Early versions of BSD were used to form Sun Microsystems' SunOS, founding the first wave of popular Unix workstations. Binary compatibility Some BSD operating systems can run much native software of several other operating systems on the same architecture, using a binary compatibility layer. Much simpler and faster than emulation, this allows, for instance, applications intended for Linux to be run at effectively full speed. 
This makes BSDs not only suitable for server environments, but also for workstation ones, given the increasing availability of commercial or closed-source software for Linux only. This also allows administrators to migrate legacy commercial applications, which may have only supported commercial Unix variants, to a more modern operating system, retaining the functionality of such applications until they can be replaced by a better alternative. Standards Current BSD operating system variants support many of the common IEEE, ANSI, ISO, and POSIX standards, while retaining most of the traditional BSD behavior. Like AT&T Unix, the BSD kernel is monolithic, meaning that device drivers in the kernel run in privileged mode, as part of the core of the operating system. BSD descendants Several operating systems are based on BSD, including FreeBSD, OpenBSD, NetBSD, MidnightBSD, GhostBSD, Darwin and DragonFly BSD. Both NetBSD and FreeBSD were created in 1993. They were initially derived from 386BSD (also known as "Jolix"), and merged the 4.4BSD-Lite source code in 1994. OpenBSD was forked from NetBSD in 1995, and DragonFly BSD was forked from FreeBSD in 2003. BSD was also used as the basis for several proprietary versions of Unix, such as Sun's SunOS, Sequent's DYNIX, NeXT's NeXTSTEP, DEC's Ultrix and OSF/1 AXP (now Tru64 UNIX). NeXTSTEP later became the foundation for Apple Inc.'s macOS. See also BSD Daemon BSD licenses Comparison of BSD operating systems List of BSD operating systems Unix wars References Bibliography Marshall K. McKusick, Keith Bostic, Michael J. Karels, John S. Quartermain, The Design and Implementation of the 4.4BSD Operating System (Addison Wesley, 1996; ) Marshall K. McKusick, George V. Neville-Neil, The Design and Implementation of the FreeBSD Operating System (Addison Wesley, August 2, 2004; ) Samuel J. Leffler, Marshall K. McKusick, Michael J. Karels, John S. Quarterman, The Design and Implementation of the 4.3BSD UNIX Operating System (Addison Wesley, November 1989; ) Peter H. Salus, The Daemon, the GNU & The Penguin (Reed Media Services, September 1, 2008; ) Peter H. Salus, A Quarter Century of UNIX (Addison Wesley, June 1, 1994; ) Peter H. Salus, Casting the Net (Addison-Wesley, March 1995; ) External links A timeline of BSD and Research UNIX UNIX History – History of UNIX and BSD using diagrams The Design and Implementation of the 4.4BSD Operating System The Unix Tree: Source code and manuals for old versions of Unix EuroBSDCon, an annual event in Europe in September, October or November, founded in 2001 BSDCan, a conference in Ottawa, Ontario, Canada, held annually in May since 2004, in June since 2015 AsiaBSDCon, a conference in Tokyo, held annually in March of each year, since 2007 mdoc.su – short manual page URLs for FreeBSD, OpenBSD, NetBSD and DragonFly BSD, a web-service written in nginx BXR.SU – Super User's BSD Cross Reference, a userland and kernel source code search engine based on OpenGrok and nginx 1977 software Free software operating systems Free software programmed in C Operating system families Science and technology in the San Francisco Bay Area University of California, Berkeley
65562075
https://en.wikipedia.org/wiki/Eneide%20%28TV%20serial%29
Eneide (TV serial)
Eneide is a seven-episode 1971–1972 Italian television drama, adapted by Franco Rossi from Virgil's epic poem the Aeneid. It stars Giulio Brogi as Aeneas and Olga Karlatos as Dido, and also stars Alessandro Haber, Andrea Giordana and Marilù Tolo. RAI originally broadcast the hour-long episodes from 19 December 1971 to 30 January 1972. A shorter theatrical version was released in 1974 as Le avventure di Enea. Plot Episode 1: The city of Troy is in ruins after the Trojan War. One of the survivors is the demigod Aeneas, who escaped with a Trojan fleet. He arrives at Carthage in North Africa, where the queen Dido asks him to tell his story. He begins by telling her about the Trojan Horse. Episode 2: Aeneas tells Dido how he travelled on the Mediterranean Sea and visited Delos, where an oracle told him to find the "ancient mother". He decided to travel west. Episode 3: After having heard Aeneas' story, Dido dismisses him. She is, however, fascinated by his search for the earth mother and cannot sleep. She tells him to go and find her in the land of Hesperia, located to the north. Episode 4: Aeneas finds a community of Trojan survivors on an island. Juno instigates the Trojan women to set fire to the fleet, but it is saved by rainfall. Episode 5: Aeneas' mother Venus guides him to the underworld to receive strength from his father's shadow. The Trojans arrive at the Tiber in Latium, where a prophecy says that Lavinia, the daughter of the king Latinus, will marry a foreigner. Aeneas develops a bond with Turnus, king of the Rutuli. Episode 6: After advice from Latinus, Aeneas visits the inland, where an old Greek man tells him legends. Intrigues involving Lavinia and Turnus stir up conflict between the Trojans and the Latins. Episode 7: To resolve the conflict, Aeneas challenges Turnus to single combat to the death. He wins and marries Lavinia. On his deathbed, Latinus bequeaths his land to Aeneas. Cast Giulio Brogi as Aeneas Olga Karlatos as Dido Andrea Giordana as Turnus Marilù Tolo as Venus Vasa Pantelic as Anchises Arsen Costa as Ascanius Marisa Bartoli as Andromache Angelica Zielke as Creusa Ilaria Guerrini as Juno Alessandro Haber as Misenus Christian Ledoux as Palinurus Jaspar Von Oertzen as Evander Jagoda Ristic as Lavinia Anna Maria Gherardi as Amata Janez Vrhovec as Latinus Production Franco Rossi's 1968 television adaptation of Homer's Odyssey had been a success in Italy and elsewhere in Europe, and was followed by an adaptation of the ancient Roman author Virgil's epic poem the Aeneid. As in Rossi's Odyssey, Roberto Rossellini's television works and Pier Paolo Pasolini's films such as Oedipus Rex (1967) and Medea (1969) provided inspiration for the use of natural locations and sometimes intentionally anachronistic set and costume designs. The exterior scenes set in Carthage were filmed in the Bamyan valley in Afghanistan, where one of the giant Buddha statues was used to represent an unnamed pre-Tyrian god. Reception Eneide premiered on the Italian public television network RAI's channel Programma Nazionale, where it aired from 19 December 1971 to 30 January 1972. Like Rossi's Odyssey before it and his Quo Vadis? in 1985, it was well received and distributed internationally. In 1974, a theatrical version edited down to 100 minutes was released in Italian cinemas as Le avventure di Enea.
References External links 1971 Italian television series debuts 1972 Italian television series endings 1970s television miniseries Italian television miniseries Works based on the Aeneid Television shows based on poems Television series based on classical mythology Cultural depictions of Dido Films directed by Franco Rossi Films scored by Mario Nascimbene Italian adventure television series Italian drama television series Italian-language television shows
61624037
https://en.wikipedia.org/wiki/Tariq%20Hilal%20Al%20Barwani
Tariq Hilal Al Barwani
Tariq Al Barwani (Arabic طارق البرواني; born July 1979) is a motivational speaker in Oman who is best known for creating projects in information technology and delivering presentations that help develop people and organizations. He has created radio and TV programs in partnership with the Oman government's Ministry of Information that showcased local talent through interviews, and has also created a knowledge-sharing platform called Knowledge Oman that provides free seminars, workshops and training. Education Al Barwani has a master's degree in information technology from Swinburne University of Technology, Australia. He also has a bachelor's degree with honors in computer science from Acadia University, Canada, and a diploma in information systems from Aptech Advanced Computer Institute, Oman. Career At age 19, he created a simplified program for the nascent Omantel internet services to automate configuration processes. He subsequently completed his education and returned to Oman. He founded Knowledge Oman in 2008, a community platform run by volunteers to empower local communities by connecting them with information. Al Barwani works for the London Speaker Bureau as a motivational speaker. Knowledge Oman Knowledge Oman was created on April 1, 2008 to support the vision of the ruler Sultan Qaboos of Oman for transforming the country into a knowledge society. Events in the form of seminars, workshops and training have been organized by Knowledge Oman members in partnership with private and government organizations. Knowledge Oman was recognized with an International Standard Web Technology Award at the Oman Web Awards 2019, a Brand Leadership Award at the World Brand Congress in 2010, an award for outstanding contribution to the cause of education at the Human Resource Development Conference 2012, and a Golden Strategic Award for Culture at the PAN Arab Awards 2013. In 2018, Knowledge Oman introduced a project branded "Giving Back", which consolidates the charitable causes it has engaged in with its members since 2008. Gifts were distributed to children at the Sultan Qaboos University Hospital, IT infrastructure was set up at the Al Noor Blind Association, children's library computers were installed in Muscat and Musanah, visits were made to the elderly at the Rustaq care house, and educational materials were distributed to academic institutions such as the University of Nizwa, the German University of Technology and the Middle East College of Information Technology. Food distribution to needy families is organized during Ramadhan. Al Barwani left the role of president in 2014 but remains an adviser to newly appointed presidents and financially supports the organization's operations. Awards and recognition His projects have earned him multiple local, GCC and international awards. Creative Man of the Year – Asian Leadership Awards 2012 Young Achiever of the Year, The Asian Awards 2015 Tech Man of the Year at Oman Tech Awards 2016 Microsoft Most Valuable Professional (2007–2017) Global Knowledge Management Leadership Award at the 26th World HRD Congress 2018 Awarded "Outstanding Entrepreneur who believes in the Spirit of Giving" at the Global Giving Awards 2018 Al Barwani's work has been recognized in the 2019 publication Leaders of Oman, by the Parisian communication and publication firm Omnia International.
He was also featured in Jawharat Oman as one of the top 40 achievers in the Sultanate of Oman in 2011. He was also nominated for the top 50 world technology leaders list during the Internet conference, Intercon 2019. References External links Official Website Knowledge Oman Tariq Al Barwani at Swinburne University of Technology Alumni Living people 1979 births Omani businesspeople Motivational speakers People from Muscat, Oman
172787
https://en.wikipedia.org/wiki/Joe%20Ossanna
Joe Ossanna
Joseph Frank Ossanna, Jr. (December 10, 1928 in Detroit, Michigan – November 28, 1977 in Morristown, New Jersey) worked as a member of the technical staff at the Bell Telephone Laboratories in Murray Hill, New Jersey. He became actively engaged in the software design of Multics (Multiplexed Information and Computing Service), a general-purpose operating system used at Bell. Education and career Ossanna received his Bachelor of Engineering (B.S.E.E.) from Wayne State University in 1952. At Bell Telephone Labs, Ossanna was concerned with low-noise amplifier design, feedback amplifier design, satellite look-angle prediction, mobile radio fading theory, and statistical data processing. He was also concerned with the operation of the Murray Hill Computation Center and was actively engaged in the software design of Multics. After learning how to program the PDP-7 computer, Ken Thompson, Dennis Ritchie, Joe Ossanna, and Rudd Canaday began to program the operating system that was designed earlier by Thompson (Unics, later named Unix). After writing the file system and a set of basic utilities, and assembler, a core of the Unix operating system was established. Doug McIlroy later wrote, "Ossanna, with the instincts of a motor pool sergeant, equipped our first lab and attracted the first outside users." When the team got a Graphic Systems CAT phototypesetter for making camera-ready copy of professional articles for publication and patent applications, Ossanna wrote a version of nroff that would drive it. It was dubbed troff, for typesetter roff. So it was that in 1973 he authored the first version of troff for Unix entirely written in PDP-11 assembly language. However, two years later, Ossanna re-wrote the code in the C programming language. He had planned another rewrite which was supposed to improve its usability but this work was taken over by Brian Kernighan. Ossanna was a member of the Association for Computing Machinery, Sigma Xi, and Tau Beta Pi. He died as a consequence of heart disease. Sometimes he is described as having died in a car accident, but this is a mistake. Selected publications Bogert, Bruce P.; Ossanna, Joseph F., "The heuristics of cepstrum analysis of a stationary complex echoed Gaussian signal in stationary Gaussian noise", IEEE Transactions on Information Theory, v.12, issue 3, July 19, 1966, pp. 373 – 380 Ossanna, Joseph F.; Kernighan, Brian W., Troff user's manual, UNIX Vol. II, W. B. Saunders Company, March 1990 Kernighan, B W; Lesk, M E; Ossanna, J F, Jr., Document preparation, in UNIX:3E system readings and applications. Volume I: UNIX:3E time-sharing system, Prentice-Hall, Inc., December 1986 Ossanna, Joseph F., "The current state of minicomputer software", AFIPS '72 (Spring): Proceedings of the May 16–18, 1972, spring joint computer conference, Publisher: ACM, May 1972 Ossanna, Joseph F., "Identifying terminals in terminal-oriented systems", Proceedings of the ACM second symposium on Problems in the optimizations of data communications systems, Publisher: ACM, January 1971 Ossanna, J. F.; Saltzer, J. H., "Technical and human engineering problems in connecting terminals to a time-sharing system", AFIPS '70 (Fall): Proceedings of the November 17–19, 1970, fall joint computer conference, Publisher: ACM, November 1970 Ossanna, J. F.; Mikus, L. E.; Dunten, S. 
D., "Communications and input/output switching in a multiplex computing system", AFIPS '65 (Fall, part I): Proceedings of the November 30—December 1, 1965, fall joint computer conference, part I, Publisher: ACM, November 1965 References 1928 births 1977 deaths Unix people Troff Wayne State University alumni Multics people
52067611
https://en.wikipedia.org/wiki/2017%20in%20Bellator%20MMA
2017 in Bellator MMA
2017 in Bellator MMA was the tenth year in the history of Bellator MMA, a mixed martial arts promotion based in the United States. Bellator held 26 events in 2017. Background This year, Bellator announced their first MMA event to be held in New York: Bellator NYC. The event would also mark the Bellator debuts of former UFC commentator Mike Goldberg, calling his first card on Spike TV since 2011 (which was the televised portion of Bellator NYC, referred to as Bellator 180), and former Strikeforce commentator Mauro Ranallo. Bellator 170 Bellator 170: Ortiz vs. Sonnen took place on January 21, 2017, at The Forum in Inglewood, California. The event aired live in prime time on Spike TV, drawing 1.4 million viewers. Background On October 18, 2016, it was announced that former UFC Light Heavyweight Champion Tito Ortiz and UFC veteran Chael Sonnen would headline this event. Sonnen owns career wins over the likes of Yushin Okami, Nate Marquardt, Brian Stann, former UFC Light Heavyweight Champion Maurício Rua and former UFC Middleweight Champion Michael Bisping. Ortiz is a UFC Hall of Famer and one of the most decorated champions in UFC history. "The Huntington Beach Bad Boy" signed with Bellator in 2013 and has gone 2–1 with the promotion. He has wins over the likes of Alexander Shlemenko, Stephan Bonnar, Wanderlei Silva, Ken Shamrock and Forrest Griffin. A women's bout between Rebecca Ruth and Colleen Schneider was initially announced for this card. However, the bout never came to fruition for undisclosed reasons and Ruth was replaced by Chrissie Daniels Results Bellator 171 Bellator 171: Guillard vs. Njokuani was held on January 27, 2017, at the Kansas Star Casino in Mulvane, Kansas. The event aired live in prime time on Spike TV. Background The event was headlined by catchweight at 179 pounds bout between UFC veteran Melvin Guillard and Chidi Njokuani. The co-main event featured a lightweight bout between local favorite David Rickels and Aaron Derrow. Additionally, prospects and former collegiate heavyweight wrestling stars Jarod Trice and Tyrell Fortune were featured on the undercard. Trice, a three-time All-American wrestler at Central Michigan University, faced Kevin Woltkamp. Fortune, a former NCAA Division 2 champion, faced Will Johnson. Results Bellator 172 Bellator 172: Thomson vs. Pitbull aired on Saturday, February 18, 2017, at the SAP Center in San Jose, California. The event aired live in prime time on Spike TV. Background The event was scheduled to be headlined by heavyweight legend Fedor Emelianenko against former UFC veteran Matt Mitrione. Emelianenko retired in 2012, but ultimately returned to the sport in 2015. This was to be Emelianenko's first fight in the U.S. since 2011. Mitrione, a 38-year-old former NFL player has scored back-to-back knockout victories since joining Bellator over Carl Seumanutafa and Oli Thompson. Hours before the event, the fight was cancelled due to Mitrione becoming ill. As a result, Josh Thomson vs. Patricky Pitbull was moved up to the main event. Results Bellator 173 Bellator 173: McGeary vs. McDermott was held on February 24, 2017, at the SSE Arena in Belfast, Northern Ireland. The event aired live in prime time on Spike TV. Background The event was originally scheduled to be headlined by a bout between Liam McGeary and Chris Fields. However, on February 20, it was announced that Fields had to withdraw from the match due to injury. The initial replacement for Fields was announced as Bellator newcomer Vladimir Filipovic. 
However, Filipovic also was pulled from the bout due to visa issues. McGeary eventually faced Bellator newcomer Brett McDermott. The event was a dual promotion event as Bellator's card was co-promoted with BAMMA 28. Results Bellator 174 Bellator 174: Coenen vs. Budd was held on March 3, 2017, at the WinStar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV. Background The main event featured Marloes Coenen versus Julia Budd for the inaugural Bellator women's featherweight title. This was the first time a woman's bout has headlined a Bellator card. Kendall Grove was originally scheduled to face Chris Honeycutt at this event. However, Honeycutt was removed from the bout on February 28 and replaced by three-time UFC veteran and Bellator newcomer Mike Rhodes. Rhodes then failed to make weight and the fight was cancelled. Joe Taimanglo was scheduled to face Steve Garcia on this card. However, the fight was removed from the card after Taimanglo missed weight. Results Bellator 175 Bellator 175: Rampage vs. King Mo 2 was held on March 31, 2017, at the Allstate Arena in Rosemont, Illinois. The event aired live in prime time on Spike TV. Background The event was headlined by Heavyweight bout between former UFC Light Heavyweight Champion Quinton Jackson against former Strikeforce Light Heavyweight Champion Muhammed Lawal. At the weigh ins, Emmanuel Sanchez missed the 146-pound featherweight limit, coming in at 149.5 pounds. The fight was changed to a catchweight. Results Bellator 176 Bellator 176: Carvalho vs. Manhoef 2 took place on April 8, 2017, at the Pala Alpitour in Torino, Italy. The event aired on Spike TV. Background A Middleweight world title fight rematch pitting Rafael Carvalho against Melvin Manhoef served as the main event of Bellator 176. The event was announced by the company in February 2017. It was the second time Bellator MMA held an event at the Pala Alpitour in Torino, Italy. Like the previous card, this event featured both MMA and kickboxing bouts. Results Bellator 177 Bellator 177: Dantas vs. Higo took place on April 14, 2017, at Budapest Sports Arena in Budapest, Hungary . The event aired on Spike TV. Background This event marked Bellator's second event in Hungary. The event featured both MMA and kickboxing bouts. The main event was originally scheduled to feature Eduardo Dantas against Darrion Caldwell, but Caldwell withdrew due to injury. Leandro Higo stepped in as a replacement. Due to the short notice, the fight was changed from a Bantamweight title bout to a non-title contest at a catchweight of 139 pounds. Bellator Kickboxing 6 was headlined by a Welterweight world title rematch featuring Zoltán Laszák against Karim Ghajji. Additionally, two-time Glory Featherweight Champion Gabriel Varga made his Bellator Kickboxing debut. Results Bellator 178 Bellator 178: Straus vs. Pitbull 4 took place on April 21, 2017, at Mohegan Sun Arena in Uncasville, Connecticut . The event aired live on Spike TV. Background Bellator returned to Mohegan Sun with a Featherweight world title fight pitting Daniel Straus against Patricio "Pitbull" Freire as the main event of Bellator 178. Straus and Freire have fought three times before; Freire won one by submission and once by decision, while Straus most recently won by decision. A middleweight bout between Ed Ruth and Aaron Goodwine was originally scheduled for this card, but failed to materialize. Ruth instead faced David Mundell. Results Bellator 179 Bellator 179: MacDonald vs. 
Daley took place on May 19, 2017, at the SSE Arena in London, England. The event aired on Spike TV. Background Bellator returned to London with a welterweight fight pitting Paul Daley against Rory MacDonald serving as the main event of Bellator 179. Stav Economou was originally scheduled to face Karl Etherington in heavyweight bout. However, the bout was cancelled and Economous faced Dan Konecke. Neil Grove and Łukasz Parobiec were expected to fight in a Heavyweight bout. However, the fight was cancelled for undisclosed reasons. Michael Page was scheduled to face Derek Anderson as the co-main event. However, the fight was removed from the card after Page suffered a knee injury and neck injury. Results Bellator Monster Energy Fight Series: Charlotte Bellator Monster Energy Fight Series: Charlotte took place on May 20, 2017, at the Charlotte Motor Speedway in Concord, North Carolina Background On May 1, 2017, Bellator announced a special four-fight card that took place at the Monster Energy NASCAR All-Star Race XXXIII. The goal of the event was to scout new talent for a prospective contract. Unlike all previous Bellator events, this event was not televised, as this event took place at a NASCAR race, where Fox Sports, which holds rights to a rival MMA promotion, holds video rights for events at the Charlotte Motor Speedway during the event. Results Bellator NYC/Bellator 180 Bellator NYC: Sonnen vs. Silva and Bellator 180 took place on June 24, 2017, at Madison Square Garden in New York City. The event aired both Spike TV and on PPV. Background A grudge match between rivals Chael Sonnen and Wanderlei Silva served as the main event. The co-main event was a heavyweight bout between Fedor Emelianenko vs. Matt Mitrione. The pairing were supposed to meet at Bellator 172 but Mitrione had to be hospitalized due to kidney stones. Debuting former UFC fighter Ryan Bader was initially scheduled to face Muhammed Lawal on this card. However, Lawal pulled out of the fight due to an injury and Bader instead faced Light Heavyweight champion Phil Davis in a rematch. Bader and Davis first met at UFC on Fox: Gustafsson vs. Johnson on January 24, 2015, with Bader winning by split decision. A fight between Keri Anne Melendez and Sadee Monseratte Williams was scheduled but cancelled after Melendez pulled out due to injury. The PPV portion of the MSG card was referred to as Bellator NYC. The undercard, airing on Spike, was Bellator 180. Results Bellator 181 Bellator 181: Girtz vs. Campos 3 took place on July 14, 2017, at the WinStar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV. Background The main event featured Brandon Girtz against Derek Campos in a rubber match. Campos won the pair's initial meeting in June 2013 at Bellator 96 by unanimous decision. Girtz avenged the loss in November 2015 at Bellator 146, where he won by knockout in the first round. Former UFC fighter Valérie Létourneau was expected to make her Bellator debut on this card against Emily Ducote. However, on July 10, Létourneau withdrew from the bout due to injury and was replaced by Jessica Middleton. Results Bellator Monster Energy Fight Series: Bristol Bellator Monster Energy Fight Series: Bristol was the second installment of the series and took place on August 19, 2017, at the Bristol Motor Speedway, in Bristol, Tennessee. Again, the fights were not televised because of television conflicts with NASCAR's rights holder (NBC, which airs the Professional Fighters League, aired a PFL contest June 30). 
Results Bellator 182 Bellator 182: Koreshkov vs. Njokuani took place on August 25, 2017, at the Turning Stone Resort & Casino in Verona, New York. The event aired live in prime time on Spike TV. Background The main event featured former Bellator Welterweight Champion Andrey Koreshkov versus Chidi Njokuani. It was originally scheduled as a welterweight bout, but Njokuani missed weight so the bout was contested at a catchweight of 175 pounds. The co-main event of Brennan Ward against Fernando Gonzalez was set to be a catchweight bout of 178 pounds. However, Gonzalez weighed in at 180 pounds. Gabby Holloway was originally scheduled to face Talita Nogueira on this card. However, on August 1, Holloway withdrew from the bout and was replaced by Amanda Bell. Results Bellator 183 Bellator 183: Henderson vs. Pitbull took place on September 23, 2017, at the SAP Center in San Jose, California. The event aired live in prime time on Spike TV. Background The main event featured a Lightweight bout between Benson Henderson and Patricky Freire. Also occurring on the show was Bellator Kickboxing 7 headlined by featherweight title bout between Kevin Ross and Domenico Lomurno. Results Bellator 184 Bellator 184: Dantas vs. Caldwell took place on October 6, 2017, at the WinStar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV. Results Bellator Monster Energy Fight Series: Talladega Bellator Monster Energy Fight Series: Talladega was the third installment of the series held on October 13, 2017, at the Talladega Superspeedway, in Lincoln, Alabama. Again, the fights were not televised because of television conflicts with NASCAR's rights holder (NBC, which airs the Professional Fighters League, aired a PFL contest June 30 during the Coca-Cola Firecracker 250 at Daytona). Results Bellator 185 Bellator 185: Mousasi vs. Shlemenko took place on October 20, 2017, at Mohegan Sun Arena in Uncasville, Connecticut. The event aired live on Spike TV. Background Bellator 185 marked the debut of heralded middleweight fighter Gegard Mousasi. Having competed for the UFC for the last four years, Mousasi opted to sign with Bellator MMA when his contract came up. Muhammed Lawal was scheduled to face Liam McGeary in a Light Heavyweight bout in the co-main event. However, on October 2, he pulled out of the fight due to an undisclosed injury. He was replaced by Bubba McDaniel. The following week, McGeary pulled out of the fight due to a thumb injury. Brennan Ward was scheduled to face David Rickels in a welterweight bout on the main card. However, on October 16, Ward was removed from the card due to injury and Rickels was pulled from the card as a result. Javier Torres was scheduled to face Neiman Gracie on this card, but pulled out due to injury. He was replaced with welterweight Zak Bucia. Results Bellator 186 Bellator 186: Bader vs. Vassell took place on November 3, 2017, at the Bryce Jordan Center in University Park, Pennsylvania. The event aired live in prime time on Spike TV. Background In the Bellator 186 main event Ryan Bader made the first defense of his Light Heavyweight title against Linton Vassell . The co-main event featured Ilima-Lei Macfarlane against Emily Ducote for the inaugural Bellator Women's Flyweight title. Results Bellator 187 Bellator 187: McKee vs. Moore took place on November 10, 2017, at the 3Arena in Dublin, Ireland. The event aired live in prime time on Spike TV. 
Background James Gallagher was expected to main event Bellator 187 against Jeremiah Labiano; however, on October 11, Gallagher withdrew from the bout due to injury. Labiano was moved to face Noad Lahat at Bellator 188 on November 16. A featherweight match-up between A.J. McKee and Brian Moore served as the new main event. The event was a co-promotion between Bellator MMA and BAMMA with BAMMA 32 taking place the same night. Results Bellator 188 Bellator 188: Lahat vs. Labiano took place on November 16, 2017, at the Menora Mivtachim Arena in Tel Aviv, Israel. The event aired on Spike TV. Background The card was originally headlined by a rematch between champion Patrício Freire and Daniel Weichel for the Bellator Featherweight Championship. The pair previously met in June 2015 at Bellator 138 with Freire winning by knockout. On November 12, it was announced Freire had to withdraw due to a knee injury. The main event instead featured a match between Noad Lahat and Jeremiah Labiano. The card also featured Dutch kickboxing standout Denise Kielholtz's second MMA fight. Results Bellator: Monster Energy Fight Series: Homestead Bellator: Monster Energy Fight Series: Homestead was the fourth installment of the series held on November 19, 2017, at the Homestead-Miami Speedway, in Homestead, Florida. Again, the fights were not televised because of television conflicts with NASCAR's rights holder (NBC, which airs the Professional Fighters League, aired a PFL contest June 30 during the Coca-Cola Firecracker 250 at Daytona). Results Bellator 189 Bellator 189: Budd vs. Blencowe 2 took place December 1, 2017 at the WinStar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV. Background In the Bellator 189 main event, women's featherweight champion Julia Budd faced Arlene Blencowe in a rematch. The pair originally fought at Bellator 162 with Budd winning via majority decision. The co-main event was a middleweight bout between Rafael Lovato Jr. and Chris Honeycutt. Results Bellator 190 Bellator 190: Carvalho vs. Sakara took place on December 9, 2017, at the Nelson Mandela Forum in Florence, Italy. The event aired on Spike TV. Background A Middleweight world title fight match pitting Rafael Carvalho against Alessio Sakara will served as the main event of Bellator 190. Occurring also on this card is Bellator Kickboxing 8 headlined by Lightweight title bout pitting Giorgio Petrosyan against Youdwicha. Results Bellator 191 Bellator 191: McDonald vs. Ligier took place on December 15, 2017, at the Metro Radio Arena in Newcastle, England. The event aired on tape delay in prime time on Spike TV. Background The main event featured the Bellator debut of former World Extreme Cagefighting and Ultimate Fighting Championship bantamweight contender Michael McDonald. The event was a co-promotion between Bellator MMA and BAMMA 33. Results References External links Bellator 2017 in mixed martial arts Bellator MMA events
10341022
https://en.wikipedia.org/wiki/Bill%20Fisk
Bill Fisk
William G. Fisk (November 5, 1916 – March 28, 2007) was an American football end and defensive end who played in the National Football League (NFL) and All-America Football Conference (AAFC) from 1940 to 1948. Early years Born in Los Angeles, California, Fisk prepped at Alhambra High School and played college football at the University of Southern California (USC). He was a member of the Trojans' 1938 Rose Bowl-winning team, and was voted Most Inspirational Player on the 1939 USC Trojans football team, which won a national championship. He was one of six Trojans selected for the 1940 College All-Star Game in Chicago. Professional football career Fisk played for the NFL's Detroit Lions and the AAFC's San Francisco 49ers and Los Angeles Dons between 1940 and 1948. He was drafted in the third round of the 1940 NFL Draft by Detroit. Later years Fisk was an assistant coach of the USC Trojans between 1949 and 1956 under head coaches Jeff Cravath and Jess Hill. After coaching, Fisk worked in aerospace. His son Bill, Jr. was an offensive guard on USC's 1962 national championship team, and was named an All-American in 1964. Fisk also served for a time as head coach at Mt. San Antonio College. References External links ESPN obituary 1916 births 2007 deaths American football defensive ends American football ends Detroit Lions players Los Angeles Dons players San Francisco 49ers (AAFC) players USC Trojans football players USC Trojans football coaches Mt. SAC Mounties football coaches Players of American football from Los Angeles Sports coaches from Los Angeles
32925572
https://en.wikipedia.org/wiki/Tommy%20Milone
Tommy Milone
Tomaso Anthony Milone ( ; born February 16, 1987) is an American professional baseball pitcher who is a free agent. He previously played in Major League Baseball (MLB) for the Washington Nationals, Oakland Athletics, Minnesota Twins, Milwaukee Brewers, New York Mets, Seattle Mariners, Baltimore Orioles, Atlanta Braves, and Toronto Blue Jays. He made his MLB debut in 2011. Amateur career Born and raised in Santa Clarita, California, Milone attended Saugus High School, where he was a standout as a pitcher and hitter. Milone won All-State honors twice, and was the Foothill Player of the Year his senior season, in which he hit .474 and threw a perfect game, finishing the year with a 9-2 record and a 1.04 ERA. Milone then attended the University of Southern California, playing for the USC Trojans baseball team and pursued a degree in public policy and development. As a freshman, Milone was named the number two starter in the rotation and went 7-4 with a 4.94 ERA in 16 starts. In his sophomore season, Milone struggled, going 3-7 with a 6.17 ERA. His junior season would prove to be his best, Milone went 6-6 with a 3.51 ERA and was the number one starter in the rotation. In 2007, Milone played collegiate summer baseball in the Cape Cod Baseball League with the Chatham A's, winning the B.F.C. Whitehouse Award, given to the best pitcher in the league, after finishing the summer 6-1 with a 2.92 ERA. Professional career Washington Nationals Milone was drafted by the Washington Nationals in the 10th round of the 2008 Major League Baseball Draft. Milone made his major league debut on September 3, 2011, against the New York Mets. Milone struck out Angel Pagan of the New York Mets for his first career strikeout. Later in the same game, he hit a three-run home run on the first pitch of his first Major League at bat, becoming the 27th player, and only the eighth pitcher, in major league history to do so. He left the game after pitching four and one-third innings. Oakland Athletics On December 23, 2011, Milone was traded with A. J. Cole, Derek Norris and Brad Peacock to the Oakland Athletics for Gio González and Robert Gilliam. Milone started the regular season in the third starting rotation spot behind Brandon McCarthy and Bartolo Colón. Milone was the only starting pitcher in the A's rotation to last all season without getting injured and had started the most games for the A's during the 2012 season. He pitched his first complete game of his career on June 20 defeating the Los Angeles Dodgers. Milone had started game 2 of the ALDS, but the A's had lost to a no decision in the bottom of the 9th inning. Milone finished the season with a 13–10 record and with 137 strikeouts and an ERA of 3.74 Milone was optioned to the Triple-A Sacramento River Cats on August 3, 2013. Milone finished the season with 12 wins in 28 games, 26 of them starts. Milone started the 2014 season in the A's rotation as the fifth starter. Despite owning a record of 6-3 and a 3.55 ERA in 16 starts, Milone was sent down to AAA. After his demotion he demanded a trade. Minnesota Twins On July 31, 2014, the Athletics traded Milone to the Minnesota Twins in exchange for outfielder Sam Fuld. Milone started in five games for the Twins before being shut down with a neck injury. Milone had a bounce back season in 2015 going 9-5 with a 3.92 ERA in 128.2 innings. Milone struggled in 2016 going 3–5 with a 5.12 ERA and after the season he declined being outrighted to Triple-A Rochester by electing free agency. 
Milwaukee Brewers On December 14, 2016, Milone signed a one-year, $1.25 million contract with the Milwaukee Brewers. He was designated for assignment on May 1, 2017, when the team purchased the contract of Rob Scahill. With the Brewers he was 1-0 with a 6.43 ERA. New York Mets On May 7, 2017, the New York Mets claimed Milone off waivers. With the Mets, he was 0-3 with an 8.56 ERA. In 2017, between the two teams, right-handed batters hit .348 against him, a higher batting average than they managed against any other MLB pitcher with 30 or more innings pitched. Second stint in Washington On December 20, 2017, the Washington Nationals signed Milone to a minor league contract, with an invite to spring training. On July 26, 2018, he was called up to take Stephen Strasburg's place in the rotation. Milone was reassigned to the bullpen on August 18. On September 4, Milone was outrighted off the roster. For the season he was 1-1 with a 5.81 ERA. He declared free agency on October 2, 2018. Seattle Mariners On December 6, 2018, Milone signed a minor league deal with the Seattle Mariners. He opened the 2019 season with the Tacoma Rainiers. On May 21, his contract was selected by the Mariners. Baltimore Orioles Milone signed a minor league deal with the Baltimore Orioles on February 13, 2020. On July 15, 2020, the Orioles purchased his contract, putting him on the 40-man roster for the shortened season. On July 21, 2020, Milone was named the Opening Day starter against the Red Sox, as John Means had arm fatigue. On July 24, 2020, he made his Orioles debut as the Opening Day starting pitcher, allowing 4 runs on 4 hits and 3 walks over 3 innings against the Boston Red Sox. Atlanta Braves On August 30, 2020, Milone was traded to the Atlanta Braves in exchange for AJ Graffanino and Greg Cullen. The Braves released Milone on September 30, 2020, right before their Wild Card Series game against the Cincinnati Reds. Toronto Blue Jays On February 25, 2021, the Toronto Blue Jays signed Milone to a minor league deal with an invitation to spring training. On April 4, 2021, the Blue Jays selected his contract and placed him on the active roster. On May 27, Milone was placed on the 60-day injured list with left shoulder inflammation. On August 11, 2021, Milone was released by the Blue Jays. Cincinnati Reds On August 24, 2021, Milone signed a minor league deal with the Cincinnati Reds. He was assigned to the Triple-A Louisville Bats. Milone made 3 starts for Louisville, going 0-2 with a 14.40 ERA and 10 strikeouts. On October 11, Milone elected free agency. Pitching repertoire Milone's four-seam fastball ranges from 85 to 87 mph, and he complements it with a cutter at the same speed, a curveball (75–79), and a changeup (79–81), as well as a rare two-seam fastball. Milone's repertoire against left-handed hitters tends to be fastball-cutter-curveball, while against right-handers it is fastball-changeup-cutter. He uses his changeup heavily in 2-strike counts against righties. His curve is his best swing-and-miss pitch, with a whiff rate of about 33%. Milone has shown good control early in his career, with a walk rate under 2 per 9 innings. Personal life Milone married Tina Sarnecki. They welcomed their first child, daughter Mia, in July 2016.
See also List of Major League Baseball players with a home run in their first major league at bat References External links USC Trojans bio 1987 births Living people American people of Italian descent Atlanta Braves players Baltimore Orioles players Baseball players from California Binghamton Rumble Ponies players Buffalo Bisons (minor league) players Chatham Anglers players Hagerstown Suns players Harrisburg Senators players Oakland Athletics players Major League Baseball pitchers Milwaukee Brewers players Minnesota Twins players New York Mets players People from Saugus, Santa Clarita, California Potomac Nationals players Rochester Red Wings players Sacramento River Cats players Seattle Mariners players Syracuse Chiefs players Tacoma Rainiers players Toronto Blue Jays players USC Trojans baseball players Vermont Lake Monsters players Washington Nationals players
56516812
https://en.wikipedia.org/wiki/20th%20century%20in%20science
20th century in science
Science advanced dramatically during the 20th century. There were new and radical developments in the physical, life and human sciences, building on the progress made in the 19th century. The development of post-Newtonian theories in physics, such as special relativity, general relativity, and quantum mechanics led to the development of nuclear weapons. New models of the structure of the atom led to developments in theories of chemistry and the development of new materials such as nylon and plastics. Advances in biology led to large increases in food production, as well as the elimination of diseases such as polio. A massive amount of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first airplane flight occurred in 1903, and by the end of the century large airplanes such as the Boeing 777 and Airbus A330 flew thousands of miles in a matter of hours. The development of the television and computers caused massive changes in the dissemination of information. Astronomy and space exploration A much better understanding of the evolution of the universe was achieved, its age (about 13.8 billion years) was determined, and the Big Bang theory on its origin was proposed and generally accepted. The age of the Solar System, including Earth, was determined, and it turned out to be much older than believed earlier: more than 4 billion years, rather than the 20 million years suggested by Lord Kelvin in 1862. The planets of the Solar System and their moons were closely observed via numerous space probes. Pluto was discovered in 1930 on the edge of the solar system, although in the early 21st century, it was reclassified as a dwarf planet (planetoid) instead of a planet proper, leaving eight planets. No trace of life was discovered on any of the other planets in the Solar System, although it remained undetermined whether some forms of primitive life might exist, or might have existed, somewhere. Extrasolar planets were observed for the first time. In 1969, Apollo 11 was launched towards the Moon and Neil Armstrong and Buzz Aldrin became the first persons from Earth to walk on another celestial body. That same year, Soviet astronomer Victor Safronov published his book Evolution of the protoplanetary cloud and formation of the Earth and the planets. In this book, almost all major problems of the planetary formation process were formulated and some of them solved. Safronov's ideas were further developed in the works of George Wetherill, who discovered runaway accretion. The Space Race between the United States and the Soviet Union gave a peaceful outlet to the political and military tensions of the Cold War, leading to the first human spaceflight with the Soviet Union's Vostok 1 mission in 1961, and man's first landing on another world—the Moon—with America's Apollo 11 mission in 1969. Later, the first space station was launched by the Soviet space program. The United States developed the first (and to date only) reusable spacecraft system with the Space Shuttle program, first launched in 1981. As the century ended, a permanent manned presence in space was being founded with the ongoing construction of the International Space Station. In addition to human spaceflight, unmanned space probes became a practical and relatively inexpensive form of exploration. 
The first orbiting space probe, Sputnik 1, was launched by the Soviet Union in 1957. Over time, a massive system of artificial satellites was placed into orbit around Earth. These satellites greatly advanced navigation, communications, military intelligence, geology, climate science, and numerous other fields. Also, by the end of the 20th century, unmanned probes had visited the Moon, Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, and various asteroids and comets. The Hubble Space Telescope, launched in 1990, greatly expanded our understanding of the Universe and brought brilliant images to TV and computer screens around the world. Biology and medicine Genetics was unanimously accepted and significantly developed. The structure of DNA was determined in 1953 by James Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins, followed by the development of techniques for reading DNA sequences, culminating in the launch of the Human Genome Project (not finished in the 20th century) and the cloning of the first mammal in 1996. The role of sexual reproduction in evolution was understood, and bacterial conjugation was discovered. The convergence of various sciences led to the formulation of the modern evolutionary synthesis (produced between 1936 and 1947), providing a widely accepted account of evolution. Placebo-controlled, randomized, blinded clinical trials became a powerful tool for testing new medicines. Antibiotics drastically reduced mortality from bacterial diseases as well as their prevalence. A vaccine was developed for polio, ending a worldwide epidemic. Effective vaccines were also developed for a number of other serious infectious diseases, including influenza, diphtheria, pertussis (whooping cough), tetanus, measles, mumps, rubella (German measles), chickenpox, hepatitis A, and hepatitis B. Epidemiology and vaccination led to the eradication of the smallpox virus in humans. X-rays became a powerful diagnostic tool for a wide spectrum of diseases, from bone fractures to cancer. In the 1960s, computerized tomography was invented. Other important diagnostic tools developed were sonography and magnetic resonance imaging. The discovery and supplementation of vitamins virtually eliminated scurvy and other vitamin-deficiency diseases from industrialized societies. New psychiatric drugs were developed. These include antipsychotics for treating hallucinations and delusions, and antidepressants for treating depression. The role of tobacco smoking in the causation of cancer and other diseases was proven during the 1950s (see British Doctors Study). New methods for cancer treatment, including chemotherapy, radiation therapy, and immunotherapy, were developed. As a result, cancer could often be cured or placed in remission. The development of blood typing and blood banking made blood transfusion safe and widely available. The invention and development of immunosuppressive drugs and tissue typing made organ and tissue transplantation a clinical reality. New methods for heart surgery were developed, including pacemakers and artificial hearts. Cocaine/crack and heroin were found to be dangerous addictive drugs, and their wide usage was outlawed; mind-altering drugs such as LSD and MDMA were discovered and later outlawed. In many countries, a war on drugs caused prices to soar 10–20 times higher, leading to profitable black-market drug dealing, and in some countries (e.g. the United States) to prison inmate sentences being 80% related to drug use by the 1990s.
Contraceptive drugs were developed, which reduced population growth rates in industrialized countries and weakened the taboo surrounding premarital sex throughout many Western countries. The development of medical insulin during the 1920s helped raise the life expectancy of diabetics to three times what it had been earlier. Vaccines, hygiene and clean water improved health and decreased mortality rates, especially among infants and the young. Notable diseases An influenza pandemic, the Spanish Flu, killed anywhere from 20 to 100 million people between 1918 and 1919. A new virus, the Human Immunodeficiency Virus (HIV), arose in Africa and subsequently killed millions of people throughout the world. HIV leads to a syndrome called Acquired Immunodeficiency Syndrome, or AIDS. Treatments for HIV remained inaccessible to many people living with AIDS and HIV in developing countries, and a cure has yet to be discovered. Because of increased life spans, the prevalence of cancer, Alzheimer's disease, Parkinson's disease, and other diseases of old age increased slightly. Sedentary lifestyles, due to labor-saving devices and technology, along with the increase in home entertainment and technology such as television, video games, and the internet, contributed to an "epidemic" of obesity, at first in the rich countries, but by the end of the 20th century spreading to the developing world. Chemistry In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, in which electrons orbit a dense, massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences in agriculture. The Haber process, or Haber–Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for production of fertilizer and munitions. Food production for half the world's current population depends on this method of producing fertilizer. Haber, along with Max Born, proposed the Born–Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I. In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. Leo Baekeland invented Bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan, who had studied in Europe under Walther Nernst and Max Planck, measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value (the electron's charge), thus confirming that all electrons have the same charge and mass. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, and providing the first direct photoelectric support for Planck's constant. In 1923 Millikan was awarded the Nobel Prize for Physics. In 1909, S. P. L. Sørensen introduced the pH concept and developed methods for measuring acidity.
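Two of the quantitative ideas just mentioned can be stated compactly. In modern notation, the overall Haber–Bosch reaction and Sørensen's pH scale (here written in terms of the hydrogen-ion activity, approximated by its concentration) are usually expressed as

\[
\mathrm{N_2 + 3\,H_2 \rightleftharpoons 2\,NH_3},
\qquad
\mathrm{pH} = -\log_{10} a_{\mathrm{H^+}} \approx -\log_{10}\left[\mathrm{H^+}\right],
\]

so each tenfold increase in hydrogen-ion concentration lowers the pH by one unit.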
In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge rather than atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together most of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye develops the concept of molecular dipole to describe asymmetric charge distribution in some molecules. In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, where electrons exist only in strictly defined circular orbits around the nucleus similar to rungs on a ladder. The Bohr Model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus similar to the planets orbiting the Sun (except that the orbits are not planar) - the gravitational force of the solar system is mathematically akin to the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons. In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduces concept of atomic number to fix inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry was in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms which have different atomic weights but which are indistinguishable chemically. He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry. In 1916, Gilbert N. Lewis published his seminal article "The Atom of the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. Lewis introduced the "electron dot diagrams" in this paper to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book. Lewis in 1923 developed the electron pair theory of acids and base: Lewis redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom; bases were, of course, electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, first modern treatise on chemical thermodynamics. The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold; while in coordination chemistry, Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick. 
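As a concise illustration of two results described above, Bragg's law of X-ray diffraction relates the wavelength λ, the spacing d between crystal planes and the diffraction angle θ, while the Bohr model predicts the allowed energy levels of the hydrogen atom:

\[
n\lambda = 2d\sin\theta,
\qquad
E_n = -\frac{13.6\ \text{eV}}{n^2}, \quad n = 1, 2, 3, \ldots
\]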
Quantum chemistry Some date the birth of quantum chemistry to the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler and Fritz London is often recognised as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree and Vladimir Aleksandrovich Fock, to cite a few. Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems. The situation around 1930 was summed up by Paul Dirac, who remarked that the underlying physical laws needed for chemistry were by then completely known, but that the resulting equations were far too complicated to be solved exactly. Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions. A milestone in quantum chemistry was the seminal 1951 paper of Clemens C. J. Roothaan on the Roothaan equations. It opened the way to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals which were computed on the most advanced computers of the time. In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer or Edward Teller). Glenn T. Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him, along with Albert Einstein and Yuri Oganessian, one of the few people for whom a chemical element was named during their lifetime. Molecular biology and biochemistry By the mid 20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features for biologically relevant molecules, they were, until the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods. This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double helical structure of DNA by constructing models constrained by and informed by the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin. This discovery led to an explosion of research into the biochemistry of life. In the same year, the Miller–Urey experiment, conducted by Stanley Miller and Harold Urey, demonstrated that basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth.
Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes in the laboratory under controlled conditions. In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used in the laboratory to manipulate DNA. PCR could be used to synthesize specific pieces of DNA and made possible the sequencing of the DNA of organisms, which culminated in the huge Human Genome Project. An important piece of the double helix puzzle was solved by one of Pauling's students, Matthew Meselson, and Frank Stahl; the result of their collaboration (the Meselson–Stahl experiment) has been called "the most beautiful experiment in biology". They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen atoms are a component of DNA, they could be isotopically labelled and therefore tracked during replication in bacteria. Late 20th century In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations. In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions. In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions including the Sharpless epoxidation, Sharpless asymmetric dihydroxylation, and Sharpless oxyamination. In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R. Buckminster Fuller. In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology. In 1994, Robert A. Holton and his group achieved the first total synthesis of Taxol. In 1995, Eric Cornell and Carl Wieman produced the first Bose–Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale. Earth science In 1912 Alfred Wegener proposed the theory of continental drift. This theory suggests that the shapes of continents and matching coastline geology between some continents indicate that they were joined together in the past and formed a single landmass known as Pangaea; thereafter they separated and drifted like rafts over the ocean floor, eventually reaching their present positions. Additionally, the theory of continental drift offered a possible explanation for the formation of mountains; plate tectonics built on the theory of continental drift. Unfortunately, Wegener provided no convincing mechanism for this drift, and his ideas were not generally accepted during his lifetime. Arthur Holmes accepted Wegener's theory and provided a mechanism, mantle convection, to cause the continents to move. However, it was not until after the Second World War that new evidence started to accumulate that supported continental drift. There followed a period of 20 extremely exciting years in which the theory of continental drift developed from being believed by a few to being the cornerstone of modern geology. Beginning in 1947 research found new evidence about the ocean floor, and in 1960 Bruce C. Heezen published the concept of mid-ocean ridges. Soon after this, Robert S. Dietz and Harry H.
Hess proposed that the oceanic crust forms as the seafloor spreads apart along mid-ocean ridges in seafloor spreading. This was seen as confirmation of mantle convection and so the major stumbling block to the theory was removed. Geophysical evidence suggested lateral motion of continents and that oceanic crust is younger than continental crust. This geophysical evidence also spurred the study of paleomagnetism, the record of the orientation of the Earth's magnetic field preserved in magnetic minerals. British geophysicist S. K. Runcorn suggested the concept of paleomagnetism from his finding that the continents had moved relative to the Earth's magnetic poles. Tuzo Wilson, who was a promoter of the sea floor spreading hypothesis and continental drift from the very beginning, added the concept of transform faults to the model, completing the classes of fault types necessary to make the mobility of the plates on the globe function. A symposium on continental drift held at the Royal Society of London in 1965 must be regarded as the official start of the acceptance of plate tectonics by the scientific community. The abstracts from the symposium were issued as Blackett, Bullard and Runcorn (1965). At this symposium, Edward Bullard and co-workers showed with a computer calculation how the continents along both sides of the Atlantic would best fit to close the ocean, which became known as the famous "Bullard's Fit". By the late 1960s the weight of the available evidence had made continental drift the generally accepted theory. Other theories of the causes of climate change fared no better. The principal advances were in observational paleoclimatology, as scientists in various fields of geology worked out methods to reveal ancient climates. Wilmot H. Bradley found that annual varves of clay laid down in lake beds showed climate cycles. Andrew Ellicott Douglass saw strong indications of climate change in tree rings. Noting that the rings were thinner in dry years, he reported climate effects from solar variations, particularly in connection with the 17th-century dearth of sunspots (the Maunder Minimum) noticed previously by William Herschel and others. Other scientists, however, found good reason to doubt that tree rings could reveal anything beyond random regional variations. The value of tree rings for climate study was not solidly established until the 1960s. Through the 1930s the most persistent advocate of a solar-climate connection was astrophysicist Charles Greeley Abbot. By the early 1920s, he had concluded that the solar "constant" was misnamed: his observations showed large variations, which he connected with sunspots passing across the face of the Sun. He and a few others pursued the topic into the 1960s, convinced that sunspot variations were a main cause of climate change. Other scientists were skeptical. Nevertheless, attempts to connect the solar cycle with climate cycles were popular in the 1920s and 1930s. Respected scientists announced correlations that they insisted were reliable enough to make predictions. Sooner or later, every prediction failed, and the subject fell into disrepute. Meanwhile, Milutin Milankovitch, building on James Croll's theory, improved the tedious calculations of the varying distances and angles of the Sun's radiation as the Sun and Moon gradually perturbed the Earth's orbit. Some observations of varves (layers seen in the mud covering the bottom of lakes) matched the prediction of a Milankovitch cycle lasting about 21,000 years.
However, most geologists dismissed the astronomical theory, for they could not fit Milankovitch's timing to the accepted sequence, which had only four ice ages, all of them much longer than 21,000 years. In 1938 Guy Stewart Callendar attempted to revive Arrhenius's greenhouse-effect theory. Callendar presented evidence that both temperature and the carbon dioxide level in the atmosphere had been rising over the past half-century, and he argued that newer spectroscopic measurements showed that the gas was effective in absorbing infrared in the atmosphere. Nevertheless, most scientific opinion continued to dispute or ignore the theory. Another clue to the nature of climate change came in the mid-1960s from analysis of deep-sea cores by Cesare Emiliani and analysis of ancient corals by Wallace Broecker and collaborators. Rather than four long ice ages, they found a large number of shorter ones in a regular sequence. It appeared that the timing of ice ages was set by the small orbital shifts of the Milankovitch cycles. While the matter remained controversial, some began to suggest that the climate system is sensitive to small changes and can readily be flipped from a stable state into a different one. Scientists meanwhile began using computers to develop more sophisticated versions of Arrhenius's calculations. In 1967, taking advantage of the ability of digital computers to integrate absorption curves numerically, Syukuro Manabe and Richard Wetherald made the first detailed calculation of the greenhouse effect incorporating convection (the "Manabe-Wetherald one-dimensional radiative-convective model"). They found that, in the absence of unknown feedbacks such as changes in clouds, a doubling of carbon dioxide from the current level would result in approximately a 2 °C increase in global temperature. By the 1960s, aerosol pollution ("smog") had become a serious local problem in many cities, and some scientists began to consider whether the cooling effect of particulate pollution could affect global temperatures. Scientists were unsure whether the cooling effect of particulate pollution or the warming effect of greenhouse gas emissions would predominate, but regardless, began to suspect that human emissions could be disruptive to climate in the 21st century if not sooner. In his 1968 book The Population Bomb, Paul R. Ehrlich wrote, "the greenhouse effect is being enhanced now by the greatly increased level of carbon dioxide... [this] is being countered by low-level clouds generated by contrails, dust, and other contaminants... At the moment we cannot predict what the overall climatic results will be of our using the atmosphere as a garbage dump." A 1968 study by the Stanford Research Institute for the American Petroleum Institute noted: In 1969, NATO was the first candidate to deal with climate change on an international level. It was planned then to establish a hub of research and initiatives of the organization in the civil area, dealing with environmental topics such as acid rain and the greenhouse effect. The suggestion of US President Richard Nixon was not very successful with the administration of German Chancellor Kurt Georg Kiesinger. But the topics and the preparation work done on the NATO proposal by the German authorities gained international momentum (see, e.g., the Stockholm United Nations Conference on the Human Environment of 1972), as the government of Willy Brandt started to apply them in the civil sphere instead.
Also in 1969, Mikhail Budyko published a theory on the ice-albedo feedback, a foundational element of what is today known as Arctic amplification. The same year a similar model was published by William D. Sellers. Both studies attracted significant attention, since they hinted at the possibility of a runaway positive feedback within the global climate system. In the early 1970s, evidence that aerosols were increasing worldwide encouraged Reid Bryson and some others to warn of the possibility of severe cooling. Meanwhile, the new evidence that the timing of ice ages was set by predictable orbital cycles suggested that the climate would gradually cool, over thousands of years. For the century ahead, however, a survey of the scientific literature from 1965 to 1979 found 7 articles predicting cooling and 44 predicting warming (many other articles on climate made no prediction); the warming articles were cited much more often in subsequent scientific literature. Several scientific panels from this time period concluded that more research was needed to determine whether warming or cooling was likely, indicating that the trend in the scientific literature had not yet become a consensus. John Sawyer published the study Man-made Carbon Dioxide and the "Greenhouse" Effect in 1972. He summarized the scientific knowledge of the time: the anthropogenic attribution of the carbon dioxide greenhouse gas, and its distribution and exponential rise, findings which still hold today. Additionally he accurately predicted the rate of global warming for the period between 1972 and 2000. The mainstream news media at the time exaggerated the warnings of the minority who expected imminent cooling. For example, in 1975, Newsweek magazine published a story that warned of "ominous signs that the Earth's weather patterns have begun to change." The article continued by stating that evidence of global cooling was so strong that meteorologists were having "a hard time keeping up with it." On 23 October 2006, Newsweek issued an update stating that it had been "spectacularly wrong about the near-term future". In the first two "Reports for the Club of Rome" in 1972 and 1974, the anthropogenic climate changes caused by the increase in carbon dioxide as well as by waste heat were mentioned. About the latter John Holdren wrote in a study cited in the first report, "… that global thermal pollution is hardly our most immediate environmental threat. It could prove to be the most inexorable, however, if we are fortunate enough to evade all the rest." Simple global-scale estimates, which have recently been updated and confirmed by more refined model calculations, show noticeable contributions from waste heat to global warming after the year 2100 if its growth rate is not strongly reduced (below the average of 2% per year which has occurred since 1973). Evidence for warming accumulated. By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling carbon dioxide in the model's atmosphere gave a roughly 2 °C rise in global temperature. Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the carbon dioxide concentration was increased.
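The logarithmic dependence of warming on carbon dioxide that underlies such doubling experiments is often summarised today by a simple approximation (a later empirical fit, quoted here only as an illustration and not taken from the 1967 or 1975 papers):

\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}, \qquad \Delta T \approx \lambda\,\Delta F

so that a doubling of the concentration (C = 2C_0) gives a radiative forcing of roughly 3.7 W m^-2, which for climate sensitivity parameters \lambda of about 0.5 to 0.8 K per W m^-2 corresponds to a warming of roughly 2 to 3 °C, consistent with the model results described above.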
The 1979 World Climate Conference (12 to 23 February) of the World Meteorological Organization concluded "it appears plausible that an increased amount of carbon dioxide in the atmosphere can contribute to a gradual warming of the lower atmosphere, especially at higher latitudes....It is possible that some effects on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century." In July 1979 the United States National Research Council published a report, concluding (in part): By the early 1980s, the slight cooling trend from 1945 to 1975 had stopped. Aerosol pollution had decreased in many areas due to environmental legislation and changes in fuel use, and it became clear that the cooling effect from aerosols was not going to increase substantially while carbon dioxide levels were progressively increasing. Hansen and others published the 1981 study Climate impact of increasing atmospheric carbon dioxide, and noted: In 1982, Greenland ice cores drilled by Hans Oeschger, Willi Dansgaard, and collaborators revealed dramatic temperature oscillations in the space of a century in the distant past. The most prominent of the changes in their record corresponded to the violent Younger Dryas climate oscillation seen in shifts in types of pollen in lake beds all over Europe. Evidently drastic climate changes were possible within a human lifetime. In 1985 a joint UNEP/WMO/ICSU Conference on the "Assessment of the Role of Carbon Dioxide and Other Greenhouse Gases in Climate Variations and Associated Impacts" concluded that greenhouse gases "are expected" to cause significant warming in the next century and that some warming is inevitable. Meanwhile, ice cores drilled by a Franco-Soviet team at the Vostok Station in Antarctica showed that carbon dioxide and temperature had gone up and down together in wide swings through past ice ages. This confirmed the carbon dioxide-temperature relationship in a manner entirely independent of computer climate models, strongly reinforcing the emerging scientific consensus. The findings also pointed to powerful biological and geochemical feedbacks. In June 1988, James E. Hansen made one of the first assessments that human-caused warming had already measurably affected global climate. Shortly after, a "World Conference on the Changing Atmosphere: Implications for Global Security" gathered hundreds of scientists and others in Toronto. They concluded that the changes in the atmosphere due to human pollution "represent a major threat to international security and are already having harmful consequences over many parts of the globe," and declared that by 2005 the world would be well-advised to push its emissions some 20% below the 1988 level. The 1980s saw important breakthroughs with regard to global environmental challenges. Ozone depletion was mitigated by the Vienna Convention (1985) and the Montreal Protocol (1987). Acid rain was mainly regulated on national and regional levels. In 1988 the WMO established the Intergovernmental Panel on Climate Change with the support of the UNEP. The IPCC continues its work through the present day, and issues a series of Assessment Reports and supplemental reports that describe the state of scientific understanding at the time each report is prepared.
Scientific developments during this period are summarized about once every five to six years in the IPCC Assessment Reports, which were published in 1990 (First Assessment Report), 1995 (Second Assessment Report), 2001 (Third Assessment Report), 2007 (Fourth Assessment Report), and 2013/2014 (Fifth Assessment Report). Since the 1990s, research on climate change has expanded and grown, linking many fields such as atmospheric sciences, numerical modeling, behavioral sciences, geology, economics, and security. Engineering and technology One of the prominent traits of the 20th century was the dramatic growth of technology. Organized research and practice of science led to advancement in the fields of communication, engineering, travel, medicine, and war. The number and types of home appliances increased dramatically due to advancements in technology, electricity availability, and increases in wealth and leisure time. Such basic appliances as washing machines, clothes dryers, furnaces, exercise machines, refrigerators, freezers, electric stoves, and vacuum cleaners all became popular from the 1920s through the 1950s. The microwave oven was introduced for home use in 1955, became popular during the 1980s, and had become standard in most homes by the 1990s. Radios were popularized as a form of entertainment during the 1920s, which extended to television during the 1950s. Cable and satellite television spread rapidly during the 1980s and 1990s. Personal computers began to enter the home during the 1970s–1980s as well. The age of the portable music player grew during the 1960s with the development of the transistor radio, 8-track and cassette tapes, which slowly began to replace record players. These were in turn replaced by the CD during the late 1980s and 1990s. The proliferation of the Internet in the mid-to-late 1990s made digital distribution of music (mp3s) possible. VCRs were popularized in the 1970s, but by the end of the 20th century, DVD players were beginning to replace them, making the VHS obsolete by the end of the first decade of the 21st century. The first airplane was flown in 1903. With the engineering of the faster jet engine in the 1940s, mass air travel became commercially viable. The assembly line made mass production of the automobile viable. By the end of the 20th century, billions of people had automobiles for personal transportation. The combination of the automobile, motor boats and air travel allowed for unprecedented personal mobility. In western nations, motor vehicle accidents became the greatest cause of death for young people. However, expansion of divided highways reduced the death rate. The triode tube, transistor and integrated circuit successively revolutionized electronics and computers, leading to the proliferation of the personal computer in the 1980s and cell phones and the public-use Internet in the 1990s. New materials, most notably stainless steel, Velcro, silicone, teflon, and plastics such as polystyrene, PVC, polyethylene, and nylon, came into widespread use for many applications. These materials typically offer tremendous gains in strength, temperature tolerance, chemical resistance, and mechanical properties over those known prior to the 20th century. Aluminum became an inexpensive metal and became second only to iron in use. Semiconductor materials were discovered, and methods of production and purification were developed for use in electronic devices. Silicon became one of the purest substances ever produced.
Thousands of chemicals were developed for industrial processing and home use. Mathematics The 20th century saw mathematics become a major profession. As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century there were hundreds of specialized areas in mathematics, and the Mathematics Subject Classification was dozens of pages long. Every year, thousands of new Ph.D.s in mathematics were awarded, and jobs were available in both teaching and industry. More and more mathematical journals were published and, by the end of the century, the development of the World Wide Web led to online publishing. Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 1983 required 500-odd journal articles by about 100 authors, filling tens of thousands of pages. In a 1900 speech to the International Congress of Mathematicians, David Hilbert set out a list of 23 unsolved problems in mathematics. These problems, spanning many areas of mathematics, formed a central focus for much of 20th-century mathematics. Today, 10 have been solved, 7 are partially solved, and 2 are still open. The remaining 4 are too loosely formulated to be stated as solved or not. In 1929 and 1930, it was proved that the truth or falsity of all statements formulated about the natural numbers plus one of addition or multiplication was decidable, i.e. could be determined by some algorithm. In 1931, Kurt Gödel found that this was not the case for the natural numbers plus both addition and multiplication; this system, known as Peano arithmetic, was in fact incompletable. (Peano arithmetic is adequate for a good deal of number theory, including the notion of prime number.) A consequence of Gödel's two incompleteness theorems is that in any mathematical system that includes Peano arithmetic (including all of analysis and geometry), truth necessarily outruns proof, i.e. there are true statements that cannot be proved within the system. Hence mathematics cannot be reduced to mathematical logic, and David Hilbert's dream of making all of mathematics complete and consistent needed to be reformulated. In 1963, Paul Cohen proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory. In 1976, Wolfgang Haken and Kenneth Appel used a computer to prove the four color theorem. Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995. In 1998 Thomas Callister Hales proved the Kepler conjecture. Differential geometry came into its own when Albert Einstein used it in general relativity. Entirely new areas of mathematics such as mathematical logic, topology, and John von Neumann's game theory changed the kinds of questions that could be answered by mathematical methods. All kinds of structures were abstracted using axioms and given names like metric spaces, topological spaces etc. As mathematicians do, the concept of an abstract structure was itself abstracted and led to category theory. Grothendieck and Serre recast algebraic geometry using sheaf theory. Large advances were made in the qualitative study of dynamical systems that Poincaré had begun in the 1890s. Measure theory was developed in the late 19th and early 20th centuries.
Applications of measures include the Lebesgue integral, Kolmogorov's axiomatisation of probability theory, and ergodic theory. Knot theory greatly expanded. Quantum mechanics led to the development of functional analysis. Other new areas include Laurent Schwartz's distribution theory, fixed point theory, singularity theory and René Thom's catastrophe theory, model theory, and Mandelbrot's fractals. Lie theory, with its Lie groups and Lie algebras, became one of the major areas of study. Non-standard analysis, introduced by Abraham Robinson, rehabilitated the infinitesimal approach to calculus, which had fallen into disrepute in favour of the theory of limits, by extending the field of real numbers to the hyperreal numbers, which include infinitesimal and infinite quantities. An even larger number system, the surreal numbers, was discovered by John Horton Conway in connection with combinatorial games. The development and continual improvement of computers, at first mechanical analog machines and then digital electronic machines, allowed industry to deal with larger and larger amounts of data to facilitate mass production and distribution and communication, and new areas of mathematics were developed to deal with this: Alan Turing's computability theory; complexity theory; Derrick Henry Lehmer's use of ENIAC to further number theory and the Lucas-Lehmer test; Rózsa Péter's recursive function theory; Claude Shannon's information theory; signal processing; data analysis; optimization and other areas of operations research. In the preceding centuries much mathematical focus was on calculus and continuous functions, but the rise of computing and communication networks led to an increasing importance of discrete concepts and the expansion of combinatorics, including graph theory. The speed and data processing abilities of computers also enabled the handling of mathematical problems that were too time-consuming to deal with by pencil and paper calculations, leading to areas such as numerical analysis and symbolic computation. Some of the most important methods and algorithms of the 20th century are: the simplex algorithm, the fast Fourier transform, error-correcting codes, the Kalman filter from control theory and the RSA algorithm of public-key cryptography. Physics New areas of physics, like special relativity, general relativity, and quantum mechanics, were developed during the first half of the century. In the process, the internal structure of atoms came to be clearly understood, followed by the discovery of elementary particles. It was found that all the known forces can be traced to only four fundamental interactions. It was further discovered that two of these forces, electromagnetism and the weak interaction, can be merged into the electroweak interaction, leaving only three different fundamental interactions. Discovery of nuclear reactions, in particular nuclear fusion, finally revealed the source of solar energy. Radiocarbon dating was invented, and became a powerful technique for determining the age of prehistoric animals and plants as well as historical objects. Stellar nucleosynthesis was refined as a theory in 1954 by Fred Hoyle; the theory was supported by astronomical evidence that showed chemical elements were created by nuclear fusion reactions within stars. Social sciences Ivan Pavlov developed the theory of classical conditioning. The Austrian School of economic theory gained in prominence.
The fields of sociobiology and evolutionary psychology started uniting disciplines within biology, psychology, ethology, neuroscience, and anthropology. References
8272160
https://en.wikipedia.org/wiki/CacheFS
CacheFS
CacheFS is the name used for several similar software technologies designed to speed up distributed file system file access for networked computers. These technologies operate by storing cached copies of files on secondary memory, typically a local hard disk, so that if a file is accessed again, the access can be satisfied locally at much higher speeds than networks typically allow. CacheFS software is used on several Unix-like operating systems. The original Unix version was developed by Sun Microsystems in 1993. Another version was written for Linux and released in 2003. Network filesystems are dependent on a network link and a remote server; obtaining a file from such a filesystem can be significantly slower than getting the file locally. For this reason, it can be desirable to cache data from these filesystems on a local disk, thus potentially speeding up future accesses to that data by avoiding the need to go to the network and fetch it again. The software has to check that the remote file has not changed since it was cached, but this is much faster than reading the whole file again. Prior art Sprite (operating system) used large disk block caches. These were located in main memory to achieve high performance in its file system. The term CacheFS has found little or no use in describing caches held in main memory. Grossmont version The first CacheFS implementation, in 6502 assembler, was a write-through cache developed by Mathew R Mathews at Grossmont College. It was used from Fall 1986 to Spring 1990 on three diskless 64 kB main memory Apple IIe computers to cache files from a Nestar file server onto Big Board, a 1 MB DRAM secondary memory device partitioned into CacheFS and TmpFS. The computers ran Pineapple DOS, an Apple DOS 3.3 derivative developed in the course of a follow-on to WR Bornhorst's NSF-funded Instructional Computing System. Pineapple DOS features, including caching, were unnamed; the name CacheFS was introduced seven years later by Sun Microsystems. Sun version The first Unix CacheFS implementation was developed by Sun Microsystems and released in the Solaris 2.3 operating system in 1993, as part of an expanded feature set for the NFS or Network File System suite known as Open Network Computing Plus (ONC+). It was subsequently used in other UNIX operating systems such as Irix (starting with the 5.3 release in 1994). Linux version Linux operating systems now commonly use a new version of CacheFS developed by David Howells. Howells appears to have rewritten CacheFS from scratch, not using Sun's original code. The Linux CacheFS is currently designed to operate on Andrew File System and Network File System filesystems. Terminology Because of its similarity in naming to FS-Cache, CacheFS's terminology is confusing to outsiders. CacheFS is a backend for FS-Cache and handles the actual data storage and retrieval; FS-Cache passes requests from the network filesystem (netfs) to CacheFS. FS-Cache The caching facility/layer that sits between network filesystems such as NFS or AFS and cache backends such as CacheFS. Cache backends CacheFS CacheFS is a filesystem for the FS-Cache facility. A block device can be used as a cache simply by mounting it; it needs no special activation and is deactivated by unmounting it. Cachefiles (daemon) A daemon that uses an existing filesystem (for example ext3 with user_xattr) as a cache. The cache is bound with "cachefilesd -s". Project status The project appears to be stalled, and some people are attempting to revive the code and bring it up to date.
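The read-through idea that FS-Cache and its backends implement can be sketched in a few lines of illustrative C++. This is a conceptual toy only, not code from the Sun or Linux implementations; the in-memory maps and the example path simply stand in for the remote server and the on-disk cache:

#include <iostream>
#include <map>
#include <string>
#include <utility>

// Toy stand-in for a remote file server: path -> (contents, version number).
std::map<std::string, std::pair<std::string, long>> remote = {
    {"/export/readme.txt", {"hello from the server", 1}}
};

// Stands in for the on-disk cache kept by the backend.
struct CachedFile { std::string data; long version; };
std::map<std::string, CachedFile> local_cache;

// Read-through access: revalidate cheaply against the server's version and
// perform a full fetch only when the cached copy is missing or stale.
std::string read_file(const std::string& path) {
    long remote_version = remote.at(path).second;            // cheap metadata check
    auto it = local_cache.find(path);
    if (it != local_cache.end() && it->second.version == remote_version)
        return it->second.data;                              // cache hit: no full transfer
    const auto& entry = remote.at(path);                     // miss or stale: full fetch
    local_cache[path] = {entry.first, entry.second};
    return entry.first;
}

int main() {
    std::cout << read_file("/export/readme.txt") << "\n";    // fetched from the "server"
    std::cout << read_file("/export/readme.txt") << "\n";    // served from the local cache
}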
Features The facility (known as FS-Cache) is designed to be as transparent as possible to a user of the system. Applications should just be able to use NFS files as normal, without any knowledge of there being a cache. See also Page cache References External links Fscache-ols2006 Presentation D.Howells@Red Hat Steve D.@Red Hat Red Hat CacheFS mailinglist Outdated articles? LWN.NET A general caching filesystem LWN.NET Initial mail introducing cacheFS for Linux Network file systems
252279
https://en.wikipedia.org/wiki/PEEK%20and%20POKE
PEEK and POKE
In computing, PEEK and POKE are commands used in some high-level programming languages for accessing the contents of a specific memory cell referenced by its memory address. PEEK gets the byte located at the specified memory address. POKE sets the memory byte at the specified address. These commands originated with machine code monitors such as the DECsystem-10 monitor; they are particularly associated with the BASIC programming language, though some other languages such as Pascal and COMAL also have these commands. These commands are comparable in their roles to pointers in the C language and some other programming languages. One of the earliest references to these commands in BASIC, if not the earliest, is in Altair BASIC. The PEEK and POKE commands were conceived in early personal computing systems to serve a variety of purposes, especially for modifying special memory-mapped hardware registers to control particular functions of the computer such as the input/output peripherals. Alternatively, programmers might use these commands to copy software or even to circumvent the intent of a particular piece of software (e.g. manipulate a game program to allow the user to cheat). Today it is unusual to control computer memory at such a low level using a high-level language like BASIC. As such, the notions of PEEK and POKE commands are generally seen as antiquated. The terms peek and poke are sometimes used colloquially in computer programming to refer to memory access in general. Statement syntax The PEEK function and POKE commands are usually invoked as follows, either in direct mode (entered and executed at the BASIC prompt) or in indirect mode (as part of a program): integer_variable = PEEK(address) POKE address, value The address and value parameters may contain complex expressions, as long as the evaluated expressions correspond to valid memory addresses or values, respectively. A valid address in this context is an address within the computer's address space, while a valid value is (typically) an unsigned value between zero and the maximum unsigned number that the minimum addressable unit (memory cell) may hold. Memory cells and hardware registers The address locations that are POKEd or PEEKed at may refer either to ordinary memory cells or to memory-mapped hardware registers of I/O units or support chips such as sound chips and video graphics chips, or even to memory-mapped registers of the CPU itself (which makes software implementations of powerful machine code monitors and debugging/simulation tools possible). As an example of a POKE-driven support chip control scheme, the following POKE command is directed at a specific register of the Commodore 64's built-in VIC-II graphics chip, which will make the screen border turn black: POKE 53280, 0 A similar example from the Atari 8-bit family tells the ANTIC display driver to turn all text upside-down: POKE 755, 4 The differences between machines, and the importance and utility of the hard-wired memory locations, meant that "memory maps" of various machines were important documents. An example is Mapping the Atari, which started at location zero and mapped out the entire 64 kB memory of the Atari 8-bit systems location by location. PEEK and POKE in other BASICs North Star Computers, a vendor from the early 1980s, offered their own dialect of BASIC with their NSDOS operating system. Concerned about possible legal issues, they renamed the commands EXAM and FILL.
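The comparison with C-family pointers mentioned above can be made concrete with a short sketch. The snippet below is illustrative only and uses a simulated 64 kB array in place of real hardware; on the actual Commodore 64 the address 53280 is the VIC-II border-colour register from the example above, and a modern protected-memory operating system would not allow a user program to touch such an address directly:

#include <cstdint>
#include <iostream>

// Simulated 64 kB address space standing in for an 8-bit machine's memory.
// On real hardware one would instead form a pointer to the physical address,
// e.g.  volatile uint8_t* reg = reinterpret_cast<volatile uint8_t*>(0xD020);
static uint8_t memory[65536];

uint8_t peek(uint16_t address) { return memory[address]; }               // PEEK(address)
void poke(uint16_t address, uint8_t value) { memory[address] = value; }  // POKE address, value

int main() {
    poke(53280, 0);                          // POKE 53280, 0
    std::cout << int(peek(53280)) << "\n";   // PRINT PEEK(53280)
}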
Other BASIC dialects used the reserved words MEMW and MEMR instead. BBC BASIC, used on the BBC Micro and other Acorn Computers machines, did not feature the keywords PEEK and POKE but used the question mark symbol (?), known as query in BBC BASIC, for both operations, as a function and command. For example:
> DIM W% 4 : REM reserve 4 bytes of memory, pointed to by integer variable W%
> ?W% = 42 : REM store constant 42; equivalent of 'POKE W%, 42'
> PRINT ?W% : REM print the byte pointed to by W%; equivalent of 'PRINT PEEK(W%)'
42
32-bit values could be POKEd and PEEKed using the exclamation mark symbol (!), known as pling, with the least significant byte first (little-endian). In addition, the address could be offset by specifying either query or pling after the address and following it with the offset:
> !W% = &12345678 : REM ampersand (&) specifies hexadecimal
> PRINT ~?W%, ~W%?3 : REM tilde (~) prints in hexadecimal
78 12
Strings of text could be PEEKed and POKEd in a similar way using the dollar sign ($). The end of the string is marked with the carriage return character (&0D in ASCII); when read back, this terminating character is not returned. Offsets cannot be used with the dollar sign.
> DIM S% 20 : REM reserve 20 bytes of memory pointed to by S%
> $S% = "MINCE PIES" : REM store string 'MINCE PIES', terminated by &0D
> PRINT $(S% + 6) : REM retrieve string, excluding &0D terminator, and starting at S% + 6 bytes
PIES
16 and 32-bit versions As most early home computers used 8-bit processors, PEEK or POKE values are between 0 and 255. Setting or reading a 16-bit value on such machines requires two commands, such as PEEK(A) + 256*PEEK(A+1) to read a 16-bit integer at address A, and POKE A, V - 256*INT(V/256) followed by POKE A+1, INT(V/256) to store a 16-bit integer V at address A (assuming the little-endian byte order used by most of these machines). Some BASICs, even on 8-bit machines, have commands for reading and writing 16-bit values from memory. BASIC XL for the Atari 8-bit family uses a "D" (for "double") prefix: DPEEK and DPOKE. The East German "Kleincomputer" KC85/1 and KC87 call them DEEK and DOKE. The Sinclair QL has PEEK_W and POKE_W for 16-bit values and PEEK_L and POKE_L for 32-bit values. ST BASIC for the Atari ST uses the traditional names but allows defining 8/16/32 bit memory segments and addresses that determine the size. POKEs as cheats In the context of games for many 8-bit computers, users could load games into memory and, before launching them, modify specific memory addresses in order to cheat, getting an unlimited number of lives, immunity, invisibility, etc. Such modifications were performed using POKE statements. The Commodore 64, ZX Spectrum and Amstrad CPC also allowed players with the relevant cartridges or Multiface add-on to freeze the running program, enter POKEs, and resume. For example, in Knight Lore for the ZX Spectrum, immunity can be achieved with the following command:
POKE 47196,201
In this case, the value 201 corresponds to a RET instruction, so that the game returns from a subroutine before triggering collision detection. Magazines such as Your Sinclair published lists of such POKEs for games. Such codes were generally identified by reverse-engineering the machine code to locate the memory address containing the desired value that related to, for example, the number of lives, detection of collisions, etc. Using a 'POKE' cheat is more difficult in modern games, as many include anti-cheat or copy-protection measures that inhibit modification of the game's memory space.
Modern operating systems enforce virtual memory protection schemes to deny external program access to non-shared memory (for example, separate page tables for each application, hence inaccessible memory spaces). Generic usage of POKE "POKE" is sometimes used to refer to any direct manipulation of the contents of memory, rather than just via BASIC, particularly among people who learned computing on the 8-bit microcomputers of the late 1970s and early 1980s. BASIC was often the only language available on those machines (on home computers, usually present in ROM), and therefore the obvious, and simplest, way to program in machine language was to use BASIC to POKE the opcode values into memory. Doing much low-level coding like this usually came from lack of access to an assembler. An example of the generic usage of POKE and PEEK is in Visual Basic for Windows, where DDE can be achieved with the LinkPoke keyword. See also Killer poke Type-in program Self-modifying code Pointer (computer programming) References Microcomputer software BASIC commands Cheating in video games Computer memory
22761055
https://en.wikipedia.org/wiki/Projektron%20BCS
Projektron BCS
Projektron BCS is a web-based project management software for planning, managing and controlling a multitude of projects simultaneously. Distribution The software is currently used in 9 countries by a total of approximately 37,000 users. Users include E.ON, Nintendo, the German National Library, the HanseMerkur Versicherungsgruppe, UniVersa Versicherungen and Hella Aglaia Mobile Vision. Technology Projektron BCS is a 3-tier application based on Java. The user interface runs in a web browser without plugins, ActiveX or local Java applets. It supports Oracle, PostgreSQL and Microsoft SQL Server as databases. Manufacturer and development history The manufacturer of Projektron BCS is the Berlin-based Projektron GmbH. The company was founded in 2001 by Dr Marten Huisinga, Maik Dorl and Jörg Cohrs. The initial aim was to provide a web-based, platform-independent, easily configurable software for project management. Over the years, other functions were added: 2001: Projektron BCS is available in the first version. 2002: Projektron BCS supports dependencies and milestones as well as the possibility to compare several projects with each other. Also added is a notification system using e-mails that indicate project changes. 2003: Projektron BCS offers an interface to Microsoft Outlook and an English-language interface. 2004: The Resource Management module is presented at CeBIT. The new report generator is presented at Systems. 2005: Version 5.0 offers top-down budget planning, functions for invoice creation, the import of e-mails via IMAP or POP3, and an order management system that can also include subsidies. The file storage was adapted to the WebDAV standard. 2006: The contract management module is presented at CeBIT. In addition, Projektron BCS now offers interfaces to SAP and Microsoft Exchange as well as a ticket module with which customer enquiries and error messages can be managed. Version 6.0 offers a completely revised user interface, based on AJAX technology. 2007: The management of expenses and holidays as well as resource-oriented scheduling are supported by Projektron BCS. 2008: At CeBIT, project portfolio management, an interface to the open-source reporting framework BIRT and a Spanish-language interface are presented. The new offer generation supports service providers who carry out projects on behalf of customers. 2009: Support for flexitime models, milestone trend analysis, a project completion assistant and the archiving of multiple baselines is added. The agile procedure model Scrum is supported and a French interface is offered. 2010: The multi-currency capability allows working with several currencies within one project. Computer Telephony Integration (CTI) allows Projektron BCS to be connected to telephone systems. Address data can be visualised on a map via the interface to BGI ThematicMapper. The head office moves from Kreuzberg to Berlin-Mitte. 2011: Mobile time recording via smartphones is presented at CeBIT. The project preparation module offers functions such as a target cross and considers the project environment and stakeholders. The user interface is also available in Dutch, Italian and Hungarian. 2012: Completely revised user interface in version 7.0. New edition Projektron BCS.start for project teams with up to 15 employees.
2013: Projektron BCS offers an external support portal, new options for automated project invoicing, a JIRA interface as well as visualisations in the form of gauges for costs, effort and profit. The user conference is held for the first time. 2014: Efforts can now also be recorded via a smartphone app. There is also a new planning view for the Scrum module and profile pictures for project work. 2015: The JIRA interface becomes bidirectional and new solutions for resource management in the matrix organisation are offered. A full-text search and new tagging and reuse functions have been added to the ticket system. 2016: Phase plans support multi-project management, the ticket system receives an extension for service times, response times and advance warning levels to ensure service quality, and project ideas can be recorded and submitted with the help of an assistant. 2017: In the project application, the radar chart shows the strategic importance of the project with and without weighting, project applications can be compared, and the Kanban board can be used for tickets and tasks for a quick overview. 2018: Calendar synchronisation via the cloud-based Office 365 is supported and the new app records expenses and costs offline. 2019: Costs can be viewed over time and compared directly with the planned values, and Projektron BCS offers the possibility to conduct surveys and evaluate their results directly in the software. The software has also been given a new design. 2020: More efficient rights management is made possible thanks to the possibility of defining user licences and roles for groups of people. In addition, the resource utilisation has been revised and a Polish interface is offered. See also List of project management software References External links Projektron official site Article in "Projekt Magazin" on the use of Projektron BCS at EADS Project management study by BARC Description at pm-software.info The magazine of the German "Gesellschaft für Projektmanagement" about Projektron BCS Edicos' experiences with Projektron BCS Project management software 2001 software Projects established in 2001
3965587
https://en.wikipedia.org/wiki/Microsoft%20Speech%20API
Microsoft Speech API
The Speech Application Programming Interface or SAPI is an API developed by Microsoft to allow the use of speech recognition and speech synthesis within Windows applications. To date, a number of versions of the API have been released, which have shipped either as part of a Speech SDK or as part of the Windows OS itself. Applications that use SAPI include Microsoft Office, Microsoft Agent and Microsoft Speech Server. In general, all versions of the API have been designed such that a software developer can write an application to perform speech recognition and synthesis by using a standard set of interfaces, accessible from a variety of programming languages. In addition, it is possible for a third-party company to produce their own Speech Recognition and Text-To-Speech engines or adapt existing engines to work with SAPI. In principle, as long as these engines conform to the defined interfaces they can be used instead of the Microsoft-supplied engines. In general, the Speech API is a freely redistributable component which can be shipped with any Windows application that wishes to use speech technology. Many versions (although not all) of the speech recognition and synthesis engines are also freely redistributable. There have been two main 'families' of the Microsoft Speech API. SAPI versions 1 through 4 are all similar to each other, with extra features in each newer version. SAPI 5, however, was a completely new interface, released in 2000. Since then several sub-versions of this API have been released. Basic architecture The Speech API can be viewed as an interface or piece of middleware which sits between applications and speech engines (recognition and synthesis). In SAPI versions 1 to 4, applications could directly communicate with engines. The API included an abstract interface definition which applications and engines conformed to. Applications could also use simplified higher-level objects rather than directly call methods on the engines. In SAPI 5, however, applications and engines do not directly communicate with each other. Instead, each talks to a runtime component (sapi.dll). There is an API implemented by this component which applications use, and another set of interfaces for engines. Typically in SAPI 5 applications issue calls through the API (for example to load a recognition grammar; start recognition; or provide text to be synthesized). The sapi.dll runtime component interprets these commands and processes them, where necessary calling on the engine through the engine interfaces (for example, the loading of grammar from a file is done in the runtime, but then the grammar data is passed to the recognition engine to actually use in recognition). The recognition and synthesis engines also generate events while processing (for example, to indicate an utterance has been recognized or to indicate word boundaries in the synthesized speech). These pass in the reverse direction, from the engines, through the runtime DLL, and on to an event sink in the application. In addition to the actual API definition and runtime DLL, other components are shipped with all versions of SAPI to make a complete Speech Software Development Kit. The following components are among those included in most versions of the Speech SDK: API definition files - in MIDL and as C or C++ header files. Runtime components - e.g. sapi.dll. Control Panel applet - to select and configure default speech recognizer and synthesizer. Text-To-Speech engines in multiple languages.
Speech Recognition engines in multiple languages. Redistributable components to allow developers to package the engines and runtime with their application code to produce a single installable application. Sample application code. Sample engines - implementations of the necessary engine interfaces but with no true speech processing which could be used as a sample for those porting an engine to SAPI. Documentation. Versions Xuedong Huang was a key person who led Microsoft's early SAPI efforts. SAPI 1-4 API family SAPI 1 The first version of SAPI was released in 1995, and was supported on Windows 95 and Windows NT 3.51. This version included low-level Direct Speech Recognition and Direct Text To Speech APIs which applications could use to directly control engines, as well as simplified 'higher-level' Voice Command and Voice Talk APIs. SAPI 3 SAPI 3.0 was released in 1997. It added limited support for dictation speech recognition (discrete speech, not continuous), and additional sample applications and audio sources. SAPI 4 SAPI 4.0 was released in 1998. This version of SAPI included the core COM API, C++ wrapper classes to make programming from C++ easier, and ActiveX controls to allow drag-and-drop Visual Basic development. This was shipped as part of an SDK that included recognition and synthesis engines. It also shipped (with synthesis engines only) in Windows 2000. The main components of the SAPI 4 API (which were all available in C++, COM, and ActiveX flavors) were: Voice Command - high-level objects for command & control speech recognition Voice Dictation - high-level objects for continuous dictation speech recognition Voice Talk - high-level objects for speech synthesis Voice Telephony - objects for writing telephone speech applications Direct Speech Recognition - objects for direct control of recognition engine Direct Text To Speech - objects for direct control of synthesis engine Audio objects - for reading to and from an audio device or file SAPI 5 API family The Speech SDK version 5.0, incorporating the SAPI 5.0 runtime, was released in 2000. This was a complete redesign from previous versions, and neither engines nor applications which used older versions of SAPI could use the new version without considerable modification. The design of the new API included the concept of strictly separating the application and engine so that all calls were routed through the runtime sapi.dll. This change was intended to make the API more 'engine-independent', preventing applications from inadvertently depending on features of a specific engine. In addition, this change was aimed at making it much easier to incorporate speech technology into an application by moving some management and initialization code into the runtime. The new API was initially a pure COM API and could be used easily only from C/C++. Support for VB and scripting languages was added later. Operating systems from Windows 98 and NT 4.0 upwards were supported. Major features of the API include: Shared Recognizer. For desktop speech recognition applications, a recognizer object can be used that runs in a separate process (sapisvr.exe). All applications using the shared recognizer communicate with this single instance. This allows sharing of resources, removes contention for the microphone and allows for a global UI for control of all speech applications. In-proc recognizer. For applications that require explicit control of the recognition process, the in-proc recognizer object can be used instead of the shared one.
Grammar objects. Speech grammars are used to specify the words that the recognizer is listening for. SAPI 5 defines an XML markup for specifying grammars, as well as mechanisms to create them dynamically in code. Methods also exist for instructing the recognizer to load a built-in dictation language model. Voice object. This performs speech synthesis, producing an audio stream from text (a minimal usage sketch appears at the end of this article). A markup language (similar to XML, but not strictly XML) can be used for controlling the synthesis process. Audio interfaces. The runtime includes objects for performing speech input from the microphone or speech output to speakers (or any sound device), as well as to and from wave files. It is also possible to write a custom audio object to stream audio to or from a non-standard location. User lexicon object. This allows custom words and pronunciations to be added by a user or application. These are added to the recognition or synthesis engine's built-in lexicons. Object tokens. This is a concept allowing recognition and TTS engines, audio objects, lexicons and other categories of object to be registered, enumerated and instantiated in a common way. SAPI 5.0 This version shipped in late 2000 as part of the Speech SDK version 5.0, together with version 5.0 recognition and synthesis engines. The recognition engines supported continuous dictation and command & control and were released in U.S. English, Japanese and Simplified Chinese versions. In the U.S. English system, special acoustic models were available for children's speech and telephony speech. The synthesis engine was available in English and Chinese. This version of the API and recognition engines also shipped in Microsoft Office XP in 2001. SAPI 5.1 This version shipped in late 2001 as part of the Speech SDK version 5.1. Automation-compliant interfaces were added to the API to allow use from Visual Basic, scripting languages such as JScript, and managed code. This version of the API and TTS engines were shipped in Windows XP. Windows XP Tablet PC Edition and Office 2003 also include this version but with a substantially improved version 6 recognition engine and support for Traditional Chinese. SAPI 5.2 This was a special version of the API for use only in the Microsoft Speech Server which shipped in 2004. It added support for SRGS and SSML mark-up languages, as well as additional server features and performance improvements. The Speech Server also shipped with the version 6 desktop recognition engine and the version 7 server recognition engine. SAPI 5.3 This is the version of the API that ships in Windows Vista together with new recognition and synthesis engines. As Windows Speech Recognition is now integrated into the operating system, the Speech SDK and APIs are a part of the Windows SDK. SAPI 5.3 includes the following new features: Support for W3C XML speech grammars for recognition and synthesis. The Speech Synthesis Markup Language (SSML) version 1.0 provides the ability to mark up voice characteristics, speed, volume, pitch, emphasis, and pronunciation. The Speech Recognition Grammar Specification (SRGS) supports the definition of context-free grammars, with two limitations: It does not support the use of SRGS to specify dual-tone multi-frequency (touch-tone) grammars. It does not support Augmented Backus–Naur form (ABNF). Support for semantic interpretation script within grammars. SAPI 5.3 enables an SRGS grammar to be annotated with JavaScript for semantic interpretation to supplement the recognized text.
User-Specified shortcuts in lexicons, which is the ability to add a string to the lexicon and associate it with a shortcut word. When dictating, the user can say the shortcut word and the recognizer will return the expanded string. Additional functionality and ease-of-programming provided by new types. Performance improvements, improved reliability, and security. Version 8 of the speech recognition engine ("Microsoft Speech Recognizer") SAPI 5.4 This is an updated version of the API that ships in Windows 7. SAPI 5 Voices Microsoft Sam (Speech Articulation Module) is a commonly shipped SAPI 5 voice. In addition, Microsoft Office XP and Office 2003 installed L&H Michael and Michelle voices. The SAPI 5.1 SDK installs two more voices, Mike and Mary. Windows Vista includes Microsoft Anna, which replaces Microsoft Sam and sounds more natural and intelligible. It is also installed on Windows XP by Microsoft Streets & Trips 2006 and later versions. The Chinese version of Vista and later Windows client versions also include a female voice named Microsoft Lili. Managed code Speech API A managed code API ships as part of the .NET Framework 3.0. It has similar functionality to SAPI 5 but is more suitable for use by managed code applications. The new API is available on Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008. The existing SAPI 5 API can also be used from managed code to a limited extent by creating COM Interop code (helper code designed to assist in accessing COM interfaces and classes). This works well in some scenarios; however, the new API should provide a more seamless experience equivalent to using any other managed code library. However, a major obstacle to transitioning from the COM Interop approach is that the managed implementation has subtle memory leaks which lead to memory fragmentation and preclude the use of the library in non-trivial applications. As a workaround, Microsoft has suggested using a different API, which has fewer voices. Speech functionality in Windows Vista Windows Vista includes a number of new speech-related features including: Speech control of the full Windows GUI and applications New tutorial, microphone wizard, and UI for controlling speech recognition New version of the Speech API runtime: SAPI 5.3 Built-in updated Speech Recognition engine (Version 8) New Speech Synthesis engine and SAPI voice Microsoft Anna Managed code speech API (codenamed SpeechFX) Speech recognition support for 8 languages at release time: U.S. English, U.K. English, traditional Chinese, simplified Chinese, Japanese, Spanish, French, and German, with more languages to be released later. Most notably Microsoft Agent, as well as all other Microsoft speech applications, uses SAPI 5. Compatibility The Speech API is compatible with the following operating systems: SAPI 5 Microsoft Windows 10 Microsoft Windows 8 Microsoft Windows 7 Microsoft Windows Vista Microsoft Windows XP SAPI 4 Microsoft Windows Millennium Edition Microsoft Windows NT 4.0, Service Pack 6a, in English, Japanese and Simplified Chinese. 
Microsoft Windows 95 Major applications using SAPI Microsoft Windows XP Tablet PC Edition includes SAPI 5.1 and speech recognition engines 6.1 for English, Japanese, and Chinese (simplified and traditional) Windows Speech Recognition in Windows Vista and later Microsoft Narrator in Windows 2000 and later Windows operating systems Microsoft Office XP and Office 2003 Microsoft Excel 2002, Microsoft Excel 2003, and Microsoft Excel 2007 for speaking spreadsheet data Microsoft Voice Command for Windows Pocket PC and Windows Mobile Microsoft Plus! Voice Command for Windows Media Player Adobe Reader uses voice output to read document content CoolSpeech, a text-to-speech application that reads text aloud from a variety of sources Window-Eyes screen reader JAWS screen reader NonVisual Desktop Access (NVDA), a free and open source screen reader See also List of speech recognition software Microsoft Cognitive Services Speech SDK Microsoft Speech Application SDK (SASDK) Comparison of speech synthesizers External links Microsoft Cognitive Services Ignite 2018 event blog post Microsoft site for SAPI Microsoft download site for Speech API Software Developers Kit version 5.1 Microsoft Systems Journal Whitepaper by Mike Rozak on the first version of SAPI Microsoft Speech Team blog References Microsoft application programming interfaces Voice technology Speech processing software
25277512
https://en.wikipedia.org/wiki/Andrew%20Ng
Andrew Ng
Andrew Yan-Tak Ng (; born 1976) is a British-born American computer scientist and technology entrepreneur focusing on machine learning and AI. Ng was a co-founder and head of Google Brain and was formerly chief scientist at Baidu, where he built the company's Artificial Intelligence Group into a team of several thousand people. Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its Stanford AI Lab or SAIL). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai. He has successfully spearheaded many efforts to "democratize deep learning," teaching over 2.5 million students through his online courses. He is one of the world's most famous and influential computer scientists, having been named one of Time magazine's 100 Most Influential People in 2012 and one of Fast Company's Most Creative People in 2014. In 2018, he launched and currently heads the AI Fund, initially a $175-million investment fund for backing artificial intelligence startups. He has founded Landing AI, which provides AI-powered SaaS products. Biography Ng was born in the United Kingdom in 1976. His parents Ronald P. Ng and Tisa Ho are both immigrants from Hong Kong. He has at least one brother. Growing up, he spent time in Hong Kong and Singapore and later graduated from Raffles Institution in Singapore in 1992. In 1997, he earned his undergraduate degree with a triple major in computer science, statistics, and economics from Carnegie Mellon University in Pittsburgh, Pennsylvania, graduating at the top of his class. Between 1996 and 1998 he also conducted research on reinforcement learning, model selection, and feature selection at AT&T Bell Labs. In 1998 Ng earned his master's degree from the Massachusetts Institute of Technology in Cambridge, Massachusetts. At MIT he built the first publicly available, automatically indexed web-search engine for research papers on the web (it was a precursor to CiteSeer/ResearchIndex, but specialized in machine learning). In 2002, he received his PhD from the University of California, Berkeley, under the supervision of Michael I. Jordan. His thesis is titled "Shaping and policy search in reinforcement learning" and is well cited to this day. He started working as an assistant professor at Stanford University in 2002, and as an associate professor in 2009. He currently lives in Los Altos Hills, California. In 2014, he married Carol E. Reiley, and in February 2019 they had their first child, Nova, followed by their second, Neo Atlas, in the spring of 2021. The MIT Technology Review named Ng and Reiley an "AI power couple". Career Academia and teaching Ng is a professor at Stanford University Department of Computer Science and Department of Electrical Engineering. He served as the director of the Stanford Artificial Intelligence Lab (SAIL), where he taught students and undertook research related to data mining, big data, and machine learning. His machine learning course CS229 at Stanford is the most popular course offered on campus, with over 1,000 students enrolling in some years. As of 2020, three of the most popular courses on Coursera are Ng's: Machine Learning (#1), AI for Everyone (#5), and Neural Networks and Deep Learning (#6). In 2008 his group at Stanford was one of the first in the US to start advocating the use of GPUs in deep learning. The rationale was that an efficient computation infrastructure could speed up statistical model training by orders of magnitude, ameliorating some of the scaling issues associated with big data. 
At the time it was a controversial and risky decision, but since then and following Ng's lead, GPUs have become a cornerstone in the field. Since 2017 Ng has been advocating the shift to high performance computing (HPC) for scaling up deep learning and accelerating progress in the field. In 2012, along with Stanford computer scientist Daphne Koller, he co-founded and was CEO of Coursera, a website that offers free online courses to everyone. It took off with over 100,000 students registered for Ng's popular CS229A course. Today, several million people have enrolled in Coursera courses, making the site one of the leading MOOCs in the world. Industry From 2011 to 2012, he worked at Google, where he founded and directed the Google Brain Deep Learning Project with Jeff Dean, Greg Corrado, and Rajat Monga. In 2014, he joined Baidu as chief scientist, and carried out research related to big data and AI. There he set up several research teams, working on projects such as facial recognition and Melody, an AI chatbot for healthcare. In March 2017, he announced his resignation from Baidu. Soon afterwards, he launched Deeplearning.ai, an online series of deep learning courses. Then Ng launched Landing AI, which provides AI-powered SaaS products. In January 2018, Ng unveiled the AI Fund, raising $175 million to invest in new startups. In November 2021, Landing AI secured a $57 million round of series A funding led by McRock Capital, to help manufacturers adopt computer vision. Research Ng researches primarily in machine learning, deep learning, machine perception, computer vision, and natural language processing, and is one of the world's most famous and influential computer scientists. He has frequently won best paper awards at academic conferences and has had a major impact on the fields of AI, computer vision, and robotics. During graduate school, together with David M. Blei and Michael I. Jordan, Ng coauthored the influential paper that introduced latent Dirichlet allocation (LDA); his own doctoral thesis was on shaping and policy search in reinforcement learning. His early work includes the Stanford Autonomous Helicopter project, which developed one of the most capable autonomous helicopters in the world. He was the leading scientist and principal investigator on the STAIR (STanford Artificial Intelligence Robot) project, which resulted in ROS, a widely used open-source robotics software platform. His vision to build an AI robot and put a robot in every home inspired Scott Hassan to back him and create Willow Garage. He is also one of the founding team members for the Stanford WordNet project, which uses machine learning to expand the Princeton WordNet database created by Christiane Fellbaum. In 2011, Ng founded the Google Brain project at Google, which developed large-scale artificial neural networks using Google's distributed computer infrastructure. Among its notable results was a neural network trained using deep learning algorithms on 16,000 CPU cores, which learned to recognize cats after watching only YouTube videos, and without ever having been told what a "cat" is. The project's technology is also currently used in the Android Operating System's speech recognition system. Online education: MOOCs In 2011, Stanford launched a total of three massive open online courses (MOOCs) on machine learning (CS229a), databases, and AI, taught by Ng, Peter Norvig, Sebastian Thrun, and Jennifer Widom. This led to the modern MOOC movement. Ng taught machine learning and Widom taught databases. 
The course on AI taught by Thrun led to the genesis of Udacity. Coursera was the sixth online education website that Ng built and arguably the most successful to date. As Ng put it: "But we learned and learned and learned from the early prototypes, until in 2011 we managed to build something that really took off." The seeds of massive open online courses (MOOCs) go back a few years before the founding of Coursera in 2012. Two themes emphasized in the founding of modern MOOCs were scale and availability. Founding of Coursera Ng started the Stanford Engineering Everywhere (SEE) program, which in 2008 published a number of Stanford courses online for free. Ng taught one of these courses, "Machine Learning", which includes his video lectures, along with the student materials used in the Stanford CS229 class. It offered a similar experience to MIT's OpenCourseWare, except that it aimed at providing a more "complete course" experience, equipped with lectures, course materials, problems and solutions, etc. The SEE videos were viewed millions of times and inspired Ng to develop and iterate new versions of online teaching technology. Within Stanford, others working on online learning included Daphne Koller, with her "blended learning experiences" and co-design of a peer-grading system; John Mitchell (Courseware, a learning management system); Dan Boneh (using machine learning to sync videos, later teaching cryptography on Coursera); Bernd Girod (ClassX); and others. Outside Stanford, both Ng and Thrun credit Sal Khan of Khan Academy as a major source of inspiration. Ng was also inspired by lynda.com and the design of the forums of Stack Overflow. Widom, Ng, and others were ardent advocates of Khan-styled tablet recordings, and between 2009 and 2011, several hundred hours of lecture videos were recorded by Stanford instructors and uploaded. Ng tested some of the original designs with a local high school to figure out the best practices for recording lessons. In October 2011, the "applied" version of the Stanford class (CS229a) was hosted on ml-class.org and launched, with over 100,000 students registered for its first edition. The course featured quizzes and graded programming assignments and became one of the first and most successful massive open online courses (MOOCs) created by a Stanford professor. Two other courses on databases (db-class.org) and AI (ai-class.org) were launched. The ml-class and db-class ran on a platform developed by students, including Frank Chen, Jiquan Ngiam, Chuan-Yu Foo, and Yifan Mai. Word spread through social media and popular press. The three courses were 10 weeks long, and over 40,000 "Statements of Accomplishment" were awarded. Ng tells the following story about the early days of Coursera: "In 2011, I was working with four Stanford students. We were under tremendous pressure to build new features for the 100,000+ students that were already signed up. One of the students (Frank Chen) claims another one (Jiquan Ngiam) frequently stranded him in the Stanford building and refused to give him a ride back to his dorm until very late at night, so that he had no choice but to stick around and keep working. I neither confirm nor deny this story." His work subsequently led to his founding of Coursera with Koller in 2012. As of 2019, the two most popular courses on the platform were taught and designed by Ng: "Machine Learning" (#1) and "Neural Networks and Deep Learning" (#2). Post-Coursera work In 2019, Ng launched a new course, "AI for Everyone". 
This is a non-technical course designed to help people understand AI's impact on society and its benefits and costs for companies, as well as how they can navigate through this technological revolution. Venture capital Ng is the chair of the board for Woebot Labs, a psychological clinic that uses data science to provide cognitive behavioral therapy. It provides a therapy chatbot to help treat depression, among other things. He is also a member of the board of directors for drive.ai, which uses AI for self-driving cars and was acquired by Apple in 2019. Through Landing AI, he also focuses on democratizing AI technology and lowering the barrier to entry for businesses and developers. Publications and awards Ng is also the author or co-author of over 300 published papers in machine learning, robotics, and related fields. His work in computer vision and deep learning has been frequently featured in press releases and reviews. 1995. Bell Atlantic Network Services Scholarship 1995, 1996. Microsoft Technical Scholarship Award 1996. Andrew Carnegie Society Scholarship 1998–2000: Berkeley Fellowship 2001–2002: Microsoft Research Fellowship 2007. Alfred P. Sloan Research Fellowship Sloan Foundation Faculty Fellowship 2008. MIT Technology Review TR35 (Technology Review, 35 innovators under 35) 2009. IJCAI Computers and Thought Award (highest award in AI given to a researcher under 35) 2009. Vance D. & Arlene C. Coffman Faculty Scholar Award 2013. Time 100 Most Influential People 2013. Fortune 40 under 40 2013. CNN 10: Thinkers 2014. Fast Company Most Creative People in Business 2015. World Economic Forum Young Global Leaders He has co-refereed hundreds of AI publications at venues such as NeurIPS. He has also been the editor for the Journal of Artificial Intelligence Research (JAIR) and Associate Editor for the IEEE Robotics and Automation Society Conference Editorial Board (ICRA), among other roles. He has given invited talks at NASA, Google, Microsoft, Lockheed Martin, the Max Planck Society, Stanford, Princeton, UPenn, Cornell, MIT, UC Berkeley, and dozens of other universities. Outside of the US, he has lectured in Spain, Germany, Israel, China, Korea, and Canada. He has also written for Harvard Business Review, HuffPost, Slate, Apple News, and Quora Sessions' Twitter. He also writes a weekly digital newsletter called The Batch. Books He also wrote the book Machine Learning Yearning, a practical guide for those interested in machine learning, which he distributed for free. In December 2018, he wrote a sequel called AI Transformation Playbook. Ng contributed one chapter to Architects of Intelligence: The Truth About AI from the People Building it (2018) by the American futurist Martin Ford. Views on AI Ng believes that AI technology will improve the lives of people, not that it is an anathema that will "enslave" the human race. Ng believes the potential benefits of AI outweigh the threats, which he believes are exaggerated. He has stated that "Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!" A real threat, in his view, concerns the future of work: "Rather than being distracted by evil killer robots, the challenge to labour caused by these machines is a conversation that academia and industry and government should have." A particular goal of Ng's work is to "democratize" AI learning so that people can learn more about it and understand its benefits. Ng's stance on AI is shared by Mark Zuckerberg, but opposed by Elon Musk. 
In 2017, Ng said he supported basic income to allow the unemployed to study AI so that they can re-enter the workforce. He has stated that he enjoyed Erik Brynjolfsson and Andrew McAfee's "The Second Machine Age" which discusses issues such as AI displacement of jobs. See also Robot Operating System Latent Dirichlet allocation Google Brain Coursera References External links Homepage Ng's Quora profile Ng's Medium blog Academic Genealogy 1976 births Living people American computer businesspeople American computer scientists American education businesspeople American people of Hong Kong descent American roboticists American people of Chinese descent American technology chief executives American technology company founders American technology writers American venture capitalists Artificial intelligence researchers British emigrants to the United States British people of Hong Kong descent Businesspeople from California Businesspeople from London Carnegie Mellon University alumni Computer vision researchers Machine learning researchers Massachusetts Institute of Technology alumni People from Los Altos, California Raffles Junior College alumni Scientists from California Scientists from London Stanford University Department of Computer Science faculty Stanford University School of Engineering faculty UC Berkeley College of Engineering alumni Writers from California Writers from London Natural language processing researchers Baidu people Google employees
3617677
https://en.wikipedia.org/wiki/Orbcomm
Orbcomm
Orbcomm (stylized as ORBCOMM) is an American company that offers industrial Internet of things (IoT) and machine to machine (M2M) communications hardware, software and services designed to track, monitor, and control fixed and mobile assets in markets including transportation, heavy equipment, maritime, oil and gas, utilities and government. The company provides hardware devices, modems, web applications and data services delivered over multiple satellite and cellular networks. As of June 30, 2021, ORBCOMM has more than 2.3 million billable subscriber communicators, serving original equipment manufacturers (OEMs) such as Caterpillar Inc., Doosan Infracore America, Hitachi Construction Machinery Co., Ltd., John Deere, Komatsu Limited, and Volvo Construction Equipment, as well as other customers such as J. B. Hunt, C&S Wholesale Grocers, Canadian National Railways, C.R. England, Hub Group, KLLM Transport Services, Marten Transport, Swift Transportation, Target, Tropicana, Tyson Foods, Walmart and Werner Enterprises. ORBCOMM owns and operates a global network of 31 low Earth orbit (LEO) communications satellites and accompanying ground infrastructure including 16 gateway Earth stations (GESs) around the world. ORBCOMM is licensed to provide service in more than 130 countries and territories worldwide. History Founding and development of low Earth orbit satellite system The ORBCOMM low Earth orbit (LEO) system was conceived by Orbital Sciences Corporation (Orbital) in the late 1980s. In 1990, Orbital filed the world's first license application with the Federal Communications Commission (FCC) for the operation of a network of small LEO spacecraft to provide global satellite services of commercial messaging and data communications services via the company's ORBCOMM program. During the initial stages of the program, Orbital pursued a multi-pronged approach: regulatory approvals; ground infrastructure development and procurement of sites; modem development and country licensing. In 1992, the World Administrative Radio Conference (WARC) supported the spectrum allocation for non-voice, non-geostationary mobile-satellite service. With WARC approval, Orbital set up a specific ORBCOMM program for the development of satellites and ground infrastructure, and ORBCOMM became a wholly owned subsidiary of Orbital. In 1995, ORBCOMM was granted a full license to operate a network with up to 200,000 mobile Earth stations (MESs). ORBCOMM began procuring gateway Earth station (GES) locations and contracted with a division of Orbital Sciences, located in Mesa, Arizona, to develop and build four sets of GESs and associated spares. Land for the four GESs was procured or leased in Arizona, Washington, New York and Georgia. After the 1992 WARC approval, ORBCOMM signed contracts with three modem developers and manufacturers: Kyushu Matsushita Electric Company, a division of Panasonic; Elisra Electronic Systems, an Israeli company with expertise in electronic warfare systems; and Torrey Science & Technology, a small San Diego-based company with long ties to Orbital Sciences. Panasonic provided the first ORBCOMM-approved MES in March 1995. Elisra followed with the EL2000 in late 1995, and Torrey Science provided the ComCore 200 in April 1996. During the development of equipment, ORBCOMM also pursued licensing and regulatory approvals in several countries. By 1995, ORBCOMM had obtained regulatory approval in 19 countries, with a number of additional countries well into the regulatory process. 
ORBCOMM was also in initial negotiations with groups in Indonesia, and Italy for becoming ORBCOMM licensees, as well as GES operators in their respective regions. During the conceptual stages of the LEO satellite communications system, Orbital Sciences purchased a small company in Boulder, Colorado, specializing in small-satellite design. This company built the first three satellites in the ORBCOMM system: ORBCOMM X, Communications Demonstration Satellite (CDS) 1 and CDS 2. ORBCOMM X was lost after a single orbit. To validate the feasibility of commercially tracking and communicating with a LEO satellite, Orbital built an additional communications payload and flew this payload on an SR-71 in 1992. These tests were successful, and work on CDS 1 and 2 continued. CDS 1 and CDS 2 were launched in February and April 1992 respectively. These satellites were used to further validate the design of the network and were showcased in Orbital's plans to sign up an equity partner for the completion of the ORBCOMM System. In June 1992, Orbital created an equal partnership called ORBCOMM Global L.P. with Teleglobe Mobile Partners (Teleglobe Mobile), an affiliate of Teleglobe Inc., for the design and development of the LEO satellite system. Teleglobe Mobile invested $85 million in the project and also provided international service distribution. Orbital agreed to construct and launch satellites for the ORBCOMM system and to construct the satellite control center, the network control center and four U.S. gateway Earth stations. In April 1995, two satellites (F Plane) were launched, and in the summer the ORBCOMM global mobile data communications network was tested. Teleglobe Mobile invested an additional $75 million in the project that year and joined Orbital as a full joint-venture partner in ORBCOMM. In February 1996, ORBCOMM initiated the world's first commercial service for global mobile data communications provided by LEO satellites. ORBCOMM also raised an additional $170 million. In October 1996, ORBCOMM licensed Malaysian partner Technology Resources Industries Bhd. (TRI) to sell ORBCOMM's global two-way messaging service in Singapore, Malaysia and Brunei. TRI became the owner of a 15% stake in ORBCOMM, Teleglobe owning 35% and the rest held by Orbital. In December 1997, ORBCOMM launched eight satellites (A Plane). In 1998 ORBCOMM launched two satellites (G Plane) in February, eight satellites (B Plane) in August and eight satellites (C Plane) in September. After a short hiatus, ORBCOMM launched seven more satellites (D Plane) in December 1999. With the launch and operation of the C Plane satellites, ORBCOMM became the first commercial provider of global LEO satellite data and messaging communications services. ORBCOMM inaugurated full commercial service with its satellite-based global data communications network on November 30, 1998. In March 1998, the FCC expanded ORBCOMM's original license from 36 to 48 satellites. In January 2000, Orbital halted funding of ORBCOMM, and Teleglobe and Orbital signed a new partnership agreement with 67% ownership to Teleglobe and 33% to Orbital. In May 2000, Teleglobe ceased funding ORBCOMM. Like its voice-centric competitors Iridium and Globalstar, it filed for Chapter 11 protection, in September 2000. New ownership In 2001, a group of private investors purchased ORBCOMM and its assets out of an auction process, and ORBCOMM LLC was organized on April 4, 2001. 
On April 23, 2001, this group of investors acquired substantially all of the non-cash assets of ORBCOMM Global L.P. and its subsidiaries, which included the in-orbit satellites and supporting U.S. ground infrastructure equipment that the company owns today. At the same time, ORBCOMM LLC also acquired the FCC licenses required to own and operate the communications system from a subsidiary of Orbital Sciences Corporation, which was not in bankruptcy, in a related transaction. ORBCOMM issued a public offering of stock in November 2006. The company sold 9.23 million shares of common stock. In September, 2007, ORBCOMM Inc. was sued for its IPO prospectus containing inaccurate statements of material fact. It failed to disclose that demand for the company's products was weakening. In 2009, a payment of $2,450,000 was agreed. In September 2009, ORBCOMM signed a contract with SpaceX to launch ORBCOMM's next-generation OG2 satellite constellation. ORBCOMM launched its commercial satellite Automatic Identification System (AIS) service in 2009. AIS technology is used mainly for collision avoidance, but also for maritime domain awareness, search and rescue, and environmental monitoring. ORBCOMM leased the capabilities of two additional satellites, VesselSat-1 and VesselSat-2, launched in October 2011 and January 2012 respectively, for its AIS service from Luxspace. On July 14, 2014, ORBCOMM launched six next-generation OG2 satellites aboard a SpaceX Falcon 9 rocket from Cape Canaveral Air Force Station, Florida. In December 2015, the company launched eleven OG2 satellites from Cape Canaveral Air Force Station in Florida with the launch of the SpaceX Falcon 9 rocket. This dedicated launch marked ORBCOMM's second and final OG2 mission to complete its next-generation satellite constellation. In September 2021, the company announced the completion of its acquisition by GI Partners, in an all-cash transaction that values ORBCOMM at approximately $1.1 billion, including net debt. As a result, ORBCOMM is a privately held company, and its common stock is no longer listed on the Nasdaq Stock Market. Acquisitions Since 2011, ORBCOMM has acquired a number of companies including: StarTrak SystemsIn 2011 ORBCOMM acquired StarTrak and its ReeferTrak and GenTrak brands, including devices and applications for tracking refrigerated trailers, assets and gensets. StarTrak's customers included refrigerated unit manufacturers such as Carrier and Thermo King as well as companies such as Tropicana, Maersk Line, Prime Inc, CR England, FFE Transport, Inc. and Exel Transportation. PAR Logistics Management SystemsIn 2012, ORBCOMM acquired the assets of PAR Logistics Management Systems (PAR LMS), a subsidiary of PAR Technology Corporation and provider of asset and cargo tracking and monitoring equipment. MobileNetIn 2013, ORBCOMM acquired MobileNet, a GPS provider specializing in heavy equipment and rail support industries, with a platform geared toward heavy equipment Original Equipment Manufacturers (OEMs), dealers and fleet owners. MobileNet's customers included Doosan North America as well as rail companies Union Pacific, CSX and BNSF. GlobalTrakAlso in 2013, ORBCOMM acquired GlobalTrak, an information services company that uses networks, sensors and proprietary software platforms to provide container/vehicle tracking, awareness and intelligence for military, government and commercial customer applications. 
SENS Asset TrackingA third acquisition for ORBCOMM in 2013 was Comtech's Sensor Enabled Notification System (SENS) operation—a provider of one-way satellite products and services. The SENS system, which consists of satellite-based tracking devices, a network hub and an Internet-based back-office platform, enables data retrieval via the Globalstar satellite network. EuroscanIn 2014 ORBCOMM acquired Euroscan, a provider of refrigerated transportation temperature compliance recording systems used primarily for the transport of food and pharmaceuticals. As part of this transaction, in addition to acquiring Euroscan's distribution channel in Europe and other geographies, ORBCOMM acquired Ameriscan, Euroscan's North American subsidiary. At the time of the acquisition, Euroscan had a worldwide installed base of 200,000 recording units of which approximately 10,000 were wireless subscribers. InSync SoftwareIn 2015, ORBCOMM acquired InSync Software—a provider of Internet of Things (IoT) enterprise solutions and software applications. The acquisition expanded InSync's uniform software platform beyond RFID, cellular and sensor technologies to include satellite. InSync's software enables sensor-driven asset tracking and remote monitoring applications. SkyWave Mobile CommunicationsAlso in 2015 ORBCOMM acquired SkyWave Mobile Communications—an M2M service provider on the Inmarsat global L-band satellite network. The addition of SkyWave's higher bandwidth, lower-latency satellite products and services leverage IsatDataPro (IDP) technology, which is now jointly owned by ORBCOMM and Inmarsat. With the acquisition, SkyWave brought ORBCOMM more than 250,000 subscribers, 400 channel partners, and annualized revenues of over $60 million. WAM TechnologiesIn 2015 ORBCOMM also acquired WAM Technologies LLC (WAM), an affiliate of Mark-It Services, Inc. and provider of remote management and control solutions for ocean transport refrigerated containers and intermodal equipment. With the acquisition of WAM, ORBCOMM's cold chain monitoring solutions now include trailers, railcars, gensets and sea containers. SkygisticsIn 2016 ORBCOMM acquired Skygistics (Pty) Ltd. and its South African and Australian subsidiaries. Skygistics provides satellite and cellular connectivity options as well as telematics solutions and adds distribution for ORBCOMM's products in South Africa and 22 other African nations. InthincIn 2017 ORBCOMM acquired Inthinc, Inc., a provider of fleet management, vehicle telematics, driver safety and regulatory compliance solutions to a broad range of industrial enterprises. Inthinc provides an entry point for ORBCOMM into the vehicle fleet management market. Blue Tree SystemsAlso in 2017, ORBCOMM acquired Blue Tree Systems Limited, a provider of transportation management solutions across multiple classes of assets that include trucks, refrigerated straight trucks as well as refrigerated and dry trailers. Blue Tree offers country-specific compliance solutions in both North America and Europe. Blue Tree's solutions meet the requirements for the Electronic Logging Device (ELD) Mandate regulations, which will require companies to replace drivers’ paper log-books with electronic hours of service applications by December 18, 2017. Satellites The first-generation OG1 satellites each weigh . Two disc-shaped solar panels articulate in 1-axis to track the sun and provide 160 watts of power. 
Communication with subscriber units is done using SDPSK modulation at 4800 bit/s for the downlink and 2400 bit/s for the uplink. Each satellite has a 56 kbit/s backhaul that utilises the popular TDMA multiplexing scheme and QPSK modulation. ORBCOMM is the only current satellite licensee operating in the 137-150 MHz VHF band, which was allocated globally for "Little LEO" systems. Several such systems were planned in the early to mid-1990s but ORBCOMM was the only one to successfully launch. In the continental United States, ORBCOMM statistically relays 90% of the text messages within six minutes, but gaps between satellites can result in message delivery times of 15 minutes or more. ORBCOMM reported during an earnings report call in early 2007 that 50% of subscriber-initiated reports (messages of six bytes in size) were received in less than one minute, 90% in less than 4 minutes and 98% in less than 15 minutes. With the current constellation of ORBCOMM satellites, there is likely to be a satellite within range of almost any spot on Earth at any time of the day or night. Every satellite has an on-board GPS receiver for positioning. Typical data payloads are 6 bytes to 30 bytes, adequate for sending GPS position data or simple sensor readings. A total of 35 satellites were launched by ORBCOMM Global in the mid to late 1990s. Of the original 35, a total of 24 remain operational today, according to company filings. The plane F polar satellite, one of the original prototype first-generation satellites launched in 1995, was retired in April 2007 due to intermittent service. Two additional satellites (one from each of Plane B and Plane D) were retired in 2008 also due to intermittent service. The other five satellites that are not operational experienced failures earlier. The absence of these eight satellites can increase system latency and decrease overall capacity. ORBCOMM has invested in replacement satellites as the first generation is at or nearing end of life. On 19 June 2008 six additional ORBCOMM satellites were launched with the Cosmos-3M rocket: one ORBCOMM CDS weighing 80 kg, and five ORBCOMM Quick Launches weighing 115 kg each. These new satellites were built by German OHB System AG (platform) and by Orbital Sciences Corporation (payload) and included a secondary AIS. Design and production of the satellite platform was subcontracted by OHB System to Russian KB Polyot. On November 9, 2009, ORBCOMM filed a report to the US Securities and Exchange Commission stating that since launch, communications capability for three of the quick-launch satellites and the CDS has been lost. The failed satellites experienced attitude control system anomalies as well as anomalies with its power systems, which resulted in the satellites not pointing towards the sun and the earth as expected and as a consequence have reduced power generation. The company filed a $50 million claim with its insurers covering the loss of all six satellites and received $44.5 million in compensation. OG2 On 3 September 2009 a deal was announced between ORBCOMM and Space Exploration Technologies (SpaceX) to launch 18 second-generation satellites with SpaceX launch vehicles between 2010 and 2014. SpaceX originally planned to use Falcon 1e rocket, but on March 14, 2011, it was announced that SpaceX will use Falcon 9 to carry the first two ORBCOMM next-generation OG2 satellites to orbit in 2011. On Oct. 
7, 2012, the first SpaceX Falcon 9 launch of a prototype OG2 ORBCOMM communications satellite from Cape Canaveral failed to achieve proper orbit and the company filed a $10 million claim with its insurers. The ORBCOMM satellite was declared a total loss and burned up in the atmosphere upon rentry on October 10, 2012. On July 14, 2014, ORBCOMM launched six next-generation OG2 satellites aboard a SpaceX Falcon 9 rocket from Cape Canaveral Air Force Station, Florida. In September 2014, ORBCOMM announced that, after in-orbit testing, the six satellites had been properly spaced within their orbital planes and were processing over 20% of the network's M2M traffic. In June 2015, the company lost communication with one of the in-orbit OG2 satellites. The company recorded an impairment charge of $12.7 million to write-off the net book value of this satellite as of June 30, 2015. The company stated that the loss of this one satellite is not expected to have a material adverse effect on network communications services. In October 2015, the company announced that SpaceX plans to launch eleven OG2 satellites from Cape Canaveral Air Force Station in Florida on the next launch of the SpaceX Falcon 9 rocket. The satellites were deployed on December 21, 2015. This dedicated launch marked ORBCOMM's second and final OG2 mission to complete its next-generation satellite constellation. Compared to its current OG1 satellites, ORBCOMM's OG2 satellites are designed for faster message delivery, larger message sizes and better coverage at higher latitudes, while increasing network capacity. In addition, the OG2 satellites are equipped with an Automatic Identification System (AIS) payload to receive and report transmissions from AIS-equipped vessels for ship tracking and other maritime navigational and safety efforts. Network services ORBCOMM provides satellite data services. As of May 2016, ORBCOMM has more than 1.6 million billable subscriber communicators. ORBCOMM has control centers in the United States, Brazil, Japan, and South Korea, as well as U.S. ground stations in New York, Georgia, Arizona, Washington and international ground stations in Curaçao, Italy, Australia, Kazakhstan, Brazil, Argentina, Morocco, Japan, South Korea, and Malaysia. Plans for additional ground station locations are underway. The ORBCOMM satellite network is best suited for users who send small amounts of data. To avoid interference, terminals are not permitted to be active more than 1% of the time, and thus they may only execute a 450 ms data burst twice every fifteen minutes. The latency inherent in ORBCOMM's network design prevents it from supporting certain safety-critical applications. ORBCOMM's acquisition of SkyWave Mobile Communications in January 2015 gave the company access to higher bandwidth, lower-latency satellite products and services that leverage IsatData Pro (IDP) technology over Inmarsat's global L-band satellite network. ORBCOMM's direct competition includes Globalstar's simplex services (which ORBCOMM also resells) and L-band leased capacity services such as those offered by SkyBitz. ORBCOMM's most significant competitor is Iridium Communications, which offers the Iridium SBD service, which features data packet, latency and antenna capabilities similar to that of IDP technology, which is now jointly owned by ORBCOMM and Inmarsat. ORBCOMM satellite services can be easily integrated with business applications. 
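For a rough sense of scale, the figures quoted above — 2,400 bit/s uplink and 4,800 bit/s downlink, 6- to 30-byte payloads, and at most two 450 ms bursts every fifteen minutes — can be turned into approximate air times and a duty-cycle check. The short C++ sketch below does that arithmetic; it ignores protocol framing overhead, so the transmission times are lower bounds.

#include <cstdio>

int main()
{
    const double uplink_bps   = 2400.0;   // subscriber unit -> satellite
    const double downlink_bps = 4800.0;   // satellite -> subscriber unit

    // Raw transmission time for the typical 6- to 30-byte payloads.
    for (int payload_bytes : {6, 30})
    {
        double up_ms   = payload_bytes * 8 / uplink_bps   * 1000.0;
        double down_ms = payload_bytes * 8 / downlink_bps * 1000.0;
        std::printf("%2d-byte payload: ~%.0f ms uplink, ~%.0f ms downlink\n",
                    payload_bytes, up_ms, down_ms);
    }

    // Duty-cycle check: two 450 ms bursts in a fifteen-minute window.
    double active_seconds = 2 * 0.450;
    double window_seconds = 15.0 * 60.0;
    std::printf("duty cycle: %.2f%% active (limit: 1%%)\n",
                active_seconds / window_seconds * 100.0);
    return 0;
}

The result — roughly 20 to 100 ms of air time per message and a duty cycle well under a tenth of a percent — is consistent with a network tuned for short, infrequent reports rather than continuous data streams.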
Customer data can be retrieved or auto-forwarded via SMTP or HTTP/XML feed directly over the Internet or through a dedicated link. ORBCOMM also partners with seven different cellular providers to offer wireless connectivity, cellular airtime data plans and SIM cards for M2M and IoT applications. ORBCOMM's other network service business is Automatic Identification System, or AIS, which is a widely deployed system used to track ocean vessels. Six satellites with AIS capability were launched in June 2008, referred to as the Quick Launch satellites. However, all six satellites eventually failed prematurely. When ORBCOMM's next-generation satellites launched on July 14, 2014, each one was equipped with an Automatic Identification System (AIS) payload to receive and report transmissions from AIS-equipped vessels for ship tracking and other maritime applications. ORBCOMM combines its satellite AIS data with a variety of terrestrial feeds to track over 150,000 vessels daily for over 100 customers in a variety of government and commercial organizations. Hardware Devices ORBCOMM offers cellular, satellite and dual-mode hardware devices for IoT and M2M tracking, monitoring and control applications, for a variety of industry applications. GT 1200: a dry van trailer tracking device with solar panel and rechargeable batteries that can be connected to door and cargo sensors. Available with cellular-only, satellite-only or dual cellular-satellite communication options. Chassis tracking option also available. PT 6000: a cellular or dual-mode refrigerated trailer management device providing fuel and temperature monitoring and control, maintenance, logistics and regulatory compliance. PT 7000: a cellular or dual-mode heavy equipment tracking and monitoring device providing access to real-time data and analytics. ST 6000 Series: programmable satellite communications terminals using the two-way Inmarsat IsatData Pro satellite service for remotely managing fixed and mobile assets. ST 9100 : an integrated, dual-mode communications terminal that delivers connectivity to mobile and fixed assets using either cellular or satellite communications over the two-way IsatData Pro satellite data service. ST 2100 : a lower cost, non-programmable terminal delivering satellite connectivity over the two-way IsatData Pro network. IDP-800: programmable satellite communications device with an integrated battery compartment that uses the two-way IsatData Pro satellite data service. Designed for tracking unpowered equipment in both land and maritime applications. Euroscan X3: a temperature recorder and printer designed to provide proof of an uninterrupted cold chain from point of origin to destination for food, pharmaceuticals and other temperature controlled cargo. Euroscan MX2: a HACCP-compliant two-way refrigerated tracking device with embedded cellular communication for the transport refrigeration market. CT 3000 Series: telematics device for use with marine reefer container monitoring. Communicates with OEM reefer controllers to read, monitor and remotely control the reefer container. BT 500: truck management solution providing GPS tracking, data collection from the CANbus and connection to third-party apps and other devices. GT 1020: cellular-based telematics device for general asset management application. Pro-400: a cellular or dual-mode device that uses verbal coaching to monitor driver behaviour and alert drivers and managers when actions that may compromise safety are detected. 
Modems ORBCOMM's interchangeable OG2 (OG2-M and OG2-GPS) and OG-ISAT (IsatData Pro L-Band) satellite modems feature an identical footprint (smaller than a credit card), connectors, power input, programming environment, communication interface and protocols. Sensors ORBCOMM offers a variety of sensors for IoT applications including a multi-zone cargo sensor, trailer or container door sensor, hook/unhook trailer sensor, fuel level sensor and temperature sensor. Antennas Antennas for ORBCOMM's IDP service have a small form factor and are similar to the ones used in the other L-band (1.5-1.6 GHz) satellite networks (Iridium, Globalstar). ORBCOMM's spectrum of 137 to 150 MHz is at a much lower frequency, and requires a much larger antenna than other networks. Most antennas are basic "whip" antennas that can be several feet long. Smaller, more compact designs are available but with performance trade-offs. Solutions Platforms ORBCOMM offers a variety of web applications for different industry markets: ORBCOMM platform: unified analytics and reporting platform for managing trucks, dry van trailers and refrigerated transportation assets ReeferConnect: solution for remote refrigerated container management, enabling real-time tracking and two-way control of refrigerated containers and cargo. VesselConnect: solution for local and remote management and control of refrigerated containers on board a sea vessel. Coldchainview: a web application compatible with Euroscan-branded products, enabling cold chain compliance via monitoring of food, pharmaceuticals and other refrigerated assets. FleetEdge: a heavy equipment telematics application, providing location data, operational status as well as analytic, predictive and diagnostic tools for fleet managers. AssetWatch Mining: Real-time locating system (RTLS) Solution for above and underground mine asset tracking. Enterprise applications ORBCOMM also offers a series of enterprise applications: ORBCOMMconnect: ORBCOMM's subscriber management and M2M service delivery platform for managing assets across multiple networks and technologies. Enables multi-network subscriber management, up to four levels of account structure, alerts and automation, a mobile app integration. DeviceCloud: A single interface for managing multiple networks and devices, where connectivity and device-specific messaging is abstracted to a common interface and messaging API. ORBCOMM Enterprise Connect: a 4G xLTE wireless failover and business continuity solution for distributed enterprises and retail locations, providing WAN connectivity for M2M and IoT applications. iApp: a cloud-based platform for enterprise application enablement. Enables the development, deployment and management of RFID and sensor-based IoT applications. The iApp platform was acquired as part of ORBCOMM's acquisition of InSync Software in January 2015. Military Contracting On December 10, 2020, US Army Contracting Command, Rock Island Arsenal, Illinois, contracted ORBCOMM for transponders. 
Awards and recognition 2021 IoT Evolution Industrial IoT Product of the Year Award - ST 9100 2021 IoT Breakthrough Award - M2M Vehicle Telematics Company of the Year 2020 Internet Telephony Unified Communications Excellence Award - BT 500 in-cab solution 2020 Food Logistics Top 100 List 2020 Compass Intelligence IoT Innovator Award - ORBCOMM Smart Grid monitoring solution 2020 Compass Intelligence IoT Innovator Award - ORBCOMM Smart Grid monitoring solution 2020 IoT Edge Computing Excellence Award - ORBCOMM Platform 2020 IoT Evolution Product of the Year Award - GT 1200 series trailer tracking devices 2019 Gold Stevie Award: New Transportation Products - CT 3000 Container Telematics Solution 2017 Compass Intelligence A-List in IoT & M2M Award: IoT Satellite Service Provider of the Year 2017 Connected World IoT Innovations Award: AssetWatch Mining 2016 IoT Breakthrough M2M Satellite Service Provider of the Year 2016 Deloitte Technology Fast 500 2016 IoT Evolution Asset Tracking Awards: CargoWatch Secure and Cold Chain Telematics Solution 2016 IoT Evolution Product of the Year Awards: PT 7000 & ORBCOMMconnect 2016 Smart Grid Product of the Year Award 2016 Connected World IoT Innovations Awards: ORBCOMMconnect 2016 Connected World Award: iApp Platform 2016 MSUA Innovation Award: OG2 Satellite Modem List 2016 Compass Intelligence Satellite M2M Service Provider of the Year Connected World 2016 CW List American Business Awards - 2015 Silver Stevie Awards: Telecommunications Company of the Year, Most Innovative Tech Company of the Year, Most Innovative Company of the Year, Communications Team of the Year 2015 Internet of Things (IoT) Evolution Product of the Year Awards: GT 1000 & SkyWave IDP-782 2015 Connected World IoT Innovations Awards: GT 1100 & SkyWave IDP-782 2015 MSUA Innovation Award: SkyWave IsatData Pro Service 2014 IOT Evolution Excellence Award Compass Intelligence 2015 A-List in M2M Award 2014 M2M Evolution Magazine Asset Tracking Award: GT 1100 2014 M2M Evolution Magazine Product of the Year Award: GT 2300 Compass Intelligence 2014 A-List in M2M Award 2013 CTIA E-Tech Award: GT 1100 See also Mobile-satellite service Satellite phone Globalstar Globalsat Group Gonets Gurtam ICO Global Communications Inmarsat Iridium Satellite LLC O3b Networks Solaris Mobile Sky and Space Global SkyTerra SkyWave Mobile Communications TerreStar Corporation References Communications satellite operators Satellite Internet access Telecommunications companies of the United States Companies that filed for Chapter 11 bankruptcy in 2000 Companies formerly listed on the Nasdaq
638263
https://en.wikipedia.org/wiki/Higher%20education%20in%20Mauritius
Higher education in Mauritius
Higher education in Mauritius includes colleges, universities and other technical institutions. Public university education has been free to students since 2019. The sector is managed by the Higher Education Commission (HEC) which has the responsibility for allocating public funds, and fostering, planning and coordinating the development of post-secondary education and training. Formerly the Tertiary Education Commission, in 2020 it was reformed into the HEC and a separate Quality Assurance Authority (QAA) for auditing of qualifications. Universities The University of Mauritius Starting as the College of Agriculture, the University of Mauritius, established in 1965, dominates the Tertiary Education Sector locally. Originally, it had three Schools, namely Agriculture, Administration and Industrial Technology. It has since expanded to comprise five faculties, namely Agriculture, Engineering, Law and Management, Science, and Social Studies & Humanities. It has a Centre for Medical Research and Studies and hosts the SSR Medical College (Sir Seewoosagur Ramgoolam Medical College). It has a Centre for Distance Education, a Centre for Information Technology and Systems, and a Consultancy Centre. The UoM is expanding, with a student growth rate of about 10% annually. Programmes have changed steadily from sub-degree certificate/diploma levels to undergraduate and taught masters Programmes, as well as research at postgraduate level. University of Technology, Mauritius The University of Technology, Mauritius (UTM) Act was promulgated in May 2000 and became operational in September 2000. UTM has three schools: the School of Business Informatics and Software Engineering, the School of Public Sector Policy and Management, and the School of Sustainable Development Science. Open University of Mauritius The Open University of Mauritius (OU) was established on 12 July 2012 according to the Open University of Mauritius ACT 2010. OU delivers education to learners who wish to study full-time as well as to those who are unable to be physically present on campus. With flexible study options, its learners can study from home, work, or anywhere in the world. The Mauritius College of the Air, which was established in 1971, has integrated with the Open University of Mauritius in July 2012. OU is a member of the Association of Commonwealth Universities, International Council for Open and Distance Education, and African Council for Distance Education. In collaboration with other universities it offers courses both on open/distance/blended and full-time learning modes. There are three campuses in Mauritius and one in Rodrigues. Distance learners watch or listen to materials supplied, work on course activities and assignments with support from the tutor who is an email away. Tutorials are organized but they are mostly optional and give a chance to meet the tutors and fellow learners. Through its virtual classrooms, international learners also participate in interactive sessions conducted by its resource persons. It also offers short courses that help to improve the employability skills. OU offers qualifications ranging from short employability courses, B.Sc./B.Ed., M.Sc./M.Ed./M.A, MBA to DBA/Ph.D. to fields including Management, Business Administration, Human Resource Management/Development, Education, Finance, Accounting, Taxation, Law, Graphics Design, Journalism, Multimedia, Audio-visual Production, Translation Studies, Mathematics, Logistics, Transport, French and English. 
OU conducts corporate training for private and public institutions. Employees are empowered through continuous professional development courses. OU has a School of Public Health; a Centre for Research on Interculturality; and a Big Data Research Centre. http://www.open.ac.mu Université des Mascareignes The Université des Mascareignes was founded as the country's fourth public university in 2012, merging the IST (Institut Supérieur de Technologie) of Camp Levieux and the Swami Dayanand Institute of Management (SDIM). Foreign universities Greenwich University Pakistan, Mauritius Branch Campus The Mauritius Branch Campus of Greenwich University (Pakistan) is located at 51, Ebene, Cybercity, Mauritius and is fully accredited by the Higher Education Commission of Mauritius, offering undergraduate and graduate degrees in Business Administration. The campus is also approved by the Mauritius Qualifications Authority (MQA). Greenwich University has been providing higher education in Pakistan for the past 30 years. The University is recognized by the Higher Education Commission (HEC) of Pakistan and is ranked among the top ten business schools of Pakistan. Greenwich University has a campus in the Republic of Mauritius, duly recognized by the HEC of Mauritius and the MQA, making it the only university from Pakistan with an international campus duly recognized by accrediting bodies. It is a member of NAFSA: Association of International Educators, the International Association of Universities (IAU), the Association to Advance Collegiate Schools of Business (AACSB), the Asia Pacific Quality Network (APQN), the International Network for Quality Assurance Agencies in Higher Education (INQAAHE), the Association of Commonwealth Universities (ACU) and the Association of MBAs (AMBA). Aberystwyth University Mauritius Branch Campus Aberystwyth University (Mauritius Branch Campus) was built at Quartier-Militaire, registered with the HEC and opened in 2016. All the courses at undergraduate and postgraduate level were accredited by the UK Quality Assurance Agency for Higher Education. Two years later the university closed its Mauritius campus to new enrolments due to low enrolment numbers. Other institutions The Mauritius Institute of Education Founded in 1973, the MIE was charged with teacher education, research in education and curriculum development. The role of the MIE as a curriculum development centre has been phased out, so that it is now predominantly involved in teacher training and educational research. However, it continued to play a role in curriculum planning and development, and the MIE has again been entrusted with responsibility for curriculum development as from October 2010. There are five schools at MIE: Applied Sciences, Education, Science and Mathematics, Arts and Humanities and a Centre for Open and Distance Learning. It offers training to school teachers from pre-primary, primary and secondary schools, as well as managerial cadres, in programmes ranging from Certificate and Diploma courses to the Post Graduate Certificate in Education and Post Graduate Diplomas. It offers B.Ed as well as Masters programmes in Education, in collaboration with the University of Mauritius and the University of Brighton, UK respectively. It offers Doctoral programmes in association with the University of KwaZulu-Natal, South Africa and the University of Brighton, UK. 
The MIE, through its Centre for Open & Distance Learning, is involved in the Sankore Project for digitising curriculum materials for primary and secondary schools, and the introduction of interactive white board technology in schools. The Mahatma Gandhi Institute MGI was established in 1970 as a joint government of Mauritius and government of India venture for the promotion of education and culture, with emphasis on Indian culture and traditions. It runs, within the tertiary set-up, programmes in Indian Studies, Indological Studies, Performing Arts, Fine Arts, Chinese and Mauritian Studies. MGI has three main schools operating at the tertiary level: the School of Indian Studies, the School of Music and Fine Arts, and the School of Mauritian and Area Studies. It also runs diploma and certificate level programmes, degree level programmes in Languages, Fine Arts and Performing Arts, in collaboration with the UoM. A secondary school and the Gandhian Basic School operate within the ambit of the MGI. The Mauritius College of the Air MCA was established in 1971 to promote education, arts and science and culture in Mauritius through mass media. When the MCA statute was re-enacted in 1985, distance education was maintained as a major strategy to meet these objectives. Merged with the Audio-Visual Centre of the Ministry of Education and Science in 1986, the MCA has until recently been catering mainly for the primary and secondary education sector through the production of educational programmes for broadcast on radio and television. The MCA has also been producing educational materials for non-formal or continuing education, for non-broadcast use. Since the beginning of 1995, it has been involved in dispensing tertiary level programmes in collaboration with overseas institutions through the distance mode. The MCA is being reconfigured as the Open University of Mauritius. Rabindranath Tagore Institute Set up in December 2002, the Rabindranath Tagore Institute has a cultural vocation and operates under the aegis of the Mahatma Gandhi Institute. It is still in an early phase of development. The Polytechnics Two polytechnic institutions exist in the country. The Swami Dayanand Institute of Management runs diploma level Programmes in Information Technology, Administration and Accounting; such diploma programmes were formerly provided by the UoM. The Institut Supérieur de Téchnologie offers diploma level programmes (Brevét de Téchnicien Supérieur) in Electro-Technics, Mecatronics and Building Engineering. All the programmes are run on a full-time basis. Technical School Management Trust Fund The TSMTF was created in 1990 to manage the Polytechnics. It is administered by a Board. Industry Advisory Committees, composed of representatives of both the public and private sectors, are appointed in respect of the programmes. The committees function is to: establish programme objectives, curriculum content and delivery modes; establish terminal standards and certification; prescribe training equipment, hardware and software; prescribe training facilities and environment; advise on industrial training attachments; review programme results and diploma holders’ employment performance; monitor and review market demand; review and upgrade programmes. Mauritius Institute of Training and Development (Ex:IVTB) The IVTB was set up in 1988 to promote vocational education and training with the purposes of supplying workforce for the industrial, services and domestic sectors. 
Most of the programmes are of a vocational nature, leading to the National Trade Certificate (levels 3 and 2). As from 1998 the IVTB began offering tertiary-level programmes at the levels of certificate and diploma in areas including Hotel Management, Automation and Information Technology. Since 2004, the IVTB has introduced the National Trade Certificate Foundation Course for students who have completed their Prevocational Education Level 3. The course teaches craftsmanship. The IVTB has also introduced other courses such as IC3 for students of National Trade Certificate Level 2 and 3. The Mauritius Institute of Health The MIH was set up in 1989 to cater for the training needs of health professionals, local and regional. It organizes courses and programmes, mostly of short duration, for medical and para-medical personnel. Private institutions/distance education More than 35 private institutions and organisations offer programmes in Management, Accountancy and Information Technology. Most of these private institutions are local counterparts of overseas institutions and offer programmes ranging from sub-degree to postgraduate level through a mixed-mode system, encompassing both distance learning and face-to-face tutorials. A majority of the examinations are conducted by the Mauritius Examinations Syndicate (MES) and a few are organised and invigilated by the overseas institutions themselves in collaboration with the local partner. Key players include the Charles Telfair Institute, MALEM, the Mauritius Employers Federation and the Mauritius Chamber of Commerce & Industry. University of the Indian Ocean The UIO, established in January 1998 under the aegis of the Indian Ocean Commission (IOC), is a network of tertiary education and research institutions of the five member states, namely the Comoros, Madagascar, Mauritius, Réunion and Seychelles. It offers tertiary-level programmes of a regional vocation in the five member countries. During its three-year pilot phase, the secretariat of the UIO was based in Réunion. In line with a recent decision, the seat of the UIO will rotate among member states. Institut de la Francophonie pour L’Entrepreneuriat IFE came into operation in 1999, within the context of an agreement signed between the Ministry of Education and Scientific Research and the "Association des Universités Partiellement ou Entièrement de Langue Française et L’Université des Réseaux d’Expression Française". It offers Master's and doctoral programmes and undertakes research in entrepreneurship and related fields with a regional vocation. Sir Seewoosagur Ramgoolam (SSR) Medical College The SSRMC was created in 1999 and is affiliated with the University of Mauritius. It is situated at Belle Rive, near the city of Curepipe. It caters for local and overseas students, from India, South Africa, Malaysia, the Gulf and other Indian Ocean Rim countries. A report from an international monitoring committee highlighted that the teaching room of the medical college is inadequate and badly sited for clinical teaching. The five-year degree costs US$50,000. Approximately 20% of seats are allocated to local students on the five-year MBBS programme, which is divided into ten semesters of six months each, and on the four-year BDS course, which is followed by a one-year compulsory rotatory internship. The college offers postgraduate (MD/MS) courses in five specialities: Medicine, Surgery, Radiodiagnosis, Anaesthesiology, and Ophthalmology.
The minimum entry requirement is completion of both Cambridge SC and Cambridge HSC examinations, with a minimum requirement of AAB at HSC level. The MBBS degree of SSR is recognised by the Medical Board of California, IMED/ECFMG USA and the World Health Organisation. Indian Ocean Dental School and Hospital The Indian Ocean Dental School and Hospital, managed by the R.F. Gandhi A.K. Trust Limited, began in 2003. It is affiliated to Bhavnagar University, Gujarat, India. The institute is the key provider of dental education in the region, providing the BDS programme. The programme lasts four years plus one year of internship. Further details are available at www.iodsh.com. See also Higher education List of tertiary institutions in Mauritius Indigenous education References External links Higher Education Commission Ministry of Tertiary Education, Science, Research and Technology Human Resource Development Council Mauritius Qualifications Authority Mauritius Research and Innovation Council Human Resource, Knowledge and Arts Development Fund Education in Mauritius Mauritius
46380
https://en.wikipedia.org/wiki/Coaxial%20cable
Coaxial cable
Coaxial cable, or coax, is a type of electrical cable consisting of an inner conductor surrounded by a concentric conducting shield, with the two separated by a dielectric (insulating material); many coaxial cables also have a protective outer sheath or jacket. The term coaxial refers to the inner conductor and the outer shield sharing a geometric axis. Coaxial cable is a type of transmission line, used to carry high-frequency electrical signals with low losses. It is used in such applications as telephone trunk lines, broadband internet networking cables, high-speed computer data buses, cable television signals, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line. Coaxial cable was used in the first (1858) and following transatlantic cable installations, but its theory was not described until 1880 by English physicist, engineer, and mathematician Oliver Heaviside, who patented the design in that year (British patent No. 1,407). Applications Coaxial cable is used as a transmission line for radio frequency signals. Its applications include feedlines connecting radio transmitters and receivers to their antennas, computer network (e.g., Ethernet) connections, digital audio (S/PDIF), and distribution of cable television signals. One advantage of coaxial over other types of radio transmission line is that in an ideal coaxial cable the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors. This allows coaxial cable runs to be installed next to metal objects such as gutters without the power losses that occur in other types of transmission lines. Coaxial cable also provides protection of the signal from external electromagnetic interference. Description Coaxial cable conducts electrical signals using an inner conductor (usually a solid copper, stranded copper or copper-plated steel wire) surrounded by an insulating layer and all enclosed by a shield, typically one to four layers of woven metallic braid and metallic tape. The cable is protected by an outer insulating jacket. Normally, the outside of the shield is kept at ground potential and a signal-carrying voltage is applied to the center conductor. The advantage of coaxial design is that, with differential-mode, equal push-pull currents on the inner conductor and the inside of the outer conductor, the signal's electric and magnetic fields are restricted to the dielectric, with little leakage outside the shield. Further, electric and magnetic fields outside the cable are largely kept from interfering with signals inside the cable, if unequal currents are filtered out at the receiving end of the line. This property makes coaxial cable a good choice both for carrying weak signals that cannot tolerate interference from the environment and for stronger electrical signals that must not be allowed to radiate or couple into adjacent structures or circuits. Larger diameter cables and cables with multiple shields have less leakage. Common applications of coaxial cable include video and CATV distribution, RF and microwave transmission, and computer and instrumentation data connections. The characteristic impedance of the cable (Z0) is determined by the dielectric constant of the inner insulator and the radii of the inner and outer conductors.
In radio frequency systems, where the cable length is comparable to the wavelength of the signals transmitted, a uniform cable characteristic impedance is important to minimize loss. The source and load impedances are chosen to match the impedance of the cable to ensure maximum power transfer and minimum standing wave ratio. Other important properties of coaxial cable include attenuation as a function of frequency, voltage handling capability, and shield quality. Construction Coaxial cable design choices affect physical size, frequency performance, attenuation, power handling capabilities, flexibility, strength, and cost. The inner conductor might be solid or stranded; stranded is more flexible. To get better high-frequency performance, the inner conductor may be silver-plated. Copper-plated steel wire is often used as an inner conductor for cable used in the cable TV industry. The insulator surrounding the inner conductor may be solid plastic, a foam plastic, or air with spacers supporting the inner wire. The properties of the dielectric insulator determine some of the electrical properties of the cable. A common choice is a solid polyethylene (PE) insulator, used in lower-loss cables. Solid Teflon (PTFE) is also used as an insulator, and exclusively in plenum-rated cables. Some coaxial lines use air (or some other gas) and have spacers to keep the inner conductor from touching the shield. Many conventional coaxial cables use braided copper wire forming the shield. This allows the cable to be flexible, but it also means there are gaps in the shield layer, and the inner dimension of the shield varies slightly because the braid cannot be flat. Sometimes the braid is silver-plated. For better shield performance, some cables have a double-layer shield. The shield might be just two braids, but it is more common now to have a thin foil shield covered by a wire braid. Some cables may invest in more than two shield layers, such as "quad-shield", which uses four alternating layers of foil and braid. Other shield designs sacrifice flexibility for better performance; some shields are a solid metal tube. Those cables cannot be bent sharply, as the shield will kink, causing losses in the cable. When a foil shield is used a small wire conductor incorporated into the foil makes soldering the shield termination easier. For high-power radio-frequency transmission up to about 1 GHz, coaxial cable with a solid copper outer conductor is available in sizes of 0.25 inch upward. The outer conductor is corrugated like a bellows to permit flexibility and the inner conductor is held in position by a plastic spiral to approximate an air dielectric. One brand name for such cable is Heliax. Coaxial cables require an internal structure of an insulating (dielectric) material to maintain the spacing between the center conductor and shield. The dielectric losses increase in this order: Ideal dielectric (no loss), vacuum, air, polytetrafluoroethylene (PTFE), polyethylene foam, and solid polyethylene. An inhomogeneous dielectric needs to be compensated by a non-circular conductor to avoid current hot-spots. While many cables have a solid dielectric, many others have a foam dielectric that contains as much air or other gas as possible to reduce the losses by allowing the use of a larger diameter center conductor. Foam coax will have about 15% less attenuation but some types of foam dielectric can absorb moisture—especially at its many surfaces — in humid environments, significantly increasing the loss. 
Supports shaped like stars or spokes are even better but more expensive and very susceptible to moisture infiltration. Still more expensive were the air-spaced coaxials used for some inter-city communications in the mid-20th century. The center conductor was suspended by polyethylene discs every few centimeters. In some low-loss coaxial cables such as the RG-62 type, the inner conductor is supported by a spiral strand of polyethylene, so that an air space exists between most of the conductor and the inside of the jacket. The lower dielectric constant of air allows for a greater inner diameter at the same impedance and a greater outer diameter at the same cutoff frequency, lowering ohmic losses. Inner conductors are sometimes silver-plated to smooth the surface and reduce losses due to skin effect. A rough surface extends the current path and concentrates the current at peaks, thus increasing ohmic loss. The insulating jacket can be made from many materials. A common choice is PVC, but some applications may require fire-resistant materials. Outdoor applications may require the jacket to resist ultraviolet light, oxidation, rodent damage, or direct burial. Flooded coaxial cables use a water-blocking gel to protect the cable from water infiltration through minor cuts in the jacket. For internal chassis connections the insulating jacket may be omitted. Signal propagation Twin-lead transmission lines have the property that the electromagnetic wave propagating down the line extends into the space surrounding the parallel wires. These lines have low loss, but also have undesirable characteristics. They cannot be bent, tightly twisted, or otherwise shaped without changing their characteristic impedance, causing reflection of the signal back toward the source. They also cannot be buried or run along or attached to anything conductive, as the extended fields will induce currents in the nearby conductors causing unwanted radiation and detuning of the line. Standoff insulators are used to keep them away from parallel metal surfaces. Coaxial lines largely solve this problem by confining virtually all of the electromagnetic wave to the area inside the cable. Coaxial lines can therefore be bent and moderately twisted without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them, so long as provisions are made to ensure differential mode signal push-pull currents in the cable. In radio-frequency applications up to a few gigahertz, the wave propagates primarily in the transverse electric magnetic (TEM) mode, which means that the electric and magnetic fields are both perpendicular to the direction of propagation. However, above a certain cutoff frequency, transverse electric (TE) or transverse magnetic (TM) modes can also propagate, as they do in a hollow waveguide. It is usually undesirable to transmit signals above the cutoff frequency, since it may cause multiple modes with different phase velocities to propagate, interfering with each other. The outer diameter is roughly inversely proportional to the cutoff frequency. A propagating surface-wave mode that does not involve or require the outer shield but only a single central conductor also exists in coax but this mode is effectively suppressed in coax of conventional geometry and common impedance. Electric field lines for this [TM] mode have a longitudinal component and require line lengths of a half-wavelength or longer. Coaxial cable may be viewed as a type of waveguide. 
Power is transmitted through the radial electric field and the circumferential magnetic field in the TEM00 transverse mode. This is the dominant mode from zero frequency (DC) to an upper limit determined by the electrical dimensions of the cable. Connectors The ends of coaxial cables usually terminate with connectors. Coaxial connectors are designed to maintain a coaxial form across the connection and have the same impedance as the attached cable. Connectors are usually plated with high-conductivity metals such as silver or tarnish-resistant gold. Due to the skin effect, the RF signal is only carried by the plating at higher frequencies and does not penetrate to the connector body. Silver, however, tarnishes quickly, and the silver sulfide that is produced is poorly conductive, degrading connector performance, making silver a poor choice for this application. Important parameters Coaxial cable is a particular kind of transmission line, so the circuit models developed for general transmission lines are appropriate. See Telegrapher's equation. Physical parameters In the following section, these symbols are used: Length of the cable, ℓ. Outside diameter of inner conductor, d. Inside diameter of the shield, D. Dielectric constant of the insulator, ε. The dielectric constant is often quoted as the relative dielectric constant εr referred to the dielectric constant of free space ε0: εr = ε/ε0. When the insulator is a mixture of different dielectric materials (e.g., polyethylene foam is a mixture of polyethylene and air), then the term effective dielectric constant εeff is often used. Magnetic permeability of the insulator, μ. Permeability is often quoted as the relative permeability μr referred to the permeability of free space μ0: μr = μ/μ0. The relative permeability will almost always be 1. Fundamental electrical parameters Shunt capacitance per unit length, C, in farads per metre. Series inductance per unit length, L, in henrys per metre. Series resistance per unit length, R, in ohms per metre. The resistance per unit length is just the resistance of the inner conductor and the shield at low frequencies. At higher frequencies, skin effect increases the effective resistance by confining the conduction to a thin layer of each conductor. Shunt conductance per unit length, G, in siemens per metre. The shunt conductance is usually very small because insulators with good dielectric properties are used (a very low loss tangent). At high frequencies, a dielectric can have a significant resistive loss. Derived electrical parameters Characteristic impedance in ohms (Ω). The complex impedance of an infinite length of transmission line is Z = √((R + jωL) / (G + jωC)), where R is the resistance per unit length, L is the inductance per unit length, G is the conductance per unit length of the dielectric, C is the capacitance per unit length, and ω = 2πf is the angular frequency. The "per unit length" dimensions cancel out in the impedance formula. At DC the two reactive terms are zero, so the impedance is real-valued, and is extremely high. It looks like Z = √(R/G). With increasing frequency, the reactive components take effect and the impedance of the line is complex-valued. At very low frequencies (audio range, of interest to telephone systems), ωL is typically much smaller than R, so the impedance at low frequencies is Z ≈ √(R / (jωC)), which has a phase value of -45 degrees. At higher frequencies, the reactive terms usually dominate, R ≪ ωL and G ≪ ωC, and the cable impedance again becomes real-valued. That value is √(L/C), the characteristic impedance of the cable: Z0 = √(L/C).
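As a rough numerical illustration of this frequency behaviour (not taken from the article itself), the expression Z = √((R + jωL)/(G + jωC)) can be evaluated directly. The per-metre constants below are assumed, loosely RG-58-like figures, and the skin-effect rise of R with frequency is ignored:

import cmath
import math

# Assumed, roughly RG-58-like per-metre line constants (illustrative only):
R = 0.05      # series resistance, ohm/m (DC value; skin effect ignored)
L = 250e-9    # series inductance, H/m
G = 1e-9      # shunt conductance, S/m
C = 100e-12   # shunt capacitance, F/m

def line_impedance(f_hz):
    """Impedance of an infinitely long line: Z = sqrt((R + jwL) / (G + jwC))."""
    w = 2 * math.pi * f_hz
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

for f in (1e-2, 1e3, 100e6):   # near-DC, audio range, RF
    z = line_impedance(f)
    print(f"{f:12g} Hz   |Z| = {abs(z):10.1f} ohm   phase = {math.degrees(cmath.phase(z)):6.1f} deg")

With these assumed values the result is very large and nearly real near DC (approaching √(R/G)), sits close to -45 degrees of phase in the audio range, and settles to the real characteristic impedance √(L/C) = 50 ohms at RF, matching the limits described above.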
Assuming the dielectric properties of the material inside the cable do not vary appreciably over the operating range of the cable, the characteristic impedance is frequency independent above about five times the shield cutoff frequency. For typical coaxial cables, the shield cutoff frequency is 600 Hz (RG-6A) to 2,000 Hz (RG-58C). The parameters L and C are determined from the ratio of the inner (d) and outer (D) diameters and the dielectric constant (ε). The characteristic impedance is given by Z0 ≈ (59.96 / √εr) ln(D/d) ≈ (138 / √εr) log10(D/d) ohms. Attenuation (loss) per unit length, in decibels per meter. This is dependent on the loss in the dielectric material filling the cable, and resistive losses in the center conductor and outer shield. These losses are frequency dependent, the losses becoming higher as the frequency increases. Skin effect losses in the conductors can be reduced by increasing the diameter of the cable. A cable with twice the diameter will have half the skin effect resistance. Ignoring dielectric and other losses, the larger cable would halve the dB/meter loss. In designing a system, engineers consider not only the loss in the cable but also the loss in the connectors. Velocity of propagation, in meters per second. The velocity of propagation depends on the dielectric constant and permeability (which is usually 1). Single-mode band. In coaxial cable, the dominant mode (the mode with the lowest cutoff frequency) is the TEM mode, which has a cutoff frequency of zero; it propagates all the way down to d.c. The mode with the next lowest cutoff is the TE11 mode. This mode has one 'wave' (two reversals of polarity) in going around the circumference of the cable. To a good approximation, the condition for the TE11 mode to propagate is that the wavelength in the dielectric is no longer than the average circumference of the insulator; that is, that the frequency is at least fc ≈ 2c / (π (D + d) √εr), where c is the speed of light in vacuum. Hence, the cable is single-mode from d.c. up to this frequency, and might in practice be used up to 90% of this frequency. Peak voltage. The peak voltage is set by the breakdown voltage of the insulator: Vp = Ed (d/2) ln(D/d), where Ed is the insulator's breakdown voltage in volts per meter, d is the inner diameter in meters, and D is the outer diameter in meters. The calculated peak voltage is often reduced by a safety factor. Choice of impedance The best coaxial cable impedances in high-power, high-voltage, and low-attenuation applications were experimentally determined at Bell Laboratories in 1929 to be 30, 60, and 77 Ω, respectively. For a coaxial cable with air dielectric and a shield of a given inner diameter, the attenuation is minimized by choosing the diameter of the inner conductor to give a characteristic impedance of 76.7 Ω. When more common dielectrics are considered, the best-loss impedance drops down to a value between 52 and 64 Ω. Maximum power handling is achieved at 30 Ω. The approximate impedance required to match a centre-fed dipole antenna in free space (i.e., a dipole without ground reflections) is 73 Ω, so 75 Ω coax was commonly used for connecting shortwave antennas to receivers. These typically involve such low levels of RF power that power-handling and high-voltage breakdown characteristics are unimportant when compared to attenuation. Likewise with CATV: although many broadcast TV installations and CATV headends use 300 Ω folded dipole antennas to receive off-the-air signals, a 4:1 balun transformer conveniently matches these to 75 Ω coax, which also possesses low attenuation. The arithmetic mean between 30 Ω and 77 Ω is 53.5 Ω; the geometric mean is 48 Ω.
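The three geometry-based expressions above (characteristic impedance, TE11 cutoff and peak voltage) are easy to sanity-check numerically. In the sketch below the dimensions, dielectric constant and breakdown field are assumed, illustrative values loosely resembling a foam-dielectric 75-ohm cable rather than figures taken from this article; the last line simply reproduces the two impedance means just quoted:

import math

c0 = 299_792_458.0  # speed of light in vacuum, m/s

def z0(D, d, eps_r=1.0):
    """Characteristic impedance from shield inner diameter D and inner-conductor diameter d."""
    return 59.96 / math.sqrt(eps_r) * math.log(D / d)

def te11_cutoff(D, d, eps_r=1.0):
    """Approximate TE11 cutoff: wavelength in the dielectric equals the mean circumference."""
    return 2 * c0 / (math.pi * (D + d) * math.sqrt(eps_r))

def peak_voltage(E_d, D, d):
    """Peak voltage for breakdown field E_d (V/m); the field is strongest at the inner conductor."""
    return E_d * (d / 2) * math.log(D / d)

d, D, eps_r = 1.0e-3, 4.7e-3, 1.5          # assumed dimensions (m) and dielectric constant
print(f"Z0      = {z0(D, d, eps_r):5.1f} ohm")                  # about 76 ohm
print(f"TE11 fc = {te11_cutoff(D, d, eps_r) / 1e9:5.1f} GHz")   # tens of GHz
print(f"V peak  = {peak_voltage(20e6, D, d):5.0f} V (for an assumed 20 MV/m breakdown field)")
print(f"mean of 30 and 77 ohm: arithmetic {(30 + 77) / 2:.1f}, geometric {math.sqrt(30 * 77):.1f}")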
The selection of 50 Ω as a compromise between power-handling capability and attenuation is generally cited as the reason for the number. 50 Ω also works out tolerably well because it corresponds approximately to the feedpoint impedance of a half-wave dipole, mounted approximately a half-wave above "normal" ground (ideally 73 Ω, but reduced for low-hanging horizontal wires). RG-62 is a 93 Ω coaxial cable originally used in mainframe computer networks in the 1970s and early 1980s (it was the cable used to connect IBM 3270 terminals to IBM 3274/3174 terminal cluster controllers). Later, some manufacturers of LAN equipment, such as Datapoint for ARCNET, adopted RG-62 as their coaxial cable standard. The cable has the lowest capacitance per unit length when compared to other coaxial cables of similar size. All of the components of a coaxial system should have the same impedance to avoid internal reflections at connections between components (see Impedance matching). Such reflections may cause signal attenuation. They introduce standing waves, which increase losses and can even result in cable dielectric breakdown with high-power transmission. In analog video or TV systems, reflections cause ghosting in the image; multiple reflections may cause the original signal to be followed by more than one echo. If a coaxial cable is open (not connected at the end), the termination has nearly infinite resistance, which causes reflections. If the coaxial cable is short-circuited, the termination resistance is nearly zero, which causes reflections with the opposite polarity. Reflections will be nearly eliminated if the coaxial cable is terminated in a pure resistance equal to its impedance. Coaxial characteristic impedance derivation Taking the characteristic impedance at high frequencies, Z0 = √(L/C), one should also know the inductance and capacitance of the two concentric cylindrical conductors which form the coaxial cable. By definition, C = q/V, and the electric field follows from the formula for the field of an infinite line of charge, E = q / (2πε0 r) r̂, where q is the charge per unit length, ε0 is the permittivity of free space, r is the radial distance and r̂ is the unit vector in the direction away from the axis. The voltage, V, is the integral of this field from d/2 to D/2: V = (q / (2πε0)) ln(D/d), where D is the inner diameter of the outer conductor and d is the diameter of the inner conductor. The capacitance per unit length can then be solved by substitution: C = 2πε0 / ln(D/d). The inductance is taken from Ampère's law for two concentric conductors (coaxial wire), B = μ0 I / (2πr), and with the definition of inductance, L = Φ/I, and Φ = ∫∫ B · dS, where B is the magnetic induction, μ0 is the permeability of free space, Φ is the magnetic flux and dS is the differential surface. Taking the inductance per meter, L = (μ0 / 2π) ln(D/d). Substituting the derived capacitance and inductance, and generalizing them to the case where a dielectric of permeability μ and permittivity ε is used in between the inner and outer conductors, Z0 = (1 / 2π) √(μ/ε) ln(D/d). Issues Signal leakage Signal leakage is the passage of electromagnetic fields through the shield of a cable and occurs in both directions. Ingress is the passage of an outside signal into the cable and can result in noise and disruption of the desired signal. Egress is the passage of signal intended to remain within the cable into the outside world and can result in a weaker signal at the end of the cable and radio frequency interference to nearby devices. Severe leakage usually results from improperly installed connectors or faults in the cable shield.
For example, in the United States, signal leakage from cable television systems is regulated by the FCC, since cable signals use the same frequencies as aeronautical and radionavigation bands. CATV operators may also choose to monitor their networks for leakage to prevent ingress. Outside signals entering the cable can cause unwanted noise and picture ghosting. Excessive noise can overwhelm the signal, making it useless. In-channel ingress can be digitally removed by ingress cancellation. An ideal shield would be a perfect conductor with no holes, gaps, or bumps connected to a perfect ground. However, a smooth solid highly conductive shield would be heavy, inflexible, and expensive. Such coax is used for straight-line feeds to commercial radio broadcast towers. More economical cables must make compromises between shield efficacy, flexibility, and cost, such as the corrugated surface of flexible hardline, flexible braid, or foil shields. Since shields cannot be perfect conductors, current flowing on the inside of the shield produces an electromagnetic field on the outer surface of the shield. Consider the skin effect. The magnitude of an alternating current in a conductor decays exponentially with distance beneath the surface, with the depth of penetration being proportional to the square root of the resistivity. This means that, in a shield of finite thickness, some small amount of current will still be flowing on the opposite surface of the conductor. With a perfect conductor (i.e., zero resistivity), all of the current would flow at the surface, with no penetration into and through the conductor. Real cables have a shield made of an imperfect, although usually very good, conductor, so there must always be some leakage. The gaps or holes, allow some of the electromagnetic field to penetrate to the other side. For example, braided shields have many small gaps. The gaps are smaller when using a foil (solid metal) shield, but there is still a seam running the length of the cable. Foil becomes increasingly rigid with increasing thickness, so a thin foil layer is often surrounded by a layer of braided metal, which offers greater flexibility for a given cross-section. Signal leakage can be severe if there is poor contact at the interface to connectors at either end of the cable or if there is a break in the shield. To greatly reduce signal leakage into or out of the cable, by a factor of 1000, or even 10,000, superscreened cables are often used in critical applications, such as for neutron flux counters in nuclear reactors. Superscreened cables for nuclear use are defined in IEC 96-4-1, 1990, however as there have been long gaps in the construction of nuclear power stations in Europe, many existing installations are using superscreened cables to the UK standard AESS(TRG) 71181 which is referenced in IEC 61917. Ground loops A continuous current, even if small, along the imperfect shield of a coaxial cable can cause visible or audible interference. In CATV systems distributing analog signals the potential difference between the coaxial network and the electrical grounding system of a house can cause a visible "hum bar" in the picture. This appears as a wide horizontal distortion bar in the picture that scrolls slowly upward. Such differences in potential can be reduced by proper bonding to a common ground at the house. See ground loop. Noise External fields create a voltage across the inductance of the outside of the outer conductor between sender and receiver. 
The effect is less when there are several parallel cables, as this reduces the inductance and, therefore, the voltage. Because the outer conductor carries the reference potential for the signal on the inner conductor, the receiving circuit measures the wrong voltage. Transformer effect The transformer effect is sometimes used to mitigate the effect of currents induced in the shield. The inner and outer conductors form the primary and secondary winding of the transformer, and the effect is enhanced in some high-quality cables that have an outer layer of mu-metal. Because of this 1:1 transformer, the aforementioned voltage across the outer conductor is transformed onto the inner conductor so that the two voltages can be cancelled by the receiver. Many senders and receivers have means to reduce the leakage even further. They increase the transformer effect by passing the whole cable through a ferrite core one or more times. Common mode current and radiation Common mode current occurs when stray currents in the shield flow in the same direction as the current in the center conductor, causing the coax to radiate. They are the opposite of the desired "push-pull" differential mode currents, where the signal currents on the inner and outer conductor are equal and opposite. Most of the shield effect in coax results from opposing currents in the center conductor and shield creating opposite magnetic fields that cancel, and thus do not radiate. The same effect helps ladder line. However, ladder line is extremely sensitive to surrounding metal objects, which can enter the fields before they completely cancel. Coax does not have this problem, since the field is enclosed in the shield. However, it is still possible for a field to form between the shield and other connected objects, such as the antenna the coax feeds. The current formed by the field between the antenna and the coax shield would flow in the same direction as the current in the center conductor, and thus not be canceled. Energy would radiate from the coax itself, affecting the radiation pattern of the antenna. With sufficient power, this could be a hazard to people near the cable. A properly placed and properly sized balun can prevent common-mode radiation in coax. An isolating transformer or blocking capacitor can be used to couple a coaxial cable to equipment, where it is desirable to pass radio-frequency signals but to block direct current or low-frequency power. Standards Most coaxial cables have a characteristic impedance of either 50, 52, 75, or 93 Ω. The RF industry uses standard type-names for coaxial cables. Thanks to television, RG-6 is the most commonly used coaxial cable for home use, and the majority of connections outside Europe are by F connectors. A series of standard types of coaxial cable were specified for military uses, in the form "RG-#" or "RG-#/U". They date from World War II and were listed in MIL-HDBK-216 published in 1962. These designations are now obsolete. The RG designation stands for Radio Guide; the U designation stands for Universal. The current military standard is MIL-SPEC MIL-C-17. MIL-C-17 numbers, such as "M17/75-RG214", are given for military cables and manufacturer's catalog numbers for civilian applications. However, the RG-series designations were so common for generations that they are still used, although critical users should be aware that since the handbook is withdrawn there is no standard to guarantee the electrical and physical characteristics of a cable described as "RG-# type". 
The RG designators are mostly used to identify compatible connectors that fit the inner conductor, dielectric, and jacket dimensions of the old RG-series cables. Dielectric material codes: FPE is foamed polyethylene; PE is solid polyethylene; PF is polyethylene foam; PTFE is polytetrafluoroethylene; ASP is air-spaced polyethylene. VF is the velocity factor; it is determined by the effective dielectric constant and relative permeability. VF for solid PE is about 0.66; VF for foam PE is about 0.78 to 0.88; VF for air is about 1.00; VF for solid PTFE is about 0.70; VF for foam PTFE is about 0.84. There are also other designation schemes for coaxial cables such as the URM, CT, BT, RA, PSF and WF series. Uses Short coaxial cables are commonly used to connect home video equipment, in ham radio setups, and in NIM electronics. While formerly common for implementing computer networks, in particular Ethernet ("thick" 10BASE5 and "thin" 10BASE2), twisted pair cables have replaced them in most applications except in the growing consumer cable modem market for broadband Internet access. Long-distance coaxial cable was used in the 20th century to connect radio networks, television networks, and long-distance telephone networks, though this has largely been superseded by later methods (fibre optics, T1/E1, satellite). Shorter coaxials still carry cable television signals to the majority of television receivers, and this purpose consumes the majority of coaxial cable production. In the 1980s and early 1990s coaxial cable was also used in computer networking, most prominently in Ethernet networks, where it was later, in the late 1990s to early 2000s, replaced by UTP cables in North America and STP cables in Western Europe, both with 8P8C modular connectors. Micro coaxial cables are used in a range of consumer devices, military equipment, and also in ultrasound scanning equipment. The most common impedances are 50 or 52 ohms, and 75 ohms, although other impedances are available for specific applications. The 50 / 52 ohm cables are widely used for industrial and commercial two-way radio frequency applications (including radio and telecommunications), although 75 ohms is commonly used for broadcast television and radio. Coax cable is often used to carry data/signals from an antenna to a receiver—from a satellite dish to a satellite receiver, from a television antenna to a television receiver, from a radio mast to a radio receiver, etc. In many cases, the same single coax cable carries power in the opposite direction, to the antenna, to power the low-noise amplifier. In some cases a single coax cable carries (unidirectional) power and bidirectional data/signals, as in DiSEqC.
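The velocity factors listed above follow directly from the effective dielectric constant: with a relative permeability of 1, VF = 1/√εr. A small sketch, using assumed handbook-style dielectric constants rather than values from this article, reproduces the quoted figures to within a percent or two:

import math

def velocity_factor(eps_r_effective, mu_r=1.0):
    """Velocity factor relative to light in vacuum: 1 / sqrt(eps_r * mu_r)."""
    return 1.0 / math.sqrt(eps_r_effective * mu_r)

# Assumed effective dielectric constants for common insulations:
for name, eps in [("solid PE", 2.25), ("foam PE", 1.45), ("air", 1.00), ("solid PTFE", 2.1)]:
    print(f"{name:10s}  eps_r = {eps:4.2f}   VF = {velocity_factor(eps):.2f}")

Foam dielectrics span a range of velocity factors because the air fraction, and hence the effective dielectric constant, varies with foam density.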
Hard line can be very thick, typically at least a half inch or 13 mm and up to several times that, and has low loss even at high power. These large-scale hard lines are almost always used in the connection between a transmitter on the ground and the antenna or aerial on a tower. Hard line may also be known by trademarked names such as Heliax (CommScope), or Cablewave (RFS/Cablewave). Larger varieties of hardline may have a center conductor that is constructed from either rigid or corrugated copper tubing. The dielectric in hard line may consist of polyethylene foam, air, or a pressurized gas such as nitrogen or desiccated air (dried air). In gas-charged lines, hard plastics such as nylon are used as spacers to separate the inner and outer conductors. The addition of these gases into the dielectric space reduces moisture contamination, provides a stable dielectric constant, and provides a reduced risk of internal arcing. Gas-filled hardlines are usually used on high-power RF transmitters such as television or radio broadcasting, military transmitters, and high-power amateur radio applications but may also be used on some critical lower-power applications such as those in the microwave bands. However, in the microwave region, waveguide is more often used than hard line for transmitter-to-antenna, or antenna-to-receiver applications. The various shields used in hard line also differ; some forms use rigid tubing, or pipe, while others may use a corrugated tubing, which makes bending easier, as well as reduces kinking when the cable is bent to conform. Smaller varieties of hard line may be used internally in some high-frequency applications, in particular in equipment within the microwave range, to reduce interference between stages of the device. Radiating Radiating or leaky cable is another form of coaxial cable which is constructed in a similar fashion to hard line, however it is constructed with tuned slots cut into the shield. These slots are tuned to the specific RF wavelength of operation or tuned to a specific radio frequency band. This type of cable is to provide a tuned bi-directional "desired" leakage effect between transmitter and receiver. It is often used in elevator shafts, US Navy Ships, underground transportation tunnels and in other areas where an antenna is not feasible. One example of this type of cable is Radiax (CommScope). RG-6 RG-6 is available in four different types designed for various applications. In addition, the core may be copper clad steel (CCS) or bare solid copper (BC). "Plain" or "house" RG-6 is designed for indoor or external house wiring. "Flooded" cable is infused with waterblocking gel for use in underground conduit or direct burial. "Messenger" may contain some waterproofing but is distinguished by the addition of a steel messenger wire along its length to carry the tension involved in an aerial drop from a utility pole. "Plenum" cabling is expensive and comes with a special Teflon-based outer jacket designed for use in ventilation ducts to meet fire codes. It was developed since the plastics used as the outer jacket and inner insulation in many "Plain" or "house" cabling gives off poisonous gas when burned. Triaxial cable Triaxial cable or triax is coaxial cable with a third layer of shielding, insulation and sheathing. The outer shield, which is earthed (grounded), protects the inner shield from electromagnetic interference from outside sources. Twin-axial cable Twin-axial cable or twinax is a balanced, twisted pair within a cylindrical shield. 
It allows a nearly perfect differential mode signal which is both shielded and balanced to pass through. Multi-conductor coaxial cable is also sometimes used. Semi-rigid Semi-rigid cable is a coaxial form using a solid copper outer sheath. This type of coax offers superior screening compared to cables with a braided outer conductor, especially at higher frequencies. The major disadvantage is that the cable, as its name implies, is not very flexible, and is not intended to be flexed after initial forming. Conformable cable is a flexible, reformable alternative to semi-rigid coaxial cable used where flexibility is required. Conformable cable can be stripped and formed by hand without the need for specialized tools, similar to standard coaxial cable. Rigid line Rigid line is a coaxial line formed by two copper tubes maintained concentric every other meter using PTFE supports. Rigid lines cannot be bent, so they often need elbows. Interconnection with rigid line is done with an inner bullet/inner support and a flange or connection kit. Typically, rigid lines are connected using standardised EIA RF connectors whose bullet and flange sizes match the standard line diameters. For each outer diameter, either 75 or 50 ohm inner tubes can be obtained. Rigid line is commonly used indoors for interconnection between high-power transmitters and other RF components, but more rugged rigid line with weatherproof flanges is used outdoors on antenna masts, etc. In the interests of saving weight and costs, on masts and similar structures the outer line is often aluminium, and special care must be taken to prevent corrosion. With a flange connector, it is also possible to go from rigid line to hard line. Many broadcasting antennas and antenna splitters use the flanged rigid line interface even when connecting to flexible coaxial cables and hard line. Rigid line is produced in a number of different standard sizes. Cables used in the UK At the start of analog satellite TV broadcasts in the UK by Sky, a 75 ohm cable referred to as RG6 was used. This cable had a 1 mm copper core, air-spaced polyethylene dielectric and copper braid on an aluminum foil shield. When installed outdoors without protection, the cable was affected by UV radiation, which cracked the PVC outer sheath and allowed moisture ingress. The combination of copper, aluminum, moisture and air caused rapid corrosion, sometimes resulting in a 'snake swallowed an egg' appearance. Consequently, despite the higher cost, the RG6 cable was dropped in favor of CT100 when Sky launched its digital broadcasts. From around 1999 to 2005 (when CT100 manufacturer Raydex went out of business), CT100 remained the 75 ohm cable of choice for satellite TV and especially Sky. It had an air-spaced polyethylene dielectric, a 1 mm solid copper core and copper braid on a copper foil shield. CT63 was a thinner cable in 'shotgun' style, meaning that it was two cables molded together and was used mainly by Sky for the twin connection required by the Sky+ satellite TV receiver, which incorporated a hard drive recording system and a second, independent tuner. In 2005, these cables were replaced by WF100 and WF65, respectively, manufactured by Webro and having a similar construction but a foam dielectric that provided the same electrical performance as air-spaced but was more robust and less likely to be crushed.
At the same time, with the price of copper steadily rising, the original RG6 was dropped in favor of a construction that used a copper-clad steel core and aluminum braid on aluminum foil. Its lower price made it attractive to aerial installers looking for a replacement for the so-called low-loss cable traditionally used for UK terrestrial aerial installations. This cable had been manufactured with a decreasing number of strands of braid, as the price of copper increased, such that the shielding performance of cheaper brands had fallen to as low as 40 percent. With the advent of digital terrestrial transmissions in the UK, this low-loss cable was no longer suitable. The new RG6 still performed well at high frequencies because of the skin effect in the copper cladding. However, the aluminum shield had a high DC resistance and the steel core an even higher one. The result is that this type of cable could not reliably be used in satellite TV installations, where it was required to carry a significant amount of current, because the voltage drop affected the operation of the low noise block downconverter (LNB) on the dish. A problem with all the aforementioned cables, when passing current, is that electrolytic corrosion can occur in the connections unless moisture and air are excluded. Consequently, various solutions to exclude moisture have been proposed. The first was to seal the connection by wrapping it with self-amalgamating rubberized tape, which bonds to itself when activated by stretching. The second proposal, by the American Channel Master company (now owned by Andrews corp.) at least as early as 1999, was to apply silicone grease to the wires making connection. The third proposal was to fit a self-sealing plug to the cable. All of these methods are reasonably successful if implemented correctly. Interference and troubleshooting Coaxial cable insulation may degrade, requiring replacement of the cable, especially if it has been exposed to the elements on a continuous basis. The shield is normally grounded, and if even a single thread of the braid or filament of foil touches the center conductor, the signal will be shorted causing significant or total signal loss. This most often occurs at improperly installed end connectors and splices. Also, the connector or splice must be properly attached to the shield, as this provides the path to ground for the interfering signal. Despite being shielded, interference can occur on coaxial cable lines. Susceptibility to interference has little relationship to broad cable type designations (e.g. RG-59, RG-6) but is strongly related to the composition and configuration of the cable's shielding. For cable television, with frequencies extending well into the UHF range, a foil shield is normally provided, and will provide total coverage as well as high effectiveness against high-frequency interference. Foil shielding is ordinarily accompanied by a tinned copper or aluminum braid shield, with anywhere from 60 to 95% coverage. The braid is important to shield effectiveness because (1) it is more effective than foil at preventing low-frequency interference, (2) it provides higher conductivity to ground than foil, and (3) it makes attaching a connector easier and more reliable. 
"Quad-shield" cable, using two low-coverage aluminum braid shields and two layers of foil, is often used in situations involving troublesome interference, but is less effective than a single layer of foil and single high-coverage copper braid shield such as is found on broadcast-quality precision video cable. In the United States and some other countries, cable television distribution systems use extensive networks of outdoor coaxial cable, often with in-line distribution amplifiers. Leakage of signals into and out of cable TV systems can cause interference to cable subscribers and to over-the-air radio services using the same frequencies as those of the cable system. History 1858 — Coaxial cable used in first (1858) transatlantic cable. 1880 — Coaxial cable patented in England by Oliver Heaviside, patent no. 1,407. 1884 — Siemens & Halske patent coaxial cable in Germany (Patent No. 28,978, 27 March 1884). 1894 — Nikola Tesla (U.S. Patent 514,167) 1929 — First modern coaxial cable patented by Lloyd Espenschied and Herman Affel of AT&T's Bell Telephone Laboratories. 1936 — First closed circuit transmission of TV pictures on coaxial cable, from the 1936 Summer Olympics in Berlin to Leipzig. 1936 — Underwater coaxial cable installed between Apollo Bay, near Melbourne, Australia, and Stanley, Tasmania. The cable can carry one 8.5-kHz broadcast channel and seven telephone channels. 1936 — AT&T installs experimental coaxial telephone and television cable between New York and Philadelphia, with automatic booster stations every . Completed in December, it can transmit 240 telephone calls simultaneously. 1936 — Coaxial cable laid by the General Post Office (now BT) between London and Birmingham, providing 40 telephone channels. 1941 — First commercial use in USA by AT&T, between Minneapolis, Minnesota and Stevens Point, Wisconsin. L1 system with capacity of one TV channel or 480 telephone circuits. 1949 — On January 11, eight stations on the US East Coast and seven Midwestern stations are linked via a long-distance coaxial cable. 1956 — First transatlantic coaxial cable laid, TAT-1. 1962 — Sydney–Melbourne co-axial cable commissioned, carrying 3 x 1,260 simultaneous telephone connections, and-or simultaneous inter-city television transmission. See also Balanced line BNC Connector LEMO Connector Radio frequency power transmission References External links RF Transmission Lines and Fittings. Military Standardization Handbook MIL-HDBK-216, U.S. Department of Defense, 4 January 1962. Withdrawal Notice for MIL-HDBK-216 2001 Cables, Radio Frequency, Flexible and Rigid Details Specification MIL-DTL-17H, 19 August 2005 (superseding MIL-C-17G, 9 March 1990). Radio-Frequency Cables, International Standard IEC 60096. Coaxial Communication Cables, International Standard IEC 61196. Coaxial Cables, British Standard BS EN 50117 H. P. Westman et al., (ed), Reference Data for Radio Engineers, Fifth Edition, 1968, Howard W. Sams and Co., no ISBN, Library of Congress Card No. 43-14665 "What's the Best Coaxial cable to use for..." https://books.google.com/books?id=e9wEntQmA0IC&pg=PA20&lpg=PA20&source=bl&hl=en&sa=X&f=false Brooke Clarke, "Transmission Line Zo vs. Frequency" English inventions Signal cables Antennas (radio) Transmission lines 19th-century inventions
38606537
https://en.wikipedia.org/wiki/Radeon%20HD%203000%20series
Radeon HD 3000 series
The graphics processing unit (GPU) codenamed the Radeon R600 is the foundation of the Radeon HD 2000/3000 series and the FireGL 2007 series video cards developed by ATI Technologies. Architecture This article is about all products under the brand "Radeon HD 3000 Series". All products of this series contain a GPU which implements TeraScale 1. Video acceleration The Unified Video Decoder (UVD) SIP core is present on the dies of the GPUs used in the HD 2400 and the HD 2600 but not of the HD 2900. The HD 2900 introduced the ability to decode video within the 3D engine. This approach also exonerates the CPU from doing these computations, but consumes considerably more electric current. Desktop products Radeon HD 3800 The Radeon HD 3800 series was based on the codenamed RV670 GPU, packed 666 million transistors on a 55 nm fabrication process and had a die size at 192 mm2, with the same 64 shader clusters as the R600 core, but the memory bus width was reduced to 256 bits. The RV670 GPU is also the base of the FireStream 9170 stream processor, which uses the GPU to perform general purpose floating-point calculations which were done in the CPU previously. The Radeon HD 3850 and 3870 became available mid-November 2007. Radeon HD 3690/3830 The Radeon HD 3690, which was limited only to the Chinese market where it was named HD 3830, has the same core as the Radeon 3800 series but with only a 128-bit memory controller and 256 MiB of GDDR3 memory. All other hardware specifications are retained. A further announcement was made that there would be a Radeon HD 3830 variant bearing the same features as Radeon HD 3690, but with a unique device ID that does not allow add-in card partners in China to re-enable the burnt-out portion of the GPU core for more memory bandwidth. The Radeon HD 3690 was released early February 2008 for the Chinese market only. Radeon HD 3870 X2 Radeon HD 3870 X2 (codenamed R680) was released on January 28, 2008, featuring 2 RV670 cores with a maximum of 1 GiB GDDR3 SDRAM, targeting the enthusiast market and replacing the Radeon HD 2900 XT. The processor achieved a peak single-precision floating point performance of 1.06 TFLOPS, being the world's first single-PCB graphics product breaking the 1 TFLOP mark. Technically, this Radeon HD 3870 X2 can really be understood as a CrossFire of two HD 3870 on a single PCB. The card only integrates a PCI Express 1.1 bridge to connect the two GPUs. They communicate via a bidirectional bus that has 16 lines for a bandwidth of 2 x 4 Gb/s. This has no negative effect on performance. Starting with the Catalyst 8.3 drivers, Amd/Ati officially supports CrossFireX technology for the 3800 series, which means that up to four GPUs can be used in a pair of Radeon HD 3870 X2. AMD stated the possibility of supporting 4 Radeon HD 3870 X2 cards, allowing 8 GPUs to be used on several motherboards, including the MSI K9A2 Platinum and Intel D5400XS, because these motherboards have sufficient spaces between PCI-E slots for dual-slot cooler video cards, presumably as a combination of two separate hardware CrossFire setups with a software CrossFire setup bridging the two, but currently with no driver support. Radeon HD 3600 The Radeon HD 3600 series was based on the codenamed RV635 GPU, packed 378 million transistors on 55 nm fabrication process, and had 128-bit memory bus width. The support for HDMI and D-sub ports is also achieved through separate dongles. 
Besides the DisplayPort implementations, other display output layouts also exist, such as dual DVI ports or DVI with a D-sub output. The only variant, the Radeon HD 3650, was released on January 23, 2008, and was also available with an AGP slot with 64-bit bus width or the standard PCI-E slot with 128-bit. Radeon HD 3400 The Radeon HD 3400 series was based on the codenamed RV620 GPU, packed 181 million transistors on a 55 nm fabrication process, and had 64-bit memory bus width. Products were available both as full-height cards and as low-profile cards. One of the notable features is that the Radeon HD 3400 series (including the Mobility Radeon HD 3400 series) video cards support ATI Hybrid Graphics. The Radeon HD 3450 and Radeon HD 3470 were released on January 23, 2008. Mobile products All Mobility Radeon HD 2000/3000 series share the same feature set support as their desktop counterparts, as well as the addition of the battery-conserving PowerPlay 7.0 features, which are augmented from the previous generation's PowerPlay 6.0. The Mobility Radeon HD 2300 is a budget product which includes UVD in silicon but lacks unified shader architecture and DirectX 10.0/SM 4.0 support, limiting support to DirectX 9.0c/SM 3.0 using the more traditional architecture of the previous generation. A high-end variant, the Mobility Radeon HD 2700, with higher core and memory frequencies than the Mobility Radeon HD 2600, was released in mid-December 2007. The Mobility Radeon HD 2400 is offered in two model variants: the standard HD 2400 and the HD 2400 XT. The Mobility Radeon HD 2600 is also available in the same two flavors: the plain HD 2600 and, at the top of the mobility lineup, the HD 2600 XT. The half-generation update treatment was also applied to mobile products. Announced prior to CES 2008 was the Mobility Radeon HD 3000 series. Released in the first quarter of 2008, the Mobility Radeon HD 3000 series consisted of two families, the Mobility Radeon HD 3400 series and the Mobility Radeon HD 3600 series. The Mobility Radeon HD 3600 series also featured the industry's first implementation of on-board 128-bit GDDR4 memory. Around late March to early April 2008, AMD updated the device ID list on its website to include the Mobility Radeon HD 3850 X2 and Mobility Radeon HD 3870 X2 and their respective device IDs. Later, at Spring IDF 2008 held in Shanghai, a development board of the Mobility Radeon HD 3870 X2 was demonstrated alongside a Centrino 2 platform demonstration system. The Mobility Radeon HD 3870 X2 was based on two M88 GPUs with the addition of a PCI Express switch chip on a single PCB. The demonstrated development board used a PCI Express 2.0 ×16 bus, while the final product was expected to use AXIOM/MXM modules. Radeon Feature Matrix Graphics device drivers AMD's proprietary graphics device driver "Catalyst" AMD Catalyst is being developed for Microsoft Windows and Linux. As of July 2014, other operating systems are not officially supported. This may be different for the AMD FirePro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. AMD Catalyst, of course, supports all features advertised for the Radeon brand. The Radeon HD 3000 series has been transitioned to legacy support, where drivers will be updated only to fix bugs instead of being optimized for new applications.
Free and open-source graphics device driver "Radeon" The free and open-source drivers are primarily developed on Linux and for Linux, but have been ported to other operating systems as well. Each driver is composed of five parts: the Linux kernel component DRM; the Linux kernel component KMS driver, essentially the device driver for the display controller; the user-space component libDRM; the user-space component in Mesa 3D; and a special and distinct 2D graphics device driver for X.Org Server, which is about to be replaced by Glamor. The free and open-source "Radeon" graphics driver supports most of the features implemented in the Radeon line of GPUs. The drivers are not reverse-engineered, but are based on documentation released by AMD. See also AMD FirePro FireStream 9170, the GPGPU version of the Radeon HD 3870 graphics card List of AMD graphics processing units References External links ATI Radeon HD 2000 Series ATI Radeon HD 3000 Series ATI Mobility Radeon HD 2000 Series ATI Mobility Radeon HD 3000 Series techPowerUp! GPU Database Advanced Micro Devices graphics cards ATI brand
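The 1.06 TFLOPS peak figure quoted above for the Radeon HD 3870 X2 can be reproduced with simple arithmetic, assuming the commonly published configuration of two RV670 GPUs with 320 stream processors each, clocked at 825 MHz, where each stream processor can issue one single-precision multiply-add (two floating-point operations) per cycle; these shader counts and clocks are the usual published specifications rather than figures stated in this article: 2 GPUs × 320 stream processors × 2 FLOPs per cycle × 0.825 GHz ≈ 1,056 GFLOPS ≈ 1.06 TFLOPS.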
53185166
https://en.wikipedia.org/wiki/2017%20Troy%20Trojans%20baseball%20team
2017 Troy Trojans baseball team
The 2017 Troy Trojans baseball team represented Troy University in the 2017 NCAA Division I baseball season. The Trojans played their home games at Riddle–Pace Field. Schedule and results Troy announced its 2017 baseball schedule on October 27, 2016. The 2017 schedule consisted of 28 home and 28 away games in the regular season. The Trojans hosted Sun Belt foes Appalachian State, Georgia Southern, Louisiana–Lafayette, Louisiana–Monroe, and Texas State, and traveled to Coastal Carolina, Georgia State, Little Rock, South Alabama, and Texas–Arlington. The 2017 Sun Belt Conference Championship was contested May 24–28 in Statesboro, Georgia, and was hosted by Georgia Southern. Troy finished 4th in the East Division of the conference, which qualified the Trojans to compete in the tournament as the 6th seed, seeking the team's 4th Sun Belt Conference tournament title. Rankings are based on the team's current ranking in the Collegiate Baseball poll. References Troy Troy Trojans baseball seasons
2523233
https://en.wikipedia.org/wiki/The%20Iconfactory
The Iconfactory
The Iconfactory is a software and graphic design company that specializes in creating icons and software for creating and using icons. The company was founded in April 1996 by Corey Marion, Talos Tsui, and Gedeon Maheux. Lead Engineer Craig Hockenberry joined the company in 1997 and Artist Dave Brasgalla joined in January 1999. The company incorporated in January 2000. The Iconfactory gained popularity through the creation of packages of free icons for download, but quickly grew to become one of the leading studios in commercial icon design. The Iconfactory also publishes software for creating, organizing and using icons as well as general GUI applications. From 1997 until 2004, the Iconfactory held an annual icon design contest for the Macintosh icon community called Pixelpalooza. The competition was a chance for aspiring artists from all over the world to design and produce original icon creations for the chance of winning software and hardware prizes. Pixelpalooza was discontinued in 2005, although the company says it may make a return in the future. The Iconfactory's most notable client project to date was creating over 100 icons for Microsoft to be included in the Windows XP operating system as well as creating the base icons in Windows Vista's Aero interface. They have also created over 100 icons for the Xbox 360 UI and website. Software iPulse - mac OS Utility to visualize system activity. xScope - macOS Tools for developers and designers to measure elements on screen. xScope Mirror - iOS Tool to mirror Photoshop image onto iOS screen. DownloadCheck — Simple utility inspired by the MP3Concept trojan horse Linea Sketch - Sketching app for iPad Pro and Apple Pencil Linea Go - Sketching app for iPhone on the go. Linea Link - Get sketches drawn on iPad and iPhone over to your macOS desktop. Twitterrific - macOS version - Third-party client for the social networking site Twitter. Twitterrific - iOS version of Twitterrific, which won an Apple Design Award in 2008. Flare - Photo editing program for adding effects to and editing pictures. BitCam - an app that turns an iOS camera into a low res camera. Exify - an app that can provide a lot of information on the pictures in your camera roll. Clicker - an Apple Watch app that help you count things. Discontinued: CandyBar — Tool to organize and change system icons. Dine-O-Matic — Dashboard widget to help decide where to eat. Pixadex — Icon management tool similar to iPhoto. Replaced by CandyBar. IconBuilder — Adobe Photoshop Plugin for creating icons. IconDropper — Icon management utility for System 7, Mac OS 8 and Mac OS 9. Replaced by Pixadex. iControl — Utility to change any system icon in Mac OS 8 & 9. Replaced by CandyBar. Frenzic — Puzzle game that "takes minutes to learn and months to master" - macOS, iPhone, iPod Touch Android, and Nintendo version of Frenzic. Take Five - Program that fades paused music back in after being forgotten after a distraction. Ramp Champ — A new twist on classic boardwalk/carnival games such as Skee ball for the iPhone and iPod Touch, developed with DS Media Labs CandyBar and Pixadex were maintained with Panic. xScope and Frenzic are maintained with ARTIS Software. Sales In 2010 the total sales for the Iconfactory was $US 370,000 Alleged patent infringement On May 31, 2011, Lodsys asserted two of its four patents: U.S. Patent No. 7,620,565 ("the '565 patent") on a "customer-based design module" and U.S. Patent No. 
7,222,078 ("the '078 patent") on "Methods and Systems for Gathering Information from Units of a Commodity Across a Network." against Iconfactory and 6 other developers for using Apple, Inc.'s API for In-app purchase. See also Computer Icons Wikipedia:Icons IconBuilder Icon Composer References External links Companies established in 1996 Companies based in Greensboro, North Carolina Icon software Macintosh software companies
54528841
https://en.wikipedia.org/wiki/Anika%20Apostalon
Anika Apostalon
Anika Apostalon (born February 2, 1995) is a Czech-American competitive swimmer who specializes in freestyle and backstroke events. She was born in Albuquerque, New Mexico, and graduated from Albuquerque Academy in 2013. Apostalon graduated from the University of Southern California in 2017 with a 3.92 grade point average. She currently swims for the Toronto Titans and is affiliated with USK in Prague, Czech Republic. She is the Czech national record holder in the 50 m freestyle (SCM) and a 17-time NCAA All-American. Career International Swimming League In the fall of 2019, Apostalon signed for DC Trident in the ISL's inaugural season. In spring 2020, Apostalon signed for the newly formed Toronto Titans, the first Canadian-based ISL team. Collegiate career Apostalon began her collegiate career at San Diego State University, where she was a Division I dual-sport athlete in water polo and swimming. In 2014 she was named the Mountain West Swimmer of the Year and the conference's Freshman of the Year. She also set the Mountain West records for the 50-yard freestyle, 100-yard freestyle, and 100-yard backstroke while at SDSU. In 2015, Apostalon transferred to the University of Southern California to focus on her swimming career. In 2016 she led the Trojans to a Pac-12 Championship. She was a member of the 2016 400-yard freestyle NCAA national championship relay. Apostalon finished her college career as a 17-time NCAA All-American and an eight-time individual scorer at the NCAA national championships. Her successes have not been limited to swimming, however. As a senior, Apostalon made the CoSIDA Academic All-American Division I At-Large Team and was named the Pac-12 Women's Swimming Scholar Athlete of the Year. She was also named the Trojans' female recipient of the 2016-17 Tom Hansen Medal. US Olympic Trials Apostalon first competed in the United States Olympic Trials in 2012, when she turned in a 66th-place finish in the 100-meter backstroke. She returned to the Olympic Trials in 2016, and turned in a 12th-place performance in the 50-meter freestyle and a 24th-place finish in the 100-meter freestyle. Professional career In July 2018, Apostalon qualified for the Czech Republic national team and set a national record in the 100-meter freestyle. She went on to represent the Czech Republic at the 2018 European Championships in Glasgow, Scotland. She finished 12th in the 50 m freestyle and 11th in the 100 m freestyle. At the 2019 World Championships, Apostalon anchored the Czech 4×100 m freestyle relay with a split of 54.26 to place 7th and qualify for the 2020 Tokyo Olympics. Apostalon currently holds the Czech national record in the 50 m freestyle (SCM). References American female swimmers 1995 births Living people San Diego State Aztecs athletes USC Trojans women's swimmers American people of Czech descent Swimmers at the 2020 Summer Olympics
5906381
https://en.wikipedia.org/wiki/List%20of%20Maya%20plugins
List of Maya plugins
Maya plugins are extensions for the 3D animation software Autodesk Maya. There are plugins for many different areas, such as modeling, animation, and rendering. Some of them also interact with external applications (for instance renderers, game engines, or other software packages). Crowd simulation Dynamics Fluid Import/Export Modeling Rendering Maya plugins 3D graphics software Computer-aided design software Animation software IRIX software
7450
https://en.wikipedia.org/wiki/Context%20menu
Context menu
A context menu (also called a contextual, shortcut, or pop-up menu) is a menu in a graphical user interface (GUI) that appears upon user interaction, such as a right-click mouse operation. A context menu offers a limited set of choices that are available in the current state, or context, of the operating system or application to which the menu belongs. Usually the available choices are actions related to the selected object. From a technical point of view, such a context menu is a graphical control element. History Context menus first appeared in the Smalltalk environment on the Xerox Alto computer, where they were called pop-up menus; they were invented by Dan Ingalls in the mid-1970s. Microsoft Office v3.0 introduced the context menu for copy and paste functionality in 1990. Borland demonstrated extensive use of the context menu in 1991 at the Second Paradox Conference in Phoenix, Arizona. Lotus 1-2-3/G for OS/2 v1.0 added additional formatting options in 1991. Borland Quattro Pro for Windows v1.0 introduced the Properties context menu option in 1992. Implementation Context menus are opened via various forms of user interaction that target a region of the GUI that supports context menus. The specific form of user interaction and the means by which a region is targeted vary: On a computer running Microsoft Windows, macOS, or Unix running the X Window System, clicking the secondary mouse button (usually the right button) opens a context menu for the region that is under the mouse pointer. For speed, implementations may additionally support hold-and-release selection, in which the pointer button is held down, dragged, and released over the desired menu entry. On systems that support one-button mice, context menus are typically opened by pressing and holding the primary mouse button (this works on the icons in the Dock on macOS) or by pressing a keyboard/mouse button combination (e.g. Ctrl-mouse click in Classic Mac OS and macOS). A keyboard alternative for macOS is to enable Mouse keys in Universal Access. Then, depending on whether a laptop, compact, or extended keyboard type is used, the shortcut is ++5 or +5 (numeric keypad) or ++i (laptop). On systems with a multi-touch interface such as a MacBook or Surface, the context menu can be opened by pressing or tapping with two fingers instead of just one. Some smartphone cameras, for example, recognize a QR code when a picture is taken; a pop-up then appears offering to 'open' the QR content, which could be anything from a website to a prompt to configure the phone to connect to Wi-Fi. On some user interfaces, context menu items are accompanied by icons for quicker recognition upon navigation. Context menus can also have a top row of icons only, for quick access to the most frequently used options. Windows mouse click behavior is such that the context menu does not open while the mouse button is pressed; the menu only opens when the button is released, so the user has to click again to select a context menu item. This behavior differs from that of macOS and most free software GUIs. In Microsoft Windows, pressing the Application key or Shift+F10 opens a context menu for the region that has focus. Context menus are sometimes hierarchically organized, allowing navigation through different levels of the menu structure.
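To make the Windows behavior described above concrete, the following is a minimal sketch of how a native Win32 application typically shows a context menu; the window class name, the menu-item identifiers (IDM_COPY, IDM_PASTE), and the keyboard-placement offsets are illustrative assumptions rather than details taken from this article.

#include <windows.h>
#include <windowsx.h>

#define IDM_COPY  1001
#define IDM_PASTE 1002

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CONTEXTMENU:               // sent after a right-click, Shift+F10, or the Menu key
    {
        int x = GET_X_LPARAM(lParam);  // screen coordinates of the click
        int y = GET_Y_LPARAM(lParam);
        if (x == -1 && y == -1)        // keyboard invocation: no pointer position supplied
        {
            RECT rc;
            GetWindowRect(hwnd, &rc);  // so place the menu near the focused window instead
            x = rc.left + 20;
            y = rc.top + 20;
        }
        HMENU menu = CreatePopupMenu();
        AppendMenu(menu, MF_STRING, IDM_COPY, TEXT("&Copy"));
        AppendMenu(menu, MF_STRING, IDM_PASTE, TEXT("&Paste"));
        // TrackPopupMenu blocks until an item is chosen or the menu is dismissed.
        TrackPopupMenu(menu, TPM_RIGHTBUTTON, x, y, 0, hwnd, NULL);
        DestroyMenu(menu);
        return 0;
    }
    case WM_COMMAND:                   // the chosen item arrives here as its identifier
        if (LOWORD(wParam) == IDM_COPY)
            MessageBox(hwnd, TEXT("Copy chosen"), TEXT("Demo"), MB_OK);
        if (LOWORD(wParam) == IDM_PASTE)
            MessageBox(hwnd, TEXT("Paste chosen"), TEXT("Demo"), MB_OK);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.lpszClassName = TEXT("CtxMenuDemo");
    RegisterClass(&wc);
    HWND hwnd = CreateWindow(TEXT("CtxMenuDemo"), TEXT("Context menu demo"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             400, 300, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); }
    return 0;
}

Note how the keyboard case (Shift+F10 or the Application key) supplies no pointer position, which is why the sketch falls back to a position near the window itself, mirroring the focus-based placement described above.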
Implementations of such hierarchical context menus differ: Microsoft Word was one of the first applications to show sub-entries of some menu entries only after clicking an arrow icon on the context menu, otherwise executing an action associated with the parent entry. This makes it possible to quickly repeat an action with the parameters of the previous execution, and to better separate options from actions. X Window Managers The following window managers provide context menu functionality: 9wm Awesome IceWM—middle-click and right-click context menus on the desktop, menubar, titlebars, and titleicon olwm openbox sawfish Usability Context menus have received some criticism from usability analysts when improperly used, as some applications make certain features only available in context menus, which may confuse even experienced users (especially when the context menus can only be activated in a limited area of the application's client window). Context menus usually open in a fixed position under the pointer, but when the pointer is near a screen edge the menu will be displaced - thus reducing consistency and impeding use of muscle memory. If the context menu is triggered by keyboard, such as by using Shift + F10, the context menu appears near the focused widget instead of at the position of the pointer, to reduce the effort of locating it. In documentation Microsoft's guidelines call for always using the term context menu, and explicitly deprecate shortcut menu. See also Menu key Pie menu Screen hotspot References External links Graphical control elements Graphical user interface elements Macintosh operating systems user interface Windows administration
6748786
https://en.wikipedia.org/wiki/FLARM
FLARM
FLARM is a proprietary electronic system used to selectively alert pilots to potential collisions between aircraft. It is not formally an implementation of ADS-B, as it is optimized for the specific needs of light aircraft, not for long-range communication or ATC interaction. FLARM is a portmanteau of "flight" and "alarm". The installation of all physical FLARM devices is approved as a “Standard Change”, and the PowerFLARM Core specifically as a “Minor Change”, by the European Aviation Safety Agency. In addition, the Minor Change also approves the PowerFLARM Core for use under IFR and at night. Operation FLARM obtains its position and altitude readings from an internal GPS and a barometric sensor and then broadcasts these together with forecast data about the future 3D flight track. At the same time, its receiver listens for other FLARM devices within range and processes the information received. Advanced motion prediction algorithms predict potential conflicts for up to 50 other aircraft and alert the pilot using visual and aural warnings. FLARM has an integrated obstacle collision warning system together with an obstacle database. The database includes both point and segmented obstacles, such as split power lines and cableways. Unlike conventional transponders, FLARM has low power consumption and is relatively inexpensive to purchase and install. Furthermore, conventional Airborne Collision Avoidance Systems (ACAS) are not effective in preventing light aircraft from colliding with each other, as light aircraft can be close to each other without danger of collision. ACAS would issue continuous and unnecessary warnings about all aircraft in the vicinity, whereas FLARM only issues selective warnings about collision risks. Appraisal and attention FLARM Technology and the inventors of FLARM have won several awards. The Swiss Office of Civil Aviation (FOCA) also published in December 2010: "The rapid distribution of such systems only a few months after their introduction was not accomplished through regulatory measures, but rather on a voluntary basis and as a result of the wish on the part of the involved players to contribute towards the reduction of collision risk. The FOCA recommends that glider tow planes and helicopters that operate in lower airspace should also use collision warning systems." In addition, FLARM is mandatory on gliders in several countries including France, and the Soaring Society of America (SSA) strongly recommends FLARM in lieu of ADS-B Out. Versions Versions are sold for use in light aircraft, helicopters, and gliders. Newer PowerFLARM models extend the FLARM range to over 10 km. They also have an integrated ADS-B and transponder Mode-C/S receiver, making it possible to also avoid mid-air collisions with large aircraft. Newer devices can also act as authorized flight recorders by producing files in the IGC format defined by the FAI Gliding Commission. All FLARM devices can be connected to FLARM displays or compatible avionics (EFIS, moving map, etc.) to give visual and aural warnings and also to show the intruder's position on the map. Licensed manufacturers produce integrated FLARM devices in different avionics products. FLARM devices can issue spoken warnings similar to TCAS. Hardware A typical FLARM system consists of the following hardware components: Central microcontroller for data processing, e.g. Atmel AVR ISM/SRD band transceiver, e.g. NRF905 (Europe: 868 MHz) GPS module, e.g.
U-blox LEA-4S Barometric pressure sensor, which measures cabin pressure to estimate the altitude (not used for collision avoidance, which uses GPS altitude) Traffic and collision warning display, e.g. light emitting diodes or LC display and a buzzer (not installed in case of special remote units) (micro)SD card slot for configuration, logging and firmware updates RS-232 interface for external displays and firmware updates Protocol and criticism The FLARM radio protocol has always been encrypted, which the manufacturer justifies as ensuring the integrity of the system as well as addressing privacy and security considerations. Version 4, used in 2008, and Version 6, used in 2015, were reverse-engineered despite the encryption. However, FLARM changes the protocol on a regular basis. Decryption of the FLARM radio protocol might be illegal, especially in EU countries, although it has been argued that traffic advisory data may legally be decrypted by third parties solely for the purpose of nearby traffic advisory and collision avoidance, which is the intended use of the system. The radio protocol has been criticised for its proprietary encryption, including a petition encouraging a change to an open protocol. It has been argued that encryption increases processing time and contradicts the goal of increasing aviation safety due to a closed monopoly market, because an open protocol could enable third-party manufacturers to develop compatible devices, spreading the use of interoperable traffic advisory systems. FLARM Technology opposed these claims as published on the petition page and published a white paper explaining the design of the system. They offer the technology to third parties, which requires the implementation of the OEM circuit board in compatible devices. Radio protocol specifications and crypto keys are not shared with third-party manufacturers. While the FLARM serial data protocol is public, the prediction engine of FLARM is patented by Onera (France) and proprietary. It is licensed to manufacturers by FLARM Technology in Switzerland. Company FLARM was founded by Urs Rothacher and Andrea Schlapbach in 2003, who were later joined by Urban Mäder in 2004. First sales were made in early 2004. Currently there are nearly 30,000 FLARM-compatible devices (around half of them produced by FLARM Technology, the rest by licensed manufacturers who have now overtaken FLARM in current sales) in use, mainly in Switzerland, Germany, France, Austria, Italy, the UK, the Benelux, Scandinavia, Hungary, Israel, Australia, New Zealand and South Africa. FLARM's technology is also used in ground-based vehicles, including vehicles used in surface mining. These products are designed and produced by the Swiss company SAFEmine, now owned by the Swedish Hexagon Group. References External links System Design and Compatibility Overview of collision avoidance systems Comparison of Mode A/C, S, FLARM and ADS-B Enhancing the efficacy of Flarm radio communication protocol by computer simulation (English, German) Interview with Gerhard Wesp, Development Manager Avionics at Flarm Technology GmbH, March 2014 Avionics Aircraft collision avoidance systems Warning systems
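The conflict-prediction idea described in the Operation section above can be illustrated with a generic closest-point-of-approach (CPA) calculation. The sketch below is purely illustrative and is not FLARM's actual algorithm, which, as noted, is proprietary and patented; the straight-line flight assumption, the alert thresholds, and the sample positions are all invented for the example.

// Generic CPA check between two aircraft, given position and velocity.
// Units: metres, metres per second, seconds.
#include <cmath>
#include <cstdio>

struct State {
    double x, y, z;     // position relative to a local origin (m)
    double vx, vy, vz;  // velocity (m/s)
};

// Returns true if the two aircraft are predicted to come within 'radius' metres
// of each other during the next 'horizon' seconds, assuming straight-line flight.
bool conflictPredicted(const State& a, const State& b,
                       double radius, double horizon, double* tcpaOut)
{
    double rx = b.x - a.x,   ry = b.y - a.y,   rz = b.z - a.z;    // relative position
    double vx = b.vx - a.vx, vy = b.vy - a.vy, vz = b.vz - a.vz;  // relative velocity
    double v2 = vx * vx + vy * vy + vz * vz;

    // Time of closest approach (0 if the relative velocity is nearly zero).
    double tcpa = (v2 > 1e-9) ? -(rx * vx + ry * vy + rz * vz) / v2 : 0.0;
    if (tcpa < 0.0) tcpa = 0.0;        // closest approach already happened
    if (tcpa > horizon) return false;  // too far in the future to alert on

    double cx = rx + vx * tcpa, cy = ry + vy * tcpa, cz = rz + vz * tcpa;
    double minSeparation = std::sqrt(cx * cx + cy * cy + cz * cz);
    if (tcpaOut) *tcpaOut = tcpa;
    return minSeparation < radius;
}

int main()
{
    State glider   {    0.0,   0.0, 1000.0,  30.0, 0.0, 0.0 };  // flying east at 30 m/s
    State towplane { 1800.0, 100.0, 1000.0, -35.0, 0.0, 0.0 };  // approaching head-on
    double tcpa = 0.0;
    if (conflictPredicted(glider, towplane, 150.0, 30.0, &tcpa))
        std::printf("ALERT: conflict predicted in %.1f s\n", tcpa);
    return 0;
}

A real traffic-alerting device refines this basic idea considerably, for example by using the broadcast flight-path forecast instead of a straight-line assumption and by grading the warning by time to conflict.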
208074
https://en.wikipedia.org/wiki/ScummVM
ScummVM
Script Creation Utility for Maniac Mansion Virtual Machine (ScummVM) is a set of game engine recreations. Originally designed to play LucasArts adventure games that use the SCUMM system, it also supports a variety of non-SCUMM games by companies like Revolution Software and Adventure Soft. It was originally written by Ludvig Strigeus. Released under the terms of the GNU General Public License, ScummVM is free software. ScummVM re-implements the part of each game's software that interprets the scripting language used to describe the game world, rather than emulating the hardware the games ran on; as such, ScummVM allows the games it supports to be played on platforms other than those for which they were originally released. The team behind it also adds improvements such as bug fixes and translations, and works with commercial companies such as GOG.com on re-releases. Features ScummVM is a program that supports numerous adventure game engines via virtual machines, allowing the user to play supported adventure games on their platform of choice. ScummVM provides none of the original assets for the games it supports, and expects the user to own the original game's media in order to use the software legally. The official project website offers freeware games that work directly with ScummVM. In addition to running the games, ScummVM enables players to save and load the state of the virtual machine at any time, providing a save system on top of whatever the game itself may offer. It has also begun work on providing alternate controls for newer devices, such as mobile devices with touch screens, which operate on top of the original games. While ScummVM may appear to function as a game emulator, the ScummVM team does not consider it one. Apart from some subsystems, such as audio engines, for which it is forced to rely on emulation, ScummVM recreates game engines from their original languages in more portable C++ code, so that the high-level opcodes in a game's assets execute in the same manner as in the original release, while improving the portability of ScummVM to numerous platforms. The ScummVM team considers this an improvement over simply running the older games and their executables through an operating system emulator, such as DOSBox, since ScummVM's implementations are more lightweight and require less processing power and memory, allowing use in more limited processing environments like mobile devices. Ports Portability is a design goal of the project. Ports of ScummVM are available for Microsoft Windows, macOS and a variety of Unix-like systems including Linux (based on RPM, Debian, or source), members of the BSD family (FreeBSD, NetBSD, OpenBSD, DragonFly BSD) and Solaris. It has also been ported to console systems. Less mainstream personal computer ports include those to Amiga, Atari-FreeMiNT, Haiku-BeOS-ZETA, RISC OS, and OS/2 (including derivatives such as ArcaOS). A variety of game consoles have official ports. ScummVM has been ported to gaming machines such as the PlayStation 2, PlayStation 3, Dreamcast, Nintendo 64, GameCube, and Wii, and to handheld consoles including the GCW Zero, GP2X, Nintendo DS, Pandora, PlayStation Portable and the PS Vita. Handheld computer platforms supported include the Palm OS Tapwave Zodiac, Symbian (the UIQ platform and the Nokia Series 60, 80, and 90 phones, the last used by the Nokia 7710), Nokia's Internet Tablet OS (used by the Nokia 770, N800 and N810), Apple's iPhone, MotoMAGX and MotoEZX phones, and Windows Mobile.
Platforms supported by unofficial ScummVM ports include Microsoft's Xbox gaming console, and the BlackBerry PlayBook, Zaurus, Gizmondo, and GP32 portable device platforms. Mobile phones running Android, webOS, or Samsung's bada OS (the last unofficially) are also supported. History Work on ScummVM was started in September 2001 (with the first public release in October and a site launch in November) by computer science student Ludvig Strigeus. Looking to write his own adventure game, he set out to understand the mechanics of an existing game engine, specifically working to create an emulator to play Monkey Island 2. At about the same time, Vincent Hamm was also looking to develop a SCUMM emulator, and though he had done deeper research into how the SCUMM engine worked, he found that Strigeus was much further along, and the two joined forces to craft the emulator. While Strigeus finished the required emulation for Monkey Island 2, Hamm worked separately to prepare the engine for Indiana Jones and the Fate of Atlantis; once these were completed, the two found their efforts somewhat uncoordinated, but eventually got the emulator working for both games. News of ScummVM was picked up by the tech news website Slashdot in November 2001, drawing large interest to the project, and several other developers became part of the project to help support other games. These developers often turned to the creators of the original games to obtain information in informal ways, to help create the emulation. Further developers helped to support games that did not use SCUMM, such as Adventure Soft's Simon the Sorcerer; there was some debate about changing the name of the program at this point, but they ultimately kept the ScummVM title, believing that SCUMM was the most well-recognized adventure game engine. Strigeus had built support for iMUSE, the sound software used by many LucasArts games, but feared including it due to potential backlash from LucasArts. Other developers on the project advised him that there should be no legal issues and it was eventually included. Though Strigeus and Hamm would leave the project in 2002, by then it had a large enough development team to allow it to grow, led by James "Ender" Brown. Following this shift, the engine's source code was changed from C to C++, and a graphical user interface (GUI) was added. With increased awareness of the project, LucasArts sent a cease-and-desist letter to the project, believing it was using some of LucasArts' proprietary code. Brown worked over the next four years with LucasArts' legal representatives to explain the nature of the emulator and the source of their information, to demonstrate that what they had created was legal. Brown considered that LucasArts was trying to be accommodating, as ScummVM helped to raise interest in these titles. They ultimately came to a legal agreement to allow ScummVM to continue to be developed. The project would also incorporate other parallel efforts to make game emulators for other adventure games. Games from Sierra On-Line were in high demand for the project, requiring the team to emulate the Adventure Game Interpreter (AGI) and the more advanced Sierra Creative Interpreter (SCI) engines. AGI support was added in 2006 by incorporating efforts from the Sarien project, but efforts for SCI support were hampered by the parallel project, FreeSCI.
Though both ScummVM and FreeSCI aimed to reverse engineer the workings of SCI, the FreeSCI developers had stated that they took a more clean-room approach to avoid any legal questions about their reverse engineering, believed that the ScummVM project had run afoul of some of Sierra's approaches, and were thus hesitant to work together. However, FreeSCI began to languish in interest compared to ScummVM; after a developer took it upon themselves to make the FreeSCI engine work in ScummVM, the FreeSCI team saw more participation in their project, and they agreed to merge their efforts into ScummVM. Initial SCI support was subsequently released in a 2010 version of ScummVM. ScummVM continues to add new games and game engines, though the process of creating these is relatively slow. According to the team's project lead Eugene Sandulenko (as of 2017), game engines are chosen for inclusion in ScummVM either because the team is given source code that makes it easy to port them into the software's architecture, or because one or more of the team members are passionate enough about bringing a game engine into the program to do the difficult task of reconstructing the game's code from the compiled versions. The only restriction is that ScummVM will only include 2D game engines, leaving 3D games to be handled by the sister project ResidualVM. The 2.0 version of ScummVM was released in December 2017, adding support for several full-motion video games and some very obscure titles, such as Full Pipe and Plumbers Don't Wear Ties. With this release, ScummVM has support for 64 different game engines. Since around December 2017, ScummVM had been working on support for Macromedia Director in coordination with some of the original developers. Macromedia Director was used for many mid-1990s video games such as The Journeyman Project. By August 2021, the first versions of ScummVM with Director support were released, with the team continuing to work on improving performance. An attempt to bring in Another World by Éric Chahi caused some internal stress within the project in 2004. Another World was not a point-and-click adventure game, and used polygon-based graphics instead of the pixel-based graphics most adventure games employ, and thus was considered a serious departure from the focus of ScummVM. Though the effort was scrapped within a few days, after Chahi requested its removal as he was preparing a 15th-anniversary remaster for sale, the current leads of the project had to refocus the group and define the ideals that ScummVM should meet. ScummVM has also had difficulty in bringing in games built with Adventure Game Studio (AGS), which is used frequently for indie adventure games, such as the Blackwell series. While the source code for AGS had been put into the open by its developer Chris Jones in 2010, the ScummVM team was met with a large backlash of complaints from developers using the AGS engine for their games, who stated they did not want to see their games run in ScummVM. A couple of years later, AGS was nonetheless tested in the development build, with a request to the public to beta-test thousands of newly supported games, and all AGS v2.5+ games were eventually officially added to the program, coinciding with the project's 20th anniversary in October 2021. ScummVM has been a participant in the Google Summer of Code every year since 2007 except for 2015. A sister project, ResidualVM, was started to implement engines for three-dimensional adventure games, such as Grim Fandango and Myst III: Exile, named as such because these games represent the residual of those not already covered by ScummVM.
By late 2020, it was announced ResidualVM is officially merging with ScummVM. This was completed with the version 2.5 release, coinciding with the program's 20th anniversary in October 2021. Developer support According to Sandulenko "there is no typical process" when it comes to collaboration with developers, "Everything is ad-hoc. What we do, we try to search for contact info of people who were working on the titles some developer is interested in, and we’re inquiring access to their original source code, if it still exists somewhere. Then we start working on it at our own pace." With increased attention, ScummVM has entered into favorable agreements with adventure game developers to help bring their titles into the engine, or in some cases, being given source code and other assets to work from. Revolution Software helped the developers with source code and technical advice for its games, and once ScummVM supported the company's Virtual Theatre engine, Revolution released Lure of the Temptress and Beneath a Steel Sky as freeware and provided assets from its first two Broken Sword games in an open media format. The renewed interest in these games from younger players enabled Revolution to work on two more Broken Sword games. Other developers that have worked closely with ScummVM include: Adventure Soft: provided the original source code of their adventure games, Simon the Sorcerer, The Feeble Files and Elvira series. Alcachofa Soft: Emilio de Paz Aragón released the original source code of the adventure game Drascula: The Vampire Strikes Back as freeware. Creative Reality: Neil Dodwell and David Dew from Creative Reality released the original source code for their adventure Dreamweb, and the CD-ROM and floppy disk versions of the game as freeware, available for download on the ScummVM website. Gray Design Associates: David P. Gray provided the original source code of the Hugo trilogy Interactive Binary Illusions: released both the CD-ROM and the floppy disk version of their adventure game, Flight of the Amazon Queen as freeware available for download on the ScummVM website. Laboratorium Komputerowe Avalon: Janusz Wiśniewski and Miroslaw Liminowicz released the original source code of their adventure game Sołtys as freeware, available for download on the ScummVM website. Perfect Entertainment: John Young, Colin Smythe and Terry Pratchett provided the original source code of their adventure games, Discworld and Discworld II: Missing Presumed...!?. Wyrmkeep Entertainment: Joe Pearce provided the original source code of their adventure game, Inherit the Earth: Quest for the Orb. The digital storefront GOG.com which specializes in selling digital copies of older games, provides support to ScummVM, and sells titles that include the ScummVM engine as part of their distribution. Disney, which owns the rights to LucasArts adventure games, released Maniac Mansion on Steam running off ScummVM. Development Operation Stealth and Future Wars support was added by integrating another stand-alone recreation of their engine: cinE. TrollVM has also been integrated into ScummVM adding support for three pre-AGI games: Mickey's Space Adventure, Troll's Tale, and Winnie the Pooh in the Hundred Acre Wood. Mistic's GPL violations ScummVM is distributed as free software under the GPL-2.0-or-later license, enabling anyone to use the project as an engine for a game. 
For example, Revolution Software repackaged their Broken Sword games for a DVD release, using ScummVM with the included sword1 and sword2 engines to support modern computers. In December 2008, the ScummVM team learned that the recently released Wii ports of three Humongous Entertainment Junior Adventure titles, Freddi Fish and the Case of the Missing Kelp Seeds, Pajama Sam: No Need to Hide When It's Dark Outside, and Spy Fox: Dry Cereal, have all used the ScummVM engine without proper attribution. The games were published on request of Atari through Majesco Entertainment, who turned to Mistic Software to port the games. Mistic had used ScummVM for these, but failed to credit the developers. While the ScummVM team contacted gpl-violations.org for legal advice, Atari instead threatened to sue the ScummVM team, as the terms of Nintendo Wii development kit heavily restricted the use of open source software, including the GPL. A settlement was made in 2009, in which ScummVM would drop the investigation of the GPL violation, on the condition that Mistic would sell or destroy all GPL-violating copies of the games, make a donation to the Free Software Foundation, and pay the legal fees. As a result, this legal dispute significantly limited the availability of the Wii ports of these three titles. ResidualVM ResidualVM (formerly Residual) was a cross-platform computer program comprising 3D game engine recreations with a common graphical user interface. It supports Grim Fandango, Myst III: Exile, and The Longest Journey. It merged with ScummVM in October 2020. ResidualVM was originally designed to play LucasArts adventure games that use the GrimE game engine, and was later adapted to support other ones. Like ScummVM, the VM in ResidualVM stood for virtual machine. ResidualVM is a reimplementation of the part of the software used to interpret the scripting languages by conducting reverse engineering on the original game rather than emulating the hardware on which the games ran. As such, ResidualVM allows the games it supports to be played on platforms other than those for which they were originally released. The name of the project comes from the fact that it was originally started to support the residual LucasArts adventure games not supported by ScummVM. The original Lua-based engine used by LucasArts in their 3D adventure games was called GrimE (as opposed to SCUMM), so ResidualVM's title is also a word pun as grime is a type of residue. The project was started by former ScummVM team leader James Brown, and was first publicly available on August 15, 2003. Progress on the project was initially slow, and as a result the project's main goal of supporting Grim Fandango did not occur until April 25, 2011, when the compatibility of Grim Fandango was upgraded to "completable with a few minor glitches". The project obtained a domain separate from ScummVM on December 6, 2011. As a result of the new domain name, the project name was changed from Residual to ResidualVM. The logo was changed to reflect the new name on January 25, 2012. The first stable release of ResidualVM was released 9 years after the project started, on December 21, 2012. It merged with ScummVM in October 2021. Support ResidualVM was officially available on multiple platforms including Windows, Linux, Mac OS X, AmigaOS 4, and IRIX. In addition, an Android port is available in the source code, and unofficial builds have been made with that source. 
There is also a port available for the Pandora console, and for FreeBSD, but they are not official as they have not been added to the main branch. With increased attention, ResidualVM entered into favorable agreements with adventure game developers to help bring their titles into the engine. Cyan Worlds partnered with ResidualVM to release Myst III: Exile on digital platforms. The digital storefront GOG.com which specialized in selling digital copies of older games, sells Myst III: Exile with the ResidualVM engine as part of its distribution. ResidualVM Supported games The stable release supports Grim Fandango and Myst III: Exile, which are completable with a few minor glitches. In the development branch, there is also support for Escape from Monkey Island, which is completable with a few glitches, and The Longest Journey, which is completable with missing features. Like ScummVM, ResidualVM contains fixes for bugs present in the original executable. The ResidualVM team discovered a workaround for a bug that causes a critical dialog not to play in Grim Fandango. In addition, the Grim Fandango engine in ResidualVM has fixes for over a dozen other bugs present in the original. There is also a branch of ResidualVM called Grim Mouse, which allows Grim Fandango to be played completely with a mouse as a traditional point and click adventure game. Supported games The following games have support built into the current release of ScummVM. LucasArts games In order of the games' original release dates: Maniac Mansion Zak McKracken and the Alien Mindbenders Indiana Jones and the Last Crusade: The Graphic Adventure Loom The Secret of Monkey Island Monkey Island 2: LeChuck's Revenge Indiana Jones and the Fate of Atlantis Day of the Tentacle Sam & Max Hit the Road Full Throttle The Dig The Curse of Monkey Island Grim Fandango Sierra On-Line games The Beast Within: A Gabriel Knight Mystery The Black Cauldron Castle of Dr. Brain Codename: ICEMAN The Colonel's Bequest Conquests of Camelot: The Search for the Grail Conquests of the Longbow: The Legend of Robin Hood The Dagger of Amon Ra EcoQuest: The Search for Cetus EcoQuest II: Lost Secret of the Rainforest Freddy Pharkas: Frontier Pharmacist Gabriel Knight: Sins of the Fathers Gold Rush! Hi-Res Adventure #0: Mission Asteroid Hi-Res Adventure #1: Mystery House Hi-Res Adventure #2: Wizard and the Princess Hi-Res Adventure #3: Cranston Manor Hi-Res Adventure #4: Ulysses and the Golden Fleece Hi-Res Adventure #5: Time Zone Hi-Res Adventure #6: The Dark Crystal Hoyle's Official Book of Games: Volume 1, Volume 2 and Volume 3 The Island of Dr. Brain Jones in the Fast Lane King's Quest: Quest for the Crown King's Quest II: Romancing the Throne King's Quest III: To Heir Is Human King's Quest IV: The Perils of Rosella King's Quest V: Absence Makes the Heart Go Yonder! King's Quest VI: Heir Today, Gone Tomorrow King's Quest VII: The Princeless Bride King's Questions Leisure Suit Larry in the Land of the Lounge Lizards Leisure Suit Larry Goes Looking for Love (in Several Wrong Places) Leisure Suit Larry III: Passionate Patti in Pursuit of the Pulsating Pectorals Leisure Suit Larry 5: Passionate Patti Does a Little Undercover Work Leisure Suit Larry 6: Shape Up or Slip Out! Leisure Suit Larry: Love for Sail! 
Lighthouse: The Dark Being Manhunter: New York Manhunter 2: San Francisco Mickey's Space Adventure Mixed-Up Fairy Tales Mixed-Up Mother Goose Pepper's Adventures in Time Phantasmagoria Phantasmagoria II: A Puzzle of Flesh Police Quest: In Pursuit of the Death Angel Police Quest II: The Vengeance Police Quest III: The Kindred Police Quest IV: Open Season Police Quest: SWAT Quest for Glory: So You Want to Be a Hero Quest for Glory II: Trial by Fire Quest for Glory III: Wages of War Quest for Glory IV: Shadows of Darkness Rama Shivers Slater & Charlie Go Camping Space Quest: The Sarien Encounter Space Quest II: Vohaul's Revenge Space Quest III: The Pirates of Pestulon Space Quest IV: Roger Wilco and The Time Rippers Space Quest V: Roger Wilco – The Next Mutation Space Quest 6: Roger Wilco in The Spinal Frontier Torin's Passage Troll's Tale Winnie the Pooh in the Hundred Acre Wood Coktel Vision games Bargon Attack The Bizarre Adventures of Woodruff and the Schnibble Fascination Geisha Gobliiins Gobliins 2: The Prince Buffoon Goblins Quest 3 Lost in Time Once Upon A Time: Little Red Riding Hood Playtoons: Bambou le Sauveur de la Jungle Urban Runner Ween: The Prophecy Adventuresoft-Horrorsoft games Elvira: Mistress of the Dark Elvira II: The Jaws of Cerberus The Feeble Files Personal Nightmare Simon the Sorcerer Simon the Sorcerer II: The Lion, the Wizard and the Wardrobe Simon the Sorcerer's Puzzle Pack Waxworks Humongous Entertainment games Various games by Humongous Entertainment use the SCUMM engine, and are therefore playable with ScummVM. Backyard Baseball Backyard Baseball 2001 Backyard Baseball 2003 Backyard Football Backyard Football 2002 Big Thinkers! First Grade Big Thinkers! Kindergarten Blue's 123 Time Activities Blue's ABC Time Activities Blue's Art Time Activities Blue's Birthday Adventure Blue's Reading Time Activities Fatty Bear's Birthday Surprise Fatty Bear's Fun Pack Freddi Fish and the Case of the Missing Kelp Seeds Freddi Fish 2: The Case of the Haunted Schoolhouse Freddi Fish 3: The Case of the Stolen Conch Shell Freddi Fish 4: The Case of the Hogfish Rustlers of Briny Gulch Freddi Fish 5: The Case of the Creature of Coral Cove Freddi Fish and Luther's Maze Madness Freddi Fish and Luther's Water Worries Let's Explore the Airport with Buzzy Let's Explore the Farm with Buzzy Let's Explore the Jungle with Buzzy Pajama Sam: No Need to Hide When It's Dark Outside Pajama Sam 2: Thunder and Lightning Aren't so Frightening Pajama Sam 3: You Are What You Eat from Your Head to Your Feet Pajama Sam's Lost & Found Pajama Sam's Sock Works Pajama Sam: Games to Play on Any Day Putt-Putt and Pep's Balloon-O-Rama Putt-Putt and Pep's Dog on a Stick Putt-Putt Enters the Race Putt-Putt Goes to the Moon Putt-Putt Joins the Circus Putt-Putt Joins the Parade Putt-Putt Saves the Zoo Putt-Putt Travels Through Time Putt-Putt's Fun Pack Spy Fox in "Dry Cereal" Spy Fox 2: "Some Assembly Required" Spy Fox 3: "Operation Ozone" Spy Fox in Cheese Chase Spy Fox in Hold the Mustard Games by other developers ScummVM also supports the following non-SCUMM games: 3 Skulls of the Toltecs The 7th Guest Amazon: Guardians of Eden Beavis and Butt-Head in Virtual Stupidity Beneath a Steel Sky Blade Runner Blazing Dragons Blue Force Broken Sword: The Shadow of the Templars Broken Sword II: The Smoking Mirror Broken Sword 2.5: The Return of the Templars Bud Tucker in Double Trouble Chivalry is Not Dead The Crimson Crown Cruise for a Corpse Crusader: No Remorse Darby the Dragon Discworld Discworld II: Missing 
Presumed...!? Dragon History Dráscula: The Vampire Strikes Back DreamWeb Duckman: The Graphic Adventures of a Private Dick Eye of the Beholder Eye of the Beholder II: The Legend of Darkmoon Flight of the Amazon Queen Full Pipe Future Wars Gregory and the Hot Air Balloon The Griffon Legend Hopkins FBI Hugo's House of Horrors Hugo II, Whodunit? Hugo III, Jungle of Doom! Hyperspace Delivery Boy! I Have No Mouth, and I Must Scream Inherit the Earth: Quest for the Orb The Journeyman Project: Pegasus Prime The Journeyman Project 2: Buried in Time The Labyrinth of Time Lands of Lore: The Throne of Chaos Leather Goddesses of Phobos 2 The Legend of Kyrandia: Fables and Fiends The Legend of Kyrandia: Hand of Fate The Legend of Kyrandia: Malcolm's Revenge Little Big Adventure Living Books series The Longest Journey The Lost Files of Sherlock Holmes: The Case of the Rose Tattoo The Lost Files of Sherlock Holmes: The Case of the Serrated Scalpel Lure of the Temptress L-Zone Magic Tales: Liam Finds a Story Magic Tales: The Princess and the Crab Magic Tales: Sleeping Cub's Test of Courage The Manhole Might and Magic IV: Clouds of Xeen Might and Magic V: Darkside of Xeen Might and Magic: Swords of Xeen Mission Supernova Part 1 and Part 2 Mortville Manor Myst Myst III: Exile The Neverhood Nightlong: Union City Conspiracy Nippon Safes Inc. Oo-Topos Operation Stealth Plumbers Don't Wear Ties The Prince and the Coward Private Eye Red Comrades Save the Galaxy Red Comrades 2: For the Great Justice Return to Ringworld Return to Zork Rex Nebular and the Cosmic Gender Bender Ringworld: Revenge of the Patriarch Riven Rodney's Funscreen Sfinx Sołtys Spaceship Warlock Starship Titanic Teenagent Tony Tough and the Night of Roasted Moths Toonstruck Touché: The Adventures of the Fifth Musketeer Transylvania U.F.O.s Ultima IV: Quest of the Avatar Ultima VI: The False Prophet Ultima VIII: Pagan Versailles 1685 Voyeur Zork: Grand Inquisitor Zork Nemesis Several Adventure Game Studio games Several interactive fiction games Games in development The following games are only available in unstable daily builds, and are planned for official support in an upcoming version. The 11th Hour Clandestiny Escape from Monkey Island Sanitarium Spider-Man: The Sinister Six Tender Loving Care Uncle Henry's Playhouse Notes See also Game engine recreation Z-machine :Category:ScummVM-supported games References External links 2001 software Adventure game engines Amiga software AmigaOS 4 software BeOS software BSD software Cross-platform software Free and open-source Android software Free software Free software programmed in C++ Free software projects Free virtualization software Linux software MacOS games MorphOS software OS/2 software Palm OS software Pocket PC software RISC OS software Solaris software Unix software Windows games
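The engine-recreation approach described in the Features section of the ScummVM article above — executing a game's original high-level opcodes in portable C++ rather than emulating the hardware — can be sketched with a deliberately tiny, hypothetical bytecode interpreter. The opcodes below are invented for illustration and bear no relation to the real SCUMM instruction set; in ScummVM the bytecode comes from the original game's data files, whereas here it is hard-coded so the sketch stays self-contained.

// A toy script interpreter in the spirit of ScummVM's engine re-creations:
// game data supplies bytecode, and portable C++ executes it opcode by opcode.
#include <cstdint>
#include <cstdio>
#include <vector>

enum Opcode : uint8_t { OP_SAY = 0x01, OP_WAIT = 0x02, OP_END = 0xFF };

void runScript(const std::vector<uint8_t>& code)
{
    size_t ip = 0;  // instruction pointer into the script
    while (ip < code.size())
    {
        switch (code[ip++])
        {
        case OP_SAY: {                  // operand: a NUL-terminated string to display
            std::printf("%s\n", reinterpret_cast<const char*>(&code[ip]));
            while (code[ip++] != 0) {}  // advance past the string and its terminator
            break;
        }
        case OP_WAIT:                   // operand: one byte, a number of ticks
            // A real engine would yield to its event/render loop here.
            ip += 1;
            break;
        case OP_END:
        default:
            return;
        }
    }
}

int main()
{
    std::vector<uint8_t> script = {
        OP_SAY, 'H', 'e', 'l', 'l', 'o', '!', 0,
        OP_WAIT, 60,
        OP_END
    };
    runScript(script);
    return 0;
}

Re-implementing the dispatch and the opcode semantics in portable code, rather than emulating a CPU and operating system underneath the original executable, is what lets the same game data run unchanged on every platform ScummVM is ported to.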
1420680
https://en.wikipedia.org/wiki/VSI%20BASIC%20for%20OpenVMS
VSI BASIC for OpenVMS
VSI BASIC for OpenVMS is the latest name for a dialect of the BASIC programming language created by Digital Equipment Corporation (DEC) and now owned by VMS Software Incorporated (VSI). It was originally developed as BASIC-PLUS in the 1970s for the RSTS-11 operating system on the PDP-11 minicomputer. It was later ported to OpenVMS, first on VAX, then Alpha, and most recently Integrity. Past names for the product include: BASIC-PLUS, Basic Plus 2 (BP2 or BASIC-Plus-2), VAX BASIC, DEC BASIC, Compaq BASIC for OpenVMS and HP BASIC for OpenVMS. Multiple variations of the titles noting the hardware platform (VAX, AlphaServer, etc.) also exist. Notable features VSI BASIC has many FORTRAN-like extensions, as well as supporting the original Dartmouth BASIC matrix operators. Line numbers are optional, unless the "ERL" function is present. It allows the programmer to write "WHEN ERROR" error handlers around protected statements. The more traditional but less elegant "ON ERROR" statement lacks such context or scope. One of VSI BASIC's more noteworthy features is built-in support for OpenVMS's powerful Record Management Services (RMS). Before the release of VAX BASIC, native RMS support was only available in DEC's COBOL compiler. History The VSI BASIC for OpenVMS product history spans a period of more than 30 years, and it has gone through many name and ownership changes in that time. It has also been ported to a succession of new platforms as they were developed by DEC, Compaq, HP and VSI. The company and/or platform name has often been included in the product name, contributing to the proliferation of names. BASIC-PLUS VSI BASIC began as BASIC-PLUS, created by DEC for their RSTS-11 operating system and PDP-11 minicomputer. Programming language statements could either be typed into the command interpreter directly, or entered into a text editor, saved to a file, and then loaded into the command interpreter from the file. Errors in source code were reported to the user immediately after the line was entered. Programs were stored as a source file, using the "SAVE" command. A program could be "compiled" into a non-editable binary file, using the "COMPILE" command. This command did not produce true machine language programs, but rather a byte code called "tokens". The tokens were interpreted upon execution, in a manner similar to the more modern Java. Programs were entered into the command interpreter starting with line numbers, integers from 1 to 32767. Statements could be continued onto multiple lines by using a line feed character. For ease of external editing of the source file, later versions of BASIC-PLUS also allowed the & character as a line-continuation character. Multiple statements could be placed on a single line using the backslash ("\") as the statement separator. For PDP-11 systems with virtual memory (RSTS/E), address space was limited to about 64 KB. With BASIC-PLUS, about half of this was used by the combined command interpreter and run-time library. This limited user programs to about 32 KB of memory. Older RSTS-11 systems lacked virtual memory, so the user program had to fit into whatever was left of physical memory after RSTS and BASIC-PLUS took up their share. For example, on a PDP-11/35 with 32K of physical memory, running RSTS-11 V04B-17, user programs were limited to 7 KB. Large programs could be broken up into various pieces by use of the "CHAIN" instruction. Programs could chain to specific line numbers in a secondary program.
The use of a shared memory section called core common also allowed programs to pass data among each other as needed; disk files could also be used but were slower. The interpreter included a garbage-collecting memory manager, used for both string data and byte code. A running program could be interrupted, have variables examined and modified, and then be resumed. Many of the control structures used in other high-level languages existed in BASIC-PLUS, including WHILE and UNTIL. The language also supported the use of conditional modifiers on a single line. For example, the line "PRINT I UNLESS I < 10" would print the value of "I" unless I was less than 10. BASIC Plus 2 Basic Plus 2 (BP2 or BASIC-Plus-2) was later developed by DEC to add additional features and increase performance. It used true compilation into threaded code, and wrote its output to machine language object files. These were compatible with other object files on the system, and could be assembled into libraries. A linker (the TKB taskbuilder) then created executable files from them. TKB also supported overlays; this allowed individual routines to be swapped into the main memory space as needed. BP2 programs ran under RSX-11 or RSTS/E's RSX Run Time System. This RTS occupied only 8 KB (later, 2 KB) of the user's address space, leaving 56 KB for the user's program. These two factors allowed individual BP2 programs to be much larger than BASIC-PLUS programs, often eliminating the need for CHAINing. Unlike BASIC-PLUS (which was only available on RSTS-11), BP2 could also be used on the RSX-11 operating system. VAX BASIC and DEC BASIC With the creation of the VAX minicomputer, DEC ported BASIC-PLUS-2 to the new VMS operating system, and called it VAX BASIC. VAX BASIC followed the standard VMS calling conventions, so object code produced by VAX BASIC could be linked with object code produced by any of the other VMS languages. Source code for BASIC Plus 2 would usually run without major changes on VAX BASIC. When DEC created their Alpha microprocessor, VMS was ported to it and renamed OpenVMS. VAX BASIC was likewise ported to Alpha and renamed DEC BASIC. The BASIC interpreter was permanently dropped at this point, which meant that DEC BASIC programs could only be run as OpenVMS executables, produced by a compile followed by a link. Compaq, HP and VSI When DEC was purchased by Compaq in 1997/98, the products were renamed Compaq BASIC for OpenVMS VAX and Compaq BASIC for OpenVMS Alpha. Likewise, when Compaq merged with HP in 2001/02, the products were renamed HP BASIC for OpenVMS on VAX and HP BASIC for OpenVMS on AlphaServer. HP later released HP BASIC for OpenVMS on Integrity for their Integrity server platforms based upon Intel's Itanium processors. In mid-2014, HP sold the whole OpenVMS ecosystem to VSI, which renamed the product VSI BASIC for OpenVMS. Sample code Hello, world:
PRINT "Hello, world!"
Celsius to Fahrenheit conversion:
10  PRINT "Enter a temperature in Celsius ";
    INPUT C
    when error in
        X = REAL(C)
        PRINT "Temperature in degrees Fahrenheit is "; (X * 1.8) + 32
    use
        PRINT "Error: Enter a valid numeric value."
    end when
40  END
Note: VSI BASIC does not require line numbers. References External links Official BASIC documentation at HP HP BASIC for OpenVMS "source code demos and examples" Articles with example BASIC code OpenVMS software BASIC compilers BASIC programming language family
2001751
https://en.wikipedia.org/wiki/MSDOS.SYS
MSDOS.SYS
MSDOS.SYS is a system file in MS-DOS and Windows 9x operating systems. In versions of MS-DOS from 1.1x through 6.22, the file comprises the MS-DOS kernel and is responsible for file access and program management. MSDOS.SYS is loaded by the DOS BIOS IO.SYS as part of the boot procedure. In some OEM versions of MS-DOS, the file is named MSDOS.COM. In Windows 95 (MS-DOS 7.0) through Windows ME (MS-DOS 8.0), the DOS kernel was combined with the DOS BIOS into a single file, IO.SYS (aka WINBOOT.SYS), while MSDOS.SYS became a plain text file containing boot configuration directives instead. If a WINBOOT.INI file exists, the system will retrieve these configuration directives from WINBOOT.INI rather than from MSDOS.SYS. When Windows 9x is installed over a preexisting DOS install, the Windows file may be temporarily named MSDOS.W40 for as long as Windows' dual-boot feature is running the previous OS. Likewise, the MSDOS.SYS of the older system is named MSDOS.DOS for as long as Windows 9x is active. Some DOS utilities expect the MSDOS.SYS file to have a minimum file size of at least 1 KB. This is why a large dummy comment has typically been included in the MSDOS.SYS configuration file since Windows 95. By default, the file is located in the root directory of the bootable drive/partition (normally C:\ for hard disks) and has the hidden, read-only, and system file attributes set. The MS-DOS derivative (DCP) by the former East German VEB Robotron used a different filename instead. IBM PC DOS as well as DR DOS since 5.0 (with the exception of DR-DOS 7.06) used the file IBMDOS.COM for the same purpose, whereas DR DOS 3.31 to 3.41 used DRBDOS.SYS instead. FreeDOS uses the file KERNEL.SYS for the same purpose. Windows NT-based operating systems (NT 3.1–4, 2000, XP, and 2003) use the NTLDR file, and NT 6+ operating systems (Vista, 2008, 7, 8, 8.1, and 10) use bootmgr instead, as they have a different boot sequence. See also IO.SYS IBMDOS.COM DRBDOS.SYS COMMAND.COM List of DOS system files Architecture of Windows 9x Notes References External links MSDOS.SYS in Windows 9x (95/98/ME): Microsoft Knowledge Base (MSKB): List of MSDOS.SYS articles MDGx: Windows 95/98/ME Complete MSDOS.SYS Reference UKT Support: Contents of the MSDOS.SYS File Computer Hope: Information about the Windows MSDOS.SYS file MDGx: WINBOOT.INI DOS kernel DOS files DOS configuration files
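For illustration, a representative MSDOS.SYS from a Windows 98 machine might look like the following; this particular file is hypothetical, the paths and option values vary from system to system, and the trailing comment lines exist only to pad the file past the 1 KB size that some utilities expect, as described above.

[Paths]
WinDir=C:\WINDOWS
WinBootDir=C:\WINDOWS
HostWinBootDrv=C

[Options]
BootMulti=1
BootGUI=1
AutoScan=1
Logo=1
;
;The following lines are required for compatibility with other programs.
;Do not remove them (MSDOS.SYS needs to be >1024 bytes).
;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa
;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxb

Here the [Paths] entries tell IO.SYS where the Windows system files are located, while in [Options] BootMulti=1 enables dual-booting the previous DOS version, BootGUI=1 starts the graphical interface automatically, AutoScan=1 prompts to run ScanDisk after an improper shutdown, and Logo=1 shows the startup splash screen; the lines of x characters are the dummy padding comment mentioned above.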
1641481
https://en.wikipedia.org/wiki/Sandvine
Sandvine
Sandvine Incorporated is an application and network intelligence company based in Waterloo, Ontario. Sandvine markets network policy control products that are designed to implement broad network policies, including Internet censorship, congestion management, and security. Sandvine's products target Tier 1 and Tier 2 networks for consumers, including cable, DSL, and mobile.

Operation
Sandvine classifies application traffic across mobile and local networks by user, device, network type, location and other parameters. The company then applies machine learning-based analytics to real-time data and makes policy changes to optimize, secure, and monetize applications. As of 2021, Sandvine had over 500 customers globally, spanning 2.5 billion network users across more than 100 countries.

Company history
Sandvine was formed in August 2001 in Waterloo, Ontario, by a team of approximately 30 people from PixStream, a then-recently closed company acquired by Cisco. An initial round of VC funding launched the company with $20 million CDN. A subsequent round of financing of $19 million (CDN) was completed in May 2005. In March 2006 Sandvine completed an initial public offering on the London AIM exchange under the ticker 'SAND'. In October 2006 Sandvine completed an initial public offering on the Toronto Stock Exchange under the ticker 'SVC'.

Initial product sales focused on congestion management and fair usage as service providers struggled with the rapid growth in broadband traffic. As fiber rollouts and 4G networks became more prevalent, the company's application optimization and monetization use cases were adopted by many customers. This allowed service providers to deliver usage and application-based plans, zero-rate applications, reduce fraud, and introduce security and parental controls as a way to generate new revenues.

In June 2007 Sandvine acquired CableMatrix Technologies for its PacketCable Multimedia (PCMM)-based PCRF, which enables broadband operators to increase subscriber satisfaction while delivering media-rich IP applications and services such as SIP telephony, video streaming, on-line gaming, and videoconferencing.

In July 2017 Sandvine shareholders accepted a $562 million (CDN) takeover bid from PNI Acquireco Corp., an affiliate of Francisco Partners and Procera Networks. The acquisition was completed in September 2017, when Sandvine shares ceased to be listed on the Toronto Stock Exchange. The acquisition was completed despite concerns raised by Ronald Deibert, the director of the Citizen Lab at the Munk School of Global Affairs at the University of Toronto, who argued that the takeover required “closer scrutiny” by the federal government, largely in light of the activities of two of Francisco’s portfolio companies. Most notably, Procera Networks was involved in a controversy in which its technology was alleged to have been used to spy on Turkish citizens.

Sandvine's P2P throttling focuses on Gnutella, and uses a path cost algorithm to reduce speeds while still delivering the same content. Sandvine uses stateful deep packet inspection and packet spoofing to allow the networking device to determine the details of the P2P conversation, including the hash requested. The device can then determine the optimal peer to use, and substitute it for the one selected by the P2P algorithm by "[sitting] in the middle, imitating both ends of the connection, and sending reset packets to both client and server."
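The reset-injection technique described in the quotation above can in principle be observed from outside the network: if traffic is captured at both ends of a connection, a TCP RST segment that one side received but the other side never sent must have been forged by a device on the path. The sketch below, in Python with the scapy library, illustrates that idea; the capture file names and the placeholder peer address are assumptions made for the example, not the methodology of any particular study of Sandvine equipment.

from scapy.all import rdpcap, IP, TCP  # requires scapy 2.x

def rst_segments(pcap_path):
    # Yield (source IP, destination IP, sequence number) for every
    # TCP segment in the capture that has the RST flag (0x04) set.
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x04:
            yield pkt[IP].src, pkt[IP].dst, pkt[TCP].seq

PEER_IP = "198.51.100.7"  # placeholder address of the remote peer

# RSTs the client received that appear to come from the peer...
seen_at_client = {r for r in rst_segments("client.pcap") if r[0] == PEER_IP}
# ...compared with the RSTs the peer actually sent in its own capture.
sent_by_peer = set(rst_segments("peer.pcap"))

for rst in seen_at_client - sent_by_peer:
    print("possible injected RST:", rst)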
In March 2018, Citizen Lab published a report showing evidence that PacketLogic devices from Sandvine could have been used to deploy government spyware in Turkey and redirect Egyptian users to affiliate ads.

Internet throttling and censorship
Sandvine products were used by Comcast in the United States to limit the number of sessions of Internet traffic generated by peer-to-peer file sharing software. Sandvine's current traffic discrimination product, Fairshare, is described in detail in an RFC. According to independent testing, Comcast injected reset packets into peer-to-peer connections, which effectively caused a certain limited number of outbound connections to immediately terminate.

According to research by Citizen Lab, products sold by Sandvine are being used to facilitate censorship of the Internet in Egypt, an allegation the company denies.

Support for Internet shutdowns during Belarus protests
During the 2020–21 Belarusian protests, Belarusian officials shut down internet access with technology made by Sandvine. Peter Micek, general counsel at the human rights group Access Now, called on federal authorities to investigate Sandvine and the private equity firm Francisco Partners and questioned the effectiveness of Sandvine’s business ethics committee. “Their services appear to have been used in Belarus to silence people and to cover up egregious human rights violations”, Micek said. According to media reports, there was unease among Sandvine employees about the role of their company in the repression of political protests in Belarus since August 2020. In a conference call with employees on September 10, 2020, Sandvine's management, however, seems to have been unapologetic about their role in Belarus: "Alexander Haväng, Sandvine’s chief technology officer, ... said that Sandvine had concluded that the internet, and access to specific material on websites, wasn’t 'a part of human rights'. 'We don’t want to play world police', he said. 'We believe that each sovereign country should be allowed to set their own policy on what is allowed and what is not allowed in that country.'"

On September 15, 2020, Sandvine cancelled its deal with Belarus, saying that the government "used its product to violate human rights". However, its hardware was left at two locations near Minsk, which allows the government to control approximately 40% of internet traffic in the country.

References

External links
Official website

Companies based in Waterloo, Ontario Companies established in 2001 Electronics companies of Canada Networking hardware companies Canadian brands Companies formerly listed on the Toronto Stock Exchange
10942529
https://en.wikipedia.org/wiki/Mykonos%20vase
Mykonos vase
The Mykonos vase, a pithos, is one of the earliest dated objects (Archaic period, c. 675 BC) to depict the Trojan Horse from the Greek legend of the Trojan War. It was found in 1961 (with human bones inside) on Mykonos, the Greek island for which it is named, by a local inhabitant. It is on display at the Archaeological Museum of Mykonos.

Description
The neck of the pithos portrays the moment when the Trojan Horse is surrounded by (Greek) warriors, with additional warriors seen in the portholes. Beneath it there are three lines of metopes, each containing figures poised in battle. The lower part of the vase is blank.

The warriors that surround the horse are represented in a formulaic manner. Their heads and legs appear behind rounded bossed shields and they carry spears. Those presented on the upper metope are portrayed in a similar fashion. However, the warriors on the main body of the pithos do not carry shields and can be seen assailing women and children who face them. The women have thick manes of hair and expressive hand gestures.

Two factors confirm that the Mykonos vase depicts the sack of Troy: the depiction of the wooden horse on the neck of the vase, and the individual scenes of slaughter that accompany it. They dominate the pithos, showing warriors in the portholes of the Trojan Horse – a preview of what the horse holds in store – as well as warriors in action on the ground. In this piece, the artist chose to present the battle in the city of Troy as a battle against defenseless women and children; there are no Trojan warriors present on the pithos. The artist's decision to separate the scenes of slaughter and link them with the cunning trick of the Trojan Horse serves as a way of focusing the attention of the viewer on the cold-bloodedness of the sack of the city and the way the slaughter in the city differed from the fighting on the front lines.

There are three metopes that stand out in the pithos. At the far right of the middle row there is a lone warrior drawing his sword and advancing. In the far left panel is a lone woman, clasping her hands to her breast. The third metope, located directly below the horse in the center of the panels, depicts a single warrior who has been stabbed in the neck, crumpled over his shield while his right hand grasps for his scabbard. Amid the violence, we are presented with three single figures, two of which are not (yet) caught up in action but able to contemplate, and one lone dead warrior post-action. The depiction of single figures emphasizes the personal experience of violence. It is unknown whether the dead warrior is Greek or Trojan, but perhaps the artist's placement of the figure in the center was his/her way of pointing out that it did not matter which side he was on – his fate was still the same.

The Mykonos vase alludes to stories within stories, but precludes any easy stringing of the episodes together. The vase also plays with viewpoint: it entices the viewer to look through the portholes of the horse and presents the sack of Troy as slaughter of the defenseless, evoking sympathy for victims of war, in spite of the Trojan theft that was the reason for the war.

References

Bibliography
Michael John Anderson, The Fall of Troy in Early Greek Poetry and Art, 1997.
Miriam Ervin Caskey, "Notes on Relief Pithoi of the Tenian-Boiotian Group", AJA, 80, 1976, pp. 19–41.
M. Ervin, "A Relief Pithos from Mykonos", Deltion, 18, 1963, pp. 37–75.
J. M. Hurwit, The Art and Culture of Early Greece, 1100–480 B.C., 1985.
M. Wood, In Search of the Trojan War, 1985.
http://www.uwm.edu/Course/mythology/1200/twar2.htm

Individual ancient Greek vases Archaic Greek art Mykonos 7th-century BC works Archaeological discoveries in Greece 1961 archaeological discoveries
21103176
https://en.wikipedia.org/wiki/FUDforum
FUDforum
FUDforum is free and open-source Internet forum software, originally produced by Advanced Internet Designs Inc., that is now maintained by the user community. The name "FUDforum" is an abbreviation of Fast Uncompromising Discussion forum. It is comparable to other forum software.

FUDforum is customizable and has a large feature set relative to other forum packages. FUDforum runs on a number of operating systems that are able to support the PHP programming language, including Unix, Linux and Windows systems. To store its data, FUDforum relies on either IBM DB2, Firebird, MS-SQL, MySQL, Oracle, PostgreSQL or SQLite. The interface is based on HTML5 with CSS, jQuery and AJAX, providing a more flexible user interface.

The code is released under the GNU General Public License and Internet sites can use the software royalty-free.

History
FUDforum was originally developed by Ilia Alshanetsky. The first version of FUDforum was released in 2001. Versions 2.8.0 and above are developed and supported by the community. The ten-year anniversary release, FUDforum 3.0.3, was released on 10 September 2011.

Requirements
FUDforum requires the following components to function:
A web server that will run PHP, such as the Apache Web Server or Microsoft's Internet Information Services;
PHP version 7.0 or higher;
A database like IBM DB2, Firebird, MS-SQL, MySQL, Oracle, PostgreSQL or SQLite. SQLite is supported via PHP's PDO driver.

Preconfigured versions
Two special pre-configured versions of FUDforum are available (these do not require manual installation):
FUDforum2Go, a small-footprint version of FUDforum for Microsoft Windows that can run from a USB stick, CD-Rom or from any folder on a PC's hard disk. FUDforum2Go is based on Server2Go.
Turnkey FUDforum, a virtual appliance based on the TurnKey Linux Virtual Appliance Library that can be deployed in the cloud or on a virtual machine infrastructure like VMWare, Xen or VirtualBox.

Features
Some of FUDforum's features include:
The software supports an unlimited number of members, forums, posts, threads and attachments.
Ability to load USENET and E-mail list messages and sync forum replies back to these groups and lists.
Customizable theme system based on templates.
Translated into over 50 languages on translatewiki.net (including German, French, Dutch, Spanish, Portuguese and several other languages).
Search engine friendly URLs.
Flood control and Captcha spam protection.
Blacklisting of users and IP blocking.
Messages can be stored within the database or on the filesystem for extra performance.
Topics can be listed in flat or threaded mode.
User Avatars.
Private messaging system.
Poll creation.
Built-in search engine.
RSS-feed syndication.
Message and Quick Reply editors that support BBCode, HTML, plain text and Smilies.
User, Moderator and Admin Control Panels.
A permission based user/group management system.
A plugin system that can be used to extend the forum's functionality.
A forum calendar.
Custom profile fields.

Integration
A MediaWiki extension and a Drupal CMS module are available to provide FUDforum integration. FUDforum can also be integrated with systems like DokuWiki and eGroupWare (not with eGroupware version 1.6). Custom integration of third party tools can be achieved via FUDforum's provided API, called FUDAPI.

Conversion scripts
Several conversion scripts are provided to convert other bulletin board and forum software to FUDforum.
They include: IkonBoard; Invision Board; OpenBB; Phorum; phpBB; punBB; Sporum; VBulletin; WoltLab Burning Board; WWWBoard; XMB; Yabb and Yabb DC. See also Comparison of Internet forum software Integrated Content Management Systems: Drupal, MediaWiki and eGroupWare. References Further reading Nicholas Petreley (April 22, 2002) No FUD about FUDForum, SYS-CON Belgium Nicholas Petreley (October 23, 2002) Stop your BBS shopping & try FUDforum, SYS-CON Belgium Peter B. Macintyre, "Discussion Forums Made Easy: FUDforum 2.7.1", September 2005 (PDF) issue of php|architect, pages 59–62. External links Free Internet forum software Free email software Free Usenet clients Free groupware
57070965
https://en.wikipedia.org/wiki/Quad9
Quad9
Quad9 is a global public recursive DNS resolver which aims to protect users from malware and phishing. Quad9 is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit foundation with the purpose of improving the privacy and cybersecurity of Internet users, headquartered in Zurich. It is the only global public resolver which is operated not-for-profit, in the public benefit. Quad9 is entirely subject to Swiss privacy law, and the Swiss government extends that protection of law to Quad9's users throughout the world, regardless of citizenship or country of residence. Quad9 is currently the only global recursive resolver which is not subject to United States law, as the others are each domiciled in the San Francisco Bay Area and governed by the Northern District of California US Federal Court. Security and privacy Several independent evaluations have found Quad9 to be the most effective (97%) at blocking malware and phishing domains. As of June, 2021, Quad9 was blocking more than 100 million malware infections and phishing attacks per day. Quad9's malware filtering is a user-selectable option. The domains which are filtered are not determined by Quad9, but instead supplied to Quad9 by a variety of independent threat-intelligence analysts, using different methodologies. Quad9 uses a reputation-scoring system to aggregate these sources, and removes "false positive" domains from the filter list, but does not itself add domains to the filter list. Quad9 was the first to use standards-based strong cryptography to protect the privacy of its users' DNS queries, and the first to use DNSSEC cryptographic validation to protect users from domain name hijacking. Quad9 protects users' privacy by not retaining or processing the IP address of its users, and is consequently GDPR-compliant. Locations As of August 2021, the Quad9 recursive resolver was operating from server clusters in 224 locations on six continents and 106 countries. Sony Music injunction On June 18, 2021, Quad9 was notified of a first-of-its-kind injunction by the District Court of Hamburg, in which Sony Music demanded that Quad9 block DNS resolution of a domain name used by a web site which did not contain copyright-infringing material, but contained links to other sites which did. This is the first instance in which the copyright-holder industry has sought to compel a recursive DNS operator to block access to Internet domain names, so this is a novel interpretation of German law and is thought to be a precedent-setting case with far-reaching consequences. Quad9's General Manager, John Todd, was quoted in the press as saying "Our donors support us to protect the public from cyber-threats, not to further enrich Sony," and "If this precedent holds, it will appear again in similar injunctions against other uninvolved third parties, such as anti-virus software, web browsers, operating systems and firewalls." Legal expert Thomas Rickert of eco, the German Internet association, commented "I cannot imagine a provider who is further removed from responsibility for any illegal domains than a public resolver operator." Quad9 immediately announced that it would contest the injunction and, as of June 24, announced that it had retained German counsel and would be filing an objection to the injunction. 
Clemens Rasch, the attorney leading Sony's team, has not clearly stated whether any attempts were made to contact canna.to, the site widely suspected by the press to be behind the redactions in the court documents, saying only that Sony would have done so "if they could have been identified," while confirming that the site has been operating continuously for the past twenty-two years. A court spokesperson said that "only the statements presented by the applicant side were used as a basis for the injunction" and that the court "took it on faith that the notifications which the applicant claimed to have sent were not only sent but also arrived at their recipient." At the close of the first week of the conflict, the press noted that donations to Quad9 were up by 900% relative to the prior week, and as of June 27, canna.to was still resolvable through Quad9's servers. On August 31, 2021, Quad9 filed an objection to the injunction, citing a number of flaws in the legal arguments made by Sony, but principally hinging on the fact that ISPs (which actually have a business relationship with infringing parties) are exempted from third-party liability, despite the fact that they also operate DNS recursive resolvers, and that it is a misinterpretation of the law to exclude independent recursive resolvers from that exemption.

Addresses
Quad9 operates recursive name servers for public use at the following addresses. These addresses are routed to the nearest operational server using IP anycast routing. Quad9 offers DNS over TLS over port 853, DNS over HTTPS over port 443, and DNSCrypt over port 443.

See also
Response policy zone

References

External links
Official website
Quad9 Data and Privacy Policy
Quad9 Compliance and Applicable Law
Quad9 Human Rights Considerations
Quad9 Transparency Report
Quad9 Foundation Council
Quad9 Connect now available on Google Play
Zurich Cantonal organization registration

Alternative Internet DNS services Non-profit organisations based in Switzerland
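Because Quad9 exposes standard encrypted transports (DNS over TLS on port 853 and DNS over HTTPS on port 443, as noted in the Addresses section above), any client library that speaks those protocols can query it directly. The following is a minimal sketch using the Python dnspython library over DNS over TLS; 9.9.9.9 and the authentication name dns.quad9.net are Quad9's commonly published values, but they should be checked against Quad9's own documentation rather than taken from this example.

import dns.message
import dns.query

# Build a query for the A record of example.com and send it to Quad9
# over DNS over TLS (TCP port 853); the TLS certificate is validated
# against the server_hostname. Requires dnspython 2.x.
query = dns.message.make_query("example.com", "A")
response = dns.query.tls(query, "9.9.9.9", port=853,
                         server_hostname="dns.quad9.net")

for rrset in response.answer:
    print(rrset)

On the filtered service, a domain present on the threat-intelligence blocklists described earlier would typically return no answer (NXDOMAIN) instead of an address.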
38165448
https://en.wikipedia.org/wiki/Code%20Rebel
Code Rebel
Code Rebel Corporation was an American technology company founded by Arben Kane and headquartered in Kahului, Hawaii, United States. The company developed and sold computer software and was best known for its terminal services and virtualization software principally for Apple Inc. products. Customers included Fortune 500 companies by late 2014, including AT&T, Microsoft, Cisco, IBM, Bloomberg, and the University of California. Code Rebel went public in May 2015, and in early 2016, Code Rebel announced an upcoming merger with Aegis Identity Software, Inc. Code Rebel's shares doubled in market value after the announcement, with the merger made official on March 11, 2016. The company filed for bankruptcy in May 2016. History Founding and early years (2006-2014) The software technology company Code Rebel was founded by software engineer Arben Kane in 2006, with headquarters in Kahului, Hawaii, United States. Alex Kukhar and Volodymyr Bykov, who became part of the core engineering team, also co-founded the company. Kane became CEO and chairman. The initial idea behind Code Rebel was to create a new object oriented remote access protocol that would allow the user to access a specific application and its active state. The company went on to develop, manufacture, license, support and sell computer software typically related to terminal services and virtualization software for Apple Inc. products. In particular, the company is known for its remote access software application called iRAPP, and a Mac terminal services application called iRAPP Terminal Server (iRAPP TS). As the company grew, it began catering software to companies such as Intuit, Bloomberg and Wells Fargo. Code Rebel later relocated to the United States mainland, setting up an office in New York City. In October 2010, University of Alabama’s Management Information Systems program announced a partnership with Code Rebel, LLC to create Apple iPod Touch, iPhone, and iPad applications. Kane was supervising around 50 software engineers and designers by 2014, largely in the United States and Europe. Customers included Fortune 500 companies by late 2014, including AT&T, Microsoft, Cisco, IBM, Bloomberg, Lloyds Bank, Merck, Panasonic and IKEA, as well as organizations such as the University of California, University of Texas and University of Missouri. Code Rebel markets and distributes its software products through both direct sales and a reseller program. The company had a network of 17 resellers in nine countries by 2015. Code Rebel went public in May 2015, in the first IPO for a Hawaii technology company since 2000, and is listed on the Nasdaq stock exchange. By its second day of trading, the company's stock was up over 200% from its initial offering price. Dr. James Canton joined Code Rebel as director in 2015. On July 28, 2015, Code Rebel announced it had acquired ThinOps Resources, for $9.25 million. On January 14, 2016, Code Rebel announced that it would likely merge with Aegis Identity Software, Inc., which is a private software company in Colorado that provides "on-premise [sic] and cloud-based identity and access management products and services for the K-12 and higher education markets." After the announcement, Code Rebel's shares doubled in market value. A definitive merger agreement between the two companies was signed on March 11, 2016. Aegis Identity's CEO stated that Aegis would continue to maintain its branding, with the two companies working as a joint operation. 
Technology
The company developed a remote access software application called iRAPP and a Mac terminal services application called iRAPP Terminal Server (iRAPP TS). iRAPP allows users to remotely access their Mac desktop through the iRAPP protocol, which lets the user work simultaneously on both PC and Mac; alternatively, any application compliant with RDP (Microsoft's Remote Desktop Protocol) can be used for the remote access. iRAPP TS allows the user to access multiple virtual desktops on one or multiple Mac machines concurrently, comparable to the Citrix solution for Mac. This focus on Apple solutions contrasts with most terminal services and virtualization providers such as VMware, Red Hat, Microsoft, and Citrix Systems, which have historically offered Microsoft Windows-based solutions.

Legal
In 2005, the owner of Code Rebel, Arben Kane, was associated with the controversial Cherry OS project, but under the name of Kryeziu. Critics alleged that CherryOS contained code grafted from PearPC, which would have been a violation of the GNU General Public License.

In March 2011, Code Rebel's competitor Aqua Connect, Inc. filed suit against Code Rebel claiming misappropriation of trade secrets. The Cherry OS controversy was used by Aqua Connect's lawyers in an attempt to discredit Code Rebel and Kane. After a partial dismissal, an amended complaint, and a second dismissal, the case was referred to binding arbitration by agreement of the parties. Code Rebel prevailed in the arbitration, and on August 19, 2014, the United States District Court for the Central District of California entered judgment in favor of Code Rebel on all claims, stating “Aqua Connect has failed to establish any act of reverse engineering by Code Rebel or any other illegal act.”

On May 6, 2016, the Securities and Exchange Commission suspended trading of Code Rebel stock until May 19, 2016, so that its previous filings could be examined. On May 18, 2016, it filed for bankruptcy under Chapter 7 of the United States Bankruptcy Code.

See also
Remote Desktop Services
Remote desktop software
Comparison of remote desktop software

References

External links
Microsoft: Code Rebel Customer Case Study

Software companies established in 2006 Software companies based in Hawaii Macintosh software companies 2006 establishments in Hawaii Companies formerly listed on the Nasdaq Software companies of the United States
1254837
https://en.wikipedia.org/wiki/Battle%20of%20Delville%20Wood
Battle of Delville Wood
The Battle of Delville Wood was a series of engagements in the 1916 Battle of the Somme in the First World War, between the armies of the German Empire and the British Empire. Delville Wood , was a thick tangle of trees, chiefly beech and hornbeam (the wood has been replanted with oak and birch by the South African government), with dense hazel thickets, intersected by grassy rides, to the east of Longueval. As part of a general offensive starting on 14 July, which became known as the Battle of Bazentin Ridge General Douglas Haig, Commander of the British Expeditionary Force, intended to capture the German second position between Delville Wood and Bazentin le Petit. The attack achieved this objective and was a considerable though costly success. British attacks and German counter-attacks on the wood continued for the next seven weeks, until just before the Battle of Flers–Courcelette the third British general attack in the Battle of the Somme. The 1st South African Infantry Brigade made its Western Front début as part of the 9th (Scottish) Division and captured Delville Wood on 15 July. The South Africans held the wood until 19 July, at a cost in casualties similar to those of many British brigades on 1 July. The village and wood formed a salient, which could be fired on by German artillery from three sides. The ground was a rise from Bernafay and Trônes woods, to the middle of the village, neither village or wood could be held without possession of the other. After the Battle of Bazentin Ridge, the British tried to advance on both flanks to straighten the salient at Delville Wood, to reach good jumping off positions for a general attack. The Germans tried to eliminate the salient and to retain the ground, which shielded German positions from view and overlooked British positions. For the rest of July and August, both sides fought for control of the wood and village but struggled to maintain the tempo of operations. Wet weather reduced visibility and made the movement of troops and supplies much more difficult; ammunition shortages and high casualties reduced both sides to piecemeal attacks and piecemeal defence on narrow fronts, except for a small number of bigger and wider-front attacks. Most attacks were defeated by defensive firepower and the effects of inclement weather, which frequently turned the battlefield into a mud slough. Delville Wood is well preserved with the remains of trenches, a museum and a monument to the South African Brigade at the Delville Wood South African National Memorial. Background Strategic developments In 1916, the Franco-British had absorbed the lessons of the failed breakthrough offensives of 1915 and abandoned attempts to break the German front in a sudden attack, as the increased depth of German defences had made this impossible. Attacks were to be limited, conducted over a wide front, preceded by artillery "preparation" and made by fresh troops. (nibbling) was expected to lead to the "crumbling" of German defences. The offensive was split between British and Dominion forces in the north (from Gommecourt to Maricourt) and the French in the south (from the River Somme to the village of Frey). After two weeks of battle, the German defenders were holding firm in the north and centre of the British sector, where the advance had stopped except at Ovillers and Contalmaison. There had been substantial Entente gains from the Albert–Bapaume road southwards. 
The British attacks after 1 July and the rapid French advance on the south bank, led Falkenhayn on 2 July, to order that the next day, General Fritz von Below issued an order of the day forbidding voluntary withdrawals, after his Chief of Staff, General Paul Grünert and the corps commander General Günther von Pannewitz, were sacked for ordering the XVII Corps to withdraw to the third position. Falkenhayn ordered a "strict defensive" at Verdun on 12 July and the transfer of more troops and artillery to the Somme front, which was the first strategic success of the Anglo-French offensive. By the end of July, finding reserves for the German defence of the Somme caused serious difficulties for Falkenhayn, who ordered an attack at Verdun intended to pin down French troops. The Brusilov Offensive continued and the German eastern armies had to take over more of the front from the Austro-Hungarians when Brody fell on 28 July, to cover Lemberg. Russian attacks were imminent along the Stochod river and the Austro-Hungarian armies were in a state of disarray. Conrad von Hötzendorf, the Austro-Hungarian Chief of the General Staff, was reluctant to take troops from the Italian front, when the Italian army was preparing the Sixth Battle of the Isonzo, which began on 6 August. Tactical developments On 19 July, the German 2nd Army was split and a new 1st Army was established, to command the German divisions north of the Somme. The 2nd Army kept the south bank, under General Max von Gallwitz, transferred from Verdun, who was also made commander of with authority over Below and the 1st Army. Lossberg remained as the 1st Army Chief of Staff and Bronsart von Schellendorff took over for the 2nd Army. Schellendorff advocated a counter-offensive on the south bank, which was rejected by Falkenhayn, because forces released from the Verdun front were insufficient, five divisions having been sent to the Russian Front in July. On 21 July Falkenhayn ruled that no more divisions could be removed from quiet fronts for the Somme until exhausted divisions relieved them. He needed seven "fought out" divisions to replace those already sent to the Somme. Gallwitz began to reorganise the artillery and curtailed harassing and retaliatory fire to conserve ammunition for defensive fire during Anglo-French attacks. From gun and howitzer batteries arrived on the Somme, along with five reconnaissance flights, three artillery flights, three bombing flights and two fighter squadrons. Since 1 July, thirteen fresh divisions had arrived on the north bank of the Somme and three more were ready to join the defence. The strain on the Germans on the Somme worsened in August; unit histories make frequent reference to high losses and companies being reduced to eighty men before relief. Many German divisions came out of a period on the Somme front having suffered at least and some German commanders suggested a change to the policy of unyielding defence. The front line was lightly held, with reserves further back in a defensive zone but this had little effect on the losses caused by the Anglo-French artillery. Movement behind the German front was so dangerous that regiments carried rations and water for a four- to five-day tour with them. Work on new rear lines was constant, despite shortages of materials and rail lines being overloaded with troop trains. Supply trains were delayed and stations near the front were under bombardment by artillery and aircraft. 
Light railways were insufficient and lorries and carts were pressed into use, using roads which, while paved, needed constant maintenance, which was difficult to ensure with the troops available. The German artillery suffered many losses and the number of damaged guns exceeded the repair capacity of workshops behind the front. Inferior ammunition exploded prematurely, bursting gun barrels. Destruction, wear and tear from 26 June to 28 August, led to the guns and the guns in the being lost. The Anglo-French maintained air superiority but German air reinforcements began to arrive by mid-July. More artillery was sent to the Somme but until the reorganisation and centralisation of artillery control had been completed, counter-battery fire, barrage-fire and co-operation with aircraft remained inadequate. Gallwitz considered plans for the relief attack but lack of troops and ammunition made it impractical, particularly after 15 July, when Falkenhayn withheld more fresh divisions and the 1st Army had to rely on the 2nd Army for reinforcements. In early August, an attempt was made to use men over 38 years old, who proved a danger to themselves and were withdrawn. Prelude British offensive preparations British attacks south of the road between Albert and Bapaume began on 2 July, despite congested supply routes to the French XX Corps and the British XIII, XV and III Corps. La Boisselle near the road was captured on 4 July, Bernafay and Caterpillar woods were occupied from and then fighting to capture Trônes Wood, Mametz Wood and Contalmaison took place until early on 14 July. As German reinforcements reached the Somme front, they were thrown into battle piecemeal and had many casualties. Both sides were reduced to improvised operations; troops unfamiliar with the ground had little time for reconnaissance, whose artillery was poorly co-ordinated with the infantry and sometimes fired on ground occupied by friendly troops. British attacks in this period have been criticised as uncoordinated, tactically crude and wasteful of manpower, which gave the Germans an opportunity to concentrate their inferior resources on narrow fronts. The Battle of Bazentin Ridge (14–17 July) was planned as a joint attack by XV and XIII Corps, whose troops would assemble in no man's land in darkness and attack at dawn after a five-minute hurricane bombardment. Haig was sceptical of the plan but eventually accepted the views of Rawlinson and the corps commanders, Lieutenant-General Henry Horne and Lieutenant-General Walter Congreve. Preparatory artillery bombardments began on 11 July and on the night of British troops advanced stealthily across no man's land, which in parts was wide, to within of the German front line and then crept forward. At the hurricane bombardment began and the British began to run forward. On the right flank, the 18th (Eastern) Division (Major-General Ivor Maxse), captured Trônes Wood in a subsidiary operation and the 9th (Scottish) Division (Major-General William Furse) was repulsed from Waterlot Farm but on the left got into Delville Wood. The 21st, 7th and 3rd Division on the left (northern) flank, took most of their objectives. By mid-morning of the German second position had been captured, cavalry had been sent forward and the German defenders thrown into chaos. Longueval and Delville Wood The village of Longueval enclosed a cross-roads which ran south-west to Montauban, west to the two Bazentins, north to Flers and east to Ginchy. 
South African forces used the English place names in Longueval and Delville Wood, as they were more meaningful than French terms. Pall Mall led north from Montauban and Bernafay Wood, to the cross-roads on the southern fringe of the village, where Sloan Street branched to the west, to a junction with Clarges Street and Pont Street. Dover Street led to the south-east and met a track running north from Trônes Wood. Two roads converged on Pall Mall at the main square; North Road ran between Flers and High Wood, with a path to the west meeting Pont Street, which ran into High Wood and the second road ran south-east to Guillemont. Clarges Street ran west from the village square to Bazentin le Grand and Prince's Street ran east through the middle of Delville Wood. Parallel to Clarges Street, about further north, ran Duke Street, both bounded on the west by Pont Street and by Piccadilly on the east side. Orchards lay between Piccadilly and North Street, beyond which Flers Road forked to the right, skirting the north-west edge of Delville Wood. The wood lay north of the D20 road, west of Ginchy and the north-west edge was adjacent to the D 197 Flers road. Delville Wood was bounded on the southern edge by South Street, which was linked to Prince's Street by Buchanan Street to the west, Campbell Street in the centre and King Street to the east, three parallel rides which faced north. Running east from Buchanan Street and parallel to Prince's Street was Rotten Row. On the north side of Prince's Street ran Strand, Regent Street and Bond Street, three rides to the northern fringe of the wood. British plans General Sir Henry Rawlinson, commander of the Fourth Army ordered Congreve to use XIII Corps to capture Longueval, while the XV Corps (Lieutenant-General Horne) was to cover the left flank. Rawlinson wanted to advance across no man's land at night for a dawn attack after a hurricane bombardment to gain surprise. Haig opposed the plan because of doubts about inexperienced New Army divisions assembling on the battlefield at night but eventually deferred to Rawlinson and the corps commanders, after modifications to their plan. An advance to Longueval could not begin until Trônes Wood was in British possession as it dominated the approach from the south. The capture of Longueval would then require the occupation of Delville Wood on the north-eastern edge of the town. If Delville Wood was not captured German artillery observers could overlook the village and German infantry would have an ideal jumping-off point for attacks on Longueval. A British advance would deepen the salient already formed to the north-east of Montauban but also assist British attacks to the south on Ginchy and Guillemont and on High Wood to the north-west. The 9th (Scottish) Division was to attack Longueval and the 18th (Eastern) Division on the right was to occupy Trônes Wood. Furse, ordered that the Longueval attack be led by the 26th Brigade. The 8th (Service) Battalion, Black Watch and the 10th (Service) Battalion, Argyll and Sutherland Highlanders would lead, with the 9th (Service) Battalion, Seaforth Highlanders in support and the 5th (Service) Battalion, Queen's Own Cameron Highlanders in reserve. The 27th Brigade would follow on, to mop up any bypassed German troops and reinforce the leading battalions, once they had entered the village. When Longueval had been secured, the 27th Brigade was to pass through the 26th Brigade to take Delville Wood. The 1st South African Brigade was to be kept in reserve. 
German preparations Despite considerable debate among German staff officers, Falkenhayn based defensive tactics in 1916 on unyielding defence and prompt counter-attacks, when ground had been lost. On the Somme front Falkenhayn's construction plan of January 1915 had been completed by early 1916. Barbed wire obstacles had been enlarged from one belt wide to two, wide and about apart. Double and triple thickness wire was used and laid high. The front line had been increased from one line to three, apart, the first trench occupied by sentry groups, the second () for the front-trench garrison and the third trench for local reserves. The trenches were traversed and had sentry-posts in concrete recesses built into the parapet. Dugouts had been deepened from to , apart and large enough for . An intermediate line of strongpoints () about behind the front line was also built. Communication trenches ran back to the reserve line, renamed the second line, which was as well-built and wired as the first line. The second line was beyond the range of Allied field artillery, to force an attacker to stop and move field artillery forward before assaulting the line. The German front line lay along the old third position, which in this area ran from the southern edge of Bazentin le Grand to the south fringe of Longueval and then curved south-east past Waterlot Farm and Guillemont. An Intermediate Line ran roughly parallel behind Delville Wood on a reverse slope, the wood being on a slight ridge which extended east from the village. Longueval had been fortified with trenches, tunnels, concrete bunkers and had two field guns. The village was garrisoned by the divisions of IV Corps (General Sixt von Armin) and the 3rd Guard Division. The north and north-west was held by Thuringian Infantry Regiment 72 of the 8th Division. In and around Delville Wood, an area of about , which abutted the east side of Longueval and extended to within of Ginchy, were Infantry Regiment 26 of the 7th Division, Thuringian Infantry Regiment 153 and Infantry Regiment 107. A British attack would have to advance uphill from Bernafay and Trônes woods, across terrain with a similar shape to a funnel, broad in the south and narrowing towards Longueval in the north. Armin suspected that an attack would begin on Battle French Tenth and Sixth armies The French Sixth Army was pushed back from Biaches south of the Somme by a German counter-attack on 14 July, which was retaken along with Bois Blaise and La Maisonette. (Military units after the first mentioned are French unless specified.) On 20 July, I Corps attacked at Barleux, where the 16th Division took the German front trench and was then stopped short of the second objective by massed German machine-gun fire, before being counter-attacked and pushed back to the start line with Refusals of orders occurred in the 2nd Colonial Division, which led to two soldiers being court-martialled and shot. Joffre issued orders that the possibility of a rapid end to the war was to be played down. XXXV Corps had been moved from the north bank and reinforced by two divisions and attacked Soyécourt, Vermandovillers and high ground beyond, as a prelude to attacks by the Tenth Army from Chilly, northwards to the Sixth Army boundary. XXXV Corps captured the north end of Soyécourt and Bois Étoile but then bogged down against flanking machine-gun fire and counter-attacks. For the remainder of July and August, the German defence on the south bank contained the French advance. 
On the north side of the Somme, the Sixth Army advanced by methodical attacks against points of tactical value, to capture the German second position from Cléry to Maurepas. VII Corps was brought in on the right of XX Corps, for the attack on the German second position, which was on the opposite side of a steep ravine, behind an intermediate line and strong points in the valley. On 20 July, XX Corps attacked with the 47th and 153rd divisions; the 47th Division attack on the right was stopped by machine-gun fire in front of Monacu Farm, as the left flank advanced and took Bois Sommet, Bois de l'Observatoire and the west end of Bois de la Pépinière. The 153rd Division captured its objectives, despite the British 35th Division further north being driven back from Maltz Horn Farm. 1st South African Brigade 14–16 July The divisions of XIII Corps and XV Corps attacked on 14 July, just before dawn at on a front. The infantry moved forward over no man's land to within of the German front line and attacked after a five-minute hurricane bombardment, which gained a measure of tactical surprise. Penetrating the German second line by a sudden blow on a limited front was relatively easy but consolidating and extending the breach against alerted defenders was far more difficult. The attack on Longueval met with initial success, the thin German outpost line being rapidly overwhelmed. By mid-morning, the British troops had reached the village square, by fighting house-to-house. The effect of British artillery-fire diminished because the north end of the village was out of view on a slight north-facing slope; German reinforcements reached the village; artillery and machine–gun fire from Delville Wood and Longueval, raked the 26th Brigade. By the afternoon, the western and south-western parts of the village had been occupied. The 27th Brigade, intended for the attack on Delville Wood, had been used to reinforce the attack. At Furse ordered the 1st South African Brigade to take over the attack on the wood. Three battalions of the 1st South African Brigade were to attack Delville Wood, while the 1st Battalion continued as a reinforcement of the 26th and 27th brigades in Longueval. The attack at was postponed to and then to on 15 July, due to the slow progress in the village. Brigadier-General Henry Lukin was ordered to take the wood at all costs and that his advance was to proceed, even if the 26th and 27th Brigades had not captured the north end of the village. Lukin ordered an attack from the south-west corner of the wood on a battalion front, with the 2nd Battalion forward, the 3rd Battalion in support and the 4th Battalion in reserve. The three battalions moved forward from Montauban before first light, under command of Lieutenant–Colonel W. E. C. Tanner of the 2nd Battalion. On the approach, Tanner received instructions to detach two companies to the 26th Brigade in Longueval and sent B and C companies of the 4th Battalion. The 2nd Battalion reached a trench occupied by the 5th Camerons, which ran parallel to the wood and used this as a jumping-off line for the attack at The attack met little resistance and by the South Africans had captured the wood south of Prince's Street. Tanner sent two companies to secure the northern perimeter of the wood. Later during the morning, the 3rd Battalion advanced towards the east and north-east of the wood and by Tanner reported to Lukin that he had secured the wood except for a strong German position in the north-western corner adjoining Longueval. 
The South African Brigade began to dig in around the fringe of the wood, in groups forming strong–points supported by machine–guns. The brigade occupied a salient, in contact with the 26th Brigade only along the south-western edge of the wood adjoining Longueval. The troops carried spades but digging through roots and remnants of tree trunks, made it impossible to dig proper trenches and only shallow shell scrapes could be prepared before German troops began to counter-attack the wood. A battalion of the 24th Reserve Division counter-attacked from the south-east at having been given five minutes' notice but only managed to advance to within of the wood before being forced to dig in. An attack by a second battalion from the Ginchy–Flers road was also repulsed, the battalions losing In the early afternoon a battalion of the 8th Division attacked the north-eastern face of the wood and was also repulsed, after losing all its officers. At on 15 July Bavarian Reserve Infantry Regiment 6 of the 10th Bavarian Division attacked in force from the east but was partially driven back by rifle and machine-gun fire. At Tanner reported to Lukin that German forces were massing to the north of the wood and he called for reinforcements, as the South Africans had already lost a company from the 2nd (Natal and Free State) Battalion. Tanner had received one company from the 4th (Scottish) Battalion from Longueval and Lukin sent a second company forward to reinforce the 3rd (Transvaal & Rhodesia) Battalion. Lukin sent messages urging Tanner and the battalion commanders to dig in regardless of fatigue, as heavy artillery fire was expected during the night or early the next morning. As night fell German high explosive and gas shelling increased in intensity and a German counter-attack began at midnight with orders to recapture the wood at all costs. The attack was made by three battalions from the 8th and 12th Reserve divisions and managed to reach within , before being driven under cover by artillery and machine-gun fire. Later that night, fire into Delville Wood, from four German brigades, reached a rate of per minute. On 14–15 July the 18th Division had cleared Trônes Wood to the south and had established a line up to Maltz Horn Farm, adjacent to the French 153rd Division. At Lukin was ordered to capture the north-west part of Delville Wood at all costs and then to advance westwards to meet the 27th Brigade, as it attacked north and north–eastwards through Longueval. The advance began on 16 July at but the casualties of the South Africans had reduced the weight of the attack, which was repulsed by the German defenders. The 27th Brigade advance was pinned down in the village by machine-gun fire from an orchard in the north end of Longueval. Survivors of the attack fell back to their trenches in the middle of the wood and were under bombardment for the rest of the day. The situation became desperate and was made worse by an attack by Thuringian Infantry Regiment 153. 17–19 July In the evening of 16 July, the South Africans withdrew south of Prince's Street and east of Strand Street, for a bombardment on the north-west corner of the wood and the north end of Longueval. On 17 July, the 27th Brigade attacked northwards in Longueval and the 2nd South African Battalion plus two companies of the 1st Battalion, attacked westwards in the wood. The South African attack was a costly failure and the survivors were driven back to their original positions, which came under increased German artillery-fire in the afternoon. 
In the evening Tanner was wounded and replaced by Lieutenant-Colonel E. F. Thackeray, of the 3rd Battalion, as commander in Delville Wood. The 9th Division drew in its left flank and the 3rd Division (Major-General J. A. L. Haldane), was ordered to attack Longueval from the west during the night. Huge numbers of shells were fired into the wood and Lukin ordered the men into the north-western sector, to support the attack on Longueval due at During the night, the German 3rd Guards Division advanced behind a creeping barrage of guns and over guns. The Germans reached Buchanan and Princes streets, driving the South Africans back from their forward trenches, with many casualties. The Germans spotted the forming up of the troops in the wood and fired an unprecedented bombardment; every part of the area was searched and smothered by shells. During the barrage, German troops attacked and infiltrated the South African left flank, from the north-west corner of the wood. By the South African position had become desperate as German attacks were received from the north, north-west and east, after the failure of a second attempt to clear the north-western corner. At news was received that the South Africans were to be relieved by the 26th Brigade. The 3rd Division attack on Longueval had taken part of the north end of the village and Armin ordered an attack by the fresh 8th Division, against the Buchanan Street line from the south east, forcing Thackeray to cling to the south western corner of the wood for two days and nights, the last link to the remainder of the 9th Division (Map 4). On the morning of 18 July, the South Africans received support from the relatively fresh 76th Brigade of the 3rd Division, which attacked through Longueval into the south-western part of the wood, to join up with A Company of the 2nd South African Battalion, until the 76th Brigade was forced back by German artillery-fire. In the south, the South Africans recovered some ground because the Germans had made limited withdrawals ready for counter-attacks in other areas. A German bombardment during the night became intense at sunrise and per minute fell into Longueval and the wood, along with heavy rain, which filled shell-craters. At German infantry attacked Longueval and the wood from the east, north and north-east. Reserve Infantry Regiment 107 attacked westwards along the Ginchy–Longueval road, towards the 3rd South African Regiment, which was dug in along the eastern fringe of the wood, which commanded Ginchy. The German infantry were cut down by small-arms fire as soon as they advanced and no more attempts were made to advance beyond the intermediate line. The main German attack was made by the 8th Division and part of the 5th Division from the north and north-east. Elements of nine battalions attacked with Infantry Regiment 153 was to advance from south of Flers, to recapture Delville Wood and reach the second position along the southern edge of the wood, the leading battalion to occupy the original second line from the Longueval–Guillemont road to Waterlot Farm, the second battalion to dig in along the southern edge of the wood and the third battalion to occupy Prince's Street along the centre of the wood. At first the advance moved along the sunken Flers road, north of the wood, which was confronted by the 2nd South African Regiment along the north edge of the wood. By afternoon, the north perimeter had been pushed further south by German attacks. 
Hand-to-hand fighting occurred all over the wood, as the South Africans could no longer hold a consolidated and continuous line, many of them being split into small groups without mutual support. By the afternoon of 18 July, the fresh Branderberger Regiment had also engaged. A German officer wrote and by 19 July, the South African survivors were being shelled and sniped from extremely close range. In the early morning, Reserve Infantry Regiment 153 and two companies of Infantry Regiment 52, entered the wood from the north and wheeled to attack the 3rd South African Battalion from behind, capturing six officers and from the Transvaal Battalion; the rest were killed. By mid morning, Black Watch, Seaforth and Cameron Highlanders in Longueval tried to charge into the wood but were repulsed by German small-arms fire from the north-west corner of the wood. The brigade was short of water, without food and unable to evacuate wounded; many isolated groups surrendered, after they ran out of ammunition. In the afternoon, the 53rd Brigade advanced from the base of the salient to reach Thackeray at the South African headquarters but was unable to reach the forward elements of the South African brigade. This situation prevailed through the night of 20 July On 20 July, the 76th Brigade of the 3rd Division was again pushed forward to attempt to relieve the 1st South African Brigade. The Royal Welsh Fusiliers attacked towards the South Africans but by Thackeray had informed Lukin that his men were exhausted, desperate for water and could not repel a further attack. Troops of the Suffolk Regiment and the 6th Royal Berkshires broke through and joined with the last remaining South African troops, in the segment of the wood still under South African control. Thackeray marched out of the wood, leading two wounded officers and ranks, the last remnant of the South African Brigade. Piper Sandy Grieve of the Black Watch, who had fought against the South African Boers as part of the Highland Brigade, in the Battle of Magersfontein in 1899 and been wounded through the cheeks, played the South Africans out. The survivors spent the night at Talus Boise and next day withdrew to Happy Valley south of Longueval. 21 July – 20 August A British bombardment preparatory to the offensive planned for the night of began at on 22 July. The 3rd Division attacked Delville Wood and the north end of Longueval, from the west with the 9th Brigade from Pont Street, as the 95th Brigade of the 5th Division attacked German strong points in the orchards to the north. The two battalions of the 3rd Division had only recently arrived and had received their orders at the last minute. The bombardment was considered poor but the attack began at and the troops were quickly engaged by German machine-guns from the front and left flank. The advance covered a considerable distance but was forced back to Piccadilly and then to Pont Street, where the survivors were bombarded by German artillery. The two 95th Brigade battalions also had early success and threatened the German right flank. The Flers road was crossed and a strong point captured and consolidated but then a German counter-attack pushed both battalions back to Pont Street; a second attack was planned and then cancelled. 
Relief of the 3rd Division began on the night of 25 July by the 2nd Division, ready for another attack on most of Delville Wood, when the west end of Longueval and the rest of the wood were attacked by the 5th Division, in a larger operation by XIII Corps and XV Corps due on 27 July. German artillery fired on the routes into Longueval and sent alarm signals aloft from the front-line several times each day. On 27 July, every British gun in range, fired on the wood and village from as infantry patrols went forward through a German counter-bombardment, to study the effect of the British fire. The patrols found "a horrible scene of chaos and destruction". When the bombardment began, about sixty German soldiers surrendered to the 2nd Division and at zero hour, two battalions of the 99th Brigade advanced, with trench-mortar and machine-gun sections in support. The infantry found a shambles of shell-craters, shattered trees and débris. After a ten-minute advance the troops reached a trench along Prince's Street, full of dead and wounded German infantry, taking several prisoners. The advance was continued when the barrage lifted by the supporting companies, which moved to the final objective about inside the northern fringe of the wood around A third battalion moved forward to mop up and guard the flanks but avoided the east end of the wood. As consolidation began, German artillery fired along Prince's Street and caused far more casualties than those suffered during the attack. On the left flank, the 15th Brigade of the 5th Division, attacked with one battalion forward and one in support. German artillery-fire before zero hour was so extensive, that most of a company of the forward battalion was buried and the Stokes mortars knocked out. The support battalion was pushed forward and both advanced on time into the west end of the wood, where they linked with the 99th Brigade. The attack on Longueval was hampered by the German barrage to the south, which cut communications and by several machine-guns firing from the village. An attempt by the Germans to reinforce the garrison from Flers failed, when British artillery-fire fell between the villages but the German infantry held out at the north end of Longueval. A British line was eventually established from the north-west of Delville Wood, south-west into the village, below the orchards at Duke Street and Piccadilly. A German counter-attack began at from the east end of Delville Wood against the 99th Brigade. The German attack eventually penetrated behind Prince's Street and pushed the British line back to face north-east. Communications with the rear were cut several times and when the Brigade commander contradicted a rumour that the wood had been lost, the 2nd Division headquarters assumed that the wood was empty of Germans. Skirmishing continued and during the night, two battalions of the 6th Brigade took over from the 99th Brigade. The 15th Brigade was relieved by the 95th Brigade that night and next morning Duke Street was occupied unopposed. On 29 July, the XV Corps artillery fired a bombardment for thirty minutes and at a battalion advanced on the left flank, to a line north of Duke Street; a battalion on the right managed a small advance. On 30 July, subsidiary attacks were made at Delville Wood and Longueval, in support of a bigger attack to the south by XIII Corps and XX Corps. The 5th Division attacked with the 13th Brigade, to capture German strong points north of the village and the south-eastern end of Wood Lane. 
A preliminary bombardment began at but failed to suppress the German artillery, which fired on the village and the wood. British communications were cut again, as two battalions advanced at the right-hand battalion was caught by German artillery-fire, at the north-west fringe of the wood but a company pushed on and dug in beyond. The left-hand battalion crawled forward under the British barrage but as soon as it attacked, massed German small-arms fire forced the troops under cover in shell-holes. A battalion on the right with only was so badly shelled, that a battalion was sent forward and a reserve battalion of the 15th Brigade was also sent forward. Attempts were made to reorganise the line in Longueval, where many units were mixed up; German artillery-fire was continuous and after dark the 15th Brigade took over. After representations by Major-General Reginald Stephens it was agreed that the 5th Division would be relieved during 1 August. A lull occurred in early August, as the 17th Division took over from the 5th Division; the 52nd Brigade was ordered to attack Orchard Trench, which ran from Wood Lane to North Street and the Flers Road into Delville Wood. A slow bombardment by heavy artillery and then a five-minute hurricane bombardment was followed by the attack at on 4 August. Both battalions were stopped by German artillery and machine-gun fire; communications were cut and news of the costly failure was not reported until The 17th Division took over from the 2nd Division on the right and attacked again on 7 August, after a methodical bombardment, assisted by a special reconnaissance and photographic sortie by the RFC. The 51st Brigade attacked at to establish posts beyond the wood but the British were stopped by German artillery-fire while still inside. After midnight, a fresh battalion managed to establish posts north of Longueval. German defensive positions in the area appeared much improved and the 17th Division was restricted to obtaining vantage points, before it was relieved by the 14th Division on 12 August. XV Corps attacked again on 18 August; in Delville Wood, the 43rd Brigade of the 14th Division, attacked the north end of ZZ Trench, Beer Trench up to Ale Alley, Edge Trench and a sap along Prince's Street, which had been found on reconnaissance photographs. The right-hand battalion advanced close behind a creeping barrage at reached the objective with few losses where the defenders surrendered. The south of Beer Trench was obliterated but the left-hand battalion was swept by artillery and machine-gun fire before the advance and reduced to remnants. The battalion took Edge Trench and bombed along Prince's Street, when German supports bombed down Edge Trench and retook it. In hand-to-hand fighting, the British held on to Hop Alley and blocked Beer Trench; two German attacks from Pint Trench were stopped by small-arms fire. During the British attack, the German line from Prince's Street to the Flers road was bombarded by trench mortars. On the left, two battalions of the 41st Brigade attacked Orchard Trench and the south end of Wood Lane; keeping touch with an attack by the 33rd Division on High Wood. The battalion on the right advanced close up to the creeping barrage, found Orchard Trench nearly empty and dug in beyond, with the right flank on the Flers road. The left-hand battalion was enfiladed from the left flank, after the 98th Brigade of the 33rd Division was repulsed but took part of Wood Lane. 
21 August – 3 September On 21 August, a battalion of the 41st Brigade attacked the German defences in the wood, obscured by smoke discharges on the flanks but the German defenders inflicted nearly by small-arms fire. At midnight, an attack by the 100th Brigade of the 33rd Division, from the Flers road to Wood Lane began but the right-hand battalion was informed too late and the left-hand battalion attacked alone and was repulsed. In a combined attack with the French from the Somme north to the XIV Corps and III Corps areas, XV Corps attacked to complete the capture of Delville Wood and consolidate from Beer Trench to Hop Alley and Wood Lane. The 14th Division operation was conducted by a battalion of the 41st Brigade and three from the 42nd Brigade. The right hand battalion was repulsed at Ale Alley but the other battalions, behind a creeping barrage moving in lifts of only , advanced through the wood until their right flank was exposed, which prevented most of Beer Trench from being occupied. On the left flank, the westernmost battalion dug in on the final objective and gained touch with the 33rd Division on the Flers road. The new line ran south, from the right of the battalion near the Flers road, into the wood and then south-east along the edge to Prince's Street. Flares were lit for contact-aeroplanes, which were able to report the new line promptly. Over and more than twelve machine-guns were captured. Early next day, a battalion of the 42nd Brigade captured Edge Trench, to a point close to the junction with Ale Alley. Amidst rain delays, the 7th Division relieved the right-hand brigade of the 14th Division on the night of and later a battalion of the 43rd Brigade made a surprise attack, took the rest of Edge Trench and barricaded Ale Alley, taking about from Infantry Regiment 118 of the 56th Division, which eliminated the last German foothold in Delville Wood. An attack on the evening of 28 August, by a battalion on the right flank and a battalion of the 7th Division to the right, from the east end of the wood, against Ale Alley to the junction with Beer Trench failed. The 14th and 33rd divisions were relieved by the 24th Division by the morning of 31 August, after the 42nd Brigade had built posts along the wreckage of Beer Trench as far as the south-east of Cocoa Lane and dug a sap from the end of Prince's Street. The last week of August had been very wet, which made patrolling even more difficult but XV Corps detected the arrival of German reinforcements. The activity of the German artillery around Delville Wood suggested another counter-attack was imminent, as the 24th Division took over the defence of the wood and Longueval. German aircraft flew low over the British front positions and then a much more intense bombardment began. The German attack began at and the 7th Division on the right of the corps, was attacked along Ale Alley and Hop Alley and replied with rapid fire. The German infantry were repulsed but a second attack at was only held after hand-to-hand fighting just east of the wood. More German aircraft reconnoitred the area and German artillery-fire greatly increased around followed by a third attack at which pushed the British back into the wood, except on the left at Edge Trench. On the right flank, the Ginchy–Longueval road was held against the German attacks and some reinforcements arrived after dark, at the east end of the wood. 
At the north-east side, the right-hand battalion of the 72nd Brigade had moved forward, to dig in beyond the German bombardment and was not attacked; the left-hand battalion withdrew its right flank to Inner Trench to evade the bombardment and a strong point on Cocoa Lane was captured. The left flank of the battalion and the neighbouring right-hand battalion of the 73rd Brigade, were attacked at and repulsed the German infantry and with small-arms and artillery-fire. To the west, the left-hand battalion was caught in the German bombardment and lost nearly German infantry advanced from Wood Lane and bombed along Tea Trench almost as far as North Street. Other German troops attacked south-east into Orchard Trench, before British reinforcements arrived and contained the German advance, with the help of flanking fire from the 1st Division beyond the III Corps boundary. It was not until long after dark, that the extent of the German success was communicated to the XV Corps headquarters, where plans were made to recapture the ground next day. A battalion from the 73rd Brigade of the 24th Division, counter-attacked at dawn by bombing along Orchard Trench but was repulsed by the German defenders. More bombers attacked around Pear Street at but were also repulsed. A costly frontal attack by a battalion of the 17th Brigade at overran Orchard Trench and Wood Lane up to Tea Trench. On the east side of the wood, two platoons from the 91st Brigade attacked at but were forced back by small arms fire and at a battalion of the 24th Division managed to bomb a short way down Edge Trench, which was almost invisible after the recent bombardments. On 1 September, the battalion attacked again but made little progress against German bombers and snipers. The 7th Division was due to attack Ginchy on 3 September but the Germans in Ale Alley, Hop Alley and the east end of Delville Wood commanded the ground over which the attack was to cross. A preliminary attack was arranged with the 24th Division, to begin five minutes before the main attack to recapture the ground. The 7th Division bombers used "fumite" grenades but these were too easy to see and alerted the German defenders and the 24th Division battalion received such contradictory orders that its attack north of Ale Alley failed. Attacks on 4 September by two companies at the east end of the wood also failed and next day two companies managed to reach the edge of the wood close to Hop Alley and dig in. On the night of 5 September, the 24th Division was relieved by the 55th Division and the 166th Brigade dug in beyond the north-east fringe of the wood unopposed. Air operations The first attack on Longueval and Delville Wood from was conducted under the observation of 9 Squadron, which directed counter-battery artillery-fire, photographed the area and flew contact-patrols to report the positions of infantry. During the morning a patrol of F.E. 2bs from 22 Squadron escorted the corps aircraft but no German aeroplanes were seen. The British aircraft carried new "Buckingham" tracer ammunition, which made aiming easier and began to attack German targets on the ground. The crews machine-gunned German infantry near Flers, cavalry sheltering under trees and other parties of German troops to the south-west. During the XV Corps attacks on 24 August, 3 Squadron aircraft brought back detailed information about the progress of the infantry, who had lit many red flares when called on by contact-aircraft. 
Fourteen flares were seen by an observer at to the north of the wood, which showed that the troops had overrun their objective and were under shrapnel fire from British artillery. The information was taken back and dropped by message-bag, which got the barrage lifted by . The crew returned to the wood, completed the contact-patrol and reported to the XV Corps headquarters by showing that the 14th Division was held up on the east side of the wood. A further attack the following morning captured the area, which was closely observed by British aircraft. German 2nd Army 14–19 July According to the German official history, Der Weltkrieg and regimental accounts, some units were not surprised. The British attack succeeded at a few points, from which the troops worked sideways to roll up the German defenders, a tactic not used on 1 July. Bavarian Infantry Regiment 16 lost and the headquarters of Infantry Regiment , Bavarian Infantry Regiment 16, I Battalion, Reserve Infantry Regiment 91 and II Battalion, Bavarian Infantry Regiment 16 were captured. Armin, who had taken over from Longueval to the Ancre that morning, ordered troops to hold their positions. The 7th Division had been relieving the 183rd Division and part was sent to Longueval and the second line further back, along with resting units from the 185th, 17th Reserve, 26th Reserve, 3rd Guard divisions and troops of the 55th Landwehr Regiment (7th Landwehr Division), equivalent to fourteen battalions. After alarmist reports of British cavalry in High Wood and the fall of Flers and Martinpuich, Below ordered the 5th, 8th, 8th Bavarian Reserve and 24th Reserve divisions to counter-attack to stop the British advance. When the true situation was discovered, the counter-stroke was cancelled and the 5th and 8th divisions returned to reserve. On 15 July, II Battalion, Reserve Infantry Regiment 107 of the 24th Reserve Division attacked from the south-east of Delville Wood at about but was stopped by small-arms and artillery-fire short of the wood and driven under cover. An attack by the III Battalion from the Flers–Ginchy road soon after, was also stopped short and the battalions lost I Battalion, Infantry Regiment 72 from the 8th Division attacked the north-eastern face of the wood and was also repulsed. Armin ordered another attack after dark by the 8th Division and 12th Reserve Division, to take back the wood at all costs. The preparations were rushed and no postponement was allowed; a bombardment began at before the advance began around midnight by I and II battalions of Infantry Regiment 153 from the 8th Division and II Battalion, Reserve Infantry Regiment 107 of the 12th Reserve Division, on the east, north-east and northern faces of the wood, which also failed against artillery and machine-gun fire, short of the wood, after which German artillery bombarded the wood all night. Further attempts to regain the wood on 16 July were also costly failures. The 8th Division planned to recapture Delville Wood on 18 July and the most advanced troops were withdrawn late on 17 July, for a bombardment which began at using the heavy guns of groups and , the field artillery of the 8th Division and three batteries of the 12th Reserve Division, about guns and guns, heavy guns and howitzers. The German bombardment turned Delville Wood into an "inferno", before slackening at around during a British attack. 
After German troops from I Battalion, Reserve Infantry Regiment 104 and II and III battalions, Reserve Infantry Regiment 107, attacked the wood in several waves from the north-east, as eight companies of Infantry Regiment 153 of the 8th Division attacked from the north, to reach a line from Longueval to Waterlot Farm road. Another attack from the north and north-west by five companies of Infantry Regiment 26 of the 7th Division, reached the southern edge of the village. The attacks were not co-ordinated but were led by companies of (Storm Troops) and (Flamethrower) detachments, which fell into confusion in the wood; after dark parts of II Battalion, Infantry Regiment 52 of the 5th Division reinforced the troops in the wood and the village. On 19 July, the Germans in the wood endured massed British artillery-fire; Infantry Regiment 52 and part of Grenadier Regiment 12 were sent into the wood and the village, where Infantry Regiment 26 had appealed for relief before it collapsed. A British attack early on 20 July reached the village, where two companies were overwhelmed and taken. By 20 July, Infantry Regiment 26, which had been at full strength on 13 July, was reduced to and with Infantry Regiment 153, was relieved by Grenadier Regiment 12, which held Delville Wood and Longueval with Infantry Regiment 52, under the command of the 5th Division. German 1st Army 20 July – 3 September The German defence of the Somme was reorganised in July and the troops of the Second Army north of the Somme were transferred to the command of a re-established 1st Army under the command of Below, overseen by General von Gallwitz the new commander of the 2nd Army and . During a British attack on 23 July the 5th Division had to engage nearly all of its troops to resist the attack, which threw the defence into confusion. In anticipation of more British attacks a box-barrage was fired around the village and wood. Grenadier Regiment 8 suffered the loss taken prisoner in a British attack on 27 July, one prisoner calling it the worst shelling he had endured. At German troops were seen massing for a counter-attack and managed to advance through a British protective artillery barrage, to engage the British infantry in a bombing fight. The German attack took part of the east end of the wood but the exhaustion of the 5th Division, which had been reduced to a "pitiable state", required reinforcement by three battalions, mainly from the 12th Division, from On 30 July, British artillery-fire caused many casualties and the right flank of the 5th Division was hurriedly reinforced by I Battalion, Reserve Infantry Regiment 163 of the 17th Reserve Division, sent from Ytres, which was spotted by British aircrews at Beaulencourt and shelled. During the night II Battalion, Infantry Regiment 23 of the 12th Division was relieved by the I Battalion. On 4 August, a British attack began as the Fusilier Battalion of Grenadier Regiment 12 was being relieved by I Battalion, Infantry Regiment 121 of the 26th Division, which was taking over from the 5th Division, Grenadier Regiment 119 coming into line to the east. A general relief of the German troops on the Somme front was conducted, as the British artillery kept up a steady bombardment of Delville Wood and German observation balloons began to operate between Ginchy and the wood. On 18 August, Infantry Regiment 125 of the 27th Division was surprised by an attack from the east end of Delville Wood, after its trenches were almost obliterated by British artillery. 
The British infantry arrived as soon as the barrage lifted and Grenadier Regiment 119 to the north was almost rolled up from its left flank but two companies of III Battalion counter-attacked through I Battalion, which had lost too many men to participate. At the north-west side of the wood, Infantry Regiment 121 of the 26th Division found that the British artillery had made Orchard Trench almost untenable. II Battalion had to advance through the shell-fire and dig a new line behind Orchard Trench, to maintain touch with the flanks, before being relieved by I Battalion overnight. The trench was occupied by Infantry Regiment 104 of the 40th Division, from North Street to the west and by Infantry Regiment 121 north of Longueval, which repulsed an attack on 21 August. Another British attack came on 24 August, as Infantry Regiment 88 from the 56th Division began to relieve Infantry Regiment 121 and Grenadier Regiment 119. Every man left in the regiment was needed to withstand the attack, which caused the loss of more than and twelve machine-guns. After the attack, Fusilier Regiment 35 of the 56th Division relieved Infantry Regiment 125, which called the days on the Somme "the worst in the war". II Battalion, Infantry Regiment 181 was sent as reinforcements and one of its companies was annihilated. An attempted counter-attack by the battalion and part of Infantry Regiment 104, was smashed by British artillery-fire and a German counter-bombardment hampered British consolidation. On 27 August, the German garrison in Edge Trench, the last foothold in Delville Wood, was driven out and Infantry Regiment 118 lost A counter-attack to recover the wood was made possible by the arrival of a wave of fresh German divisions on the Somme and in late August, German artillery preparation began for an attack on 31 August. The 4th Bavarian and 56th divisions were to make a pincer attack at on the east and north sides of the wood, with I Battalion, Bavarian Infantry Regiment 5, III Battalion, Fusilier Regiment 35 and II Battalion, Infantry Regiment 88. Each battalion attacked with two companies forward and two in support. Battalion 3 was one of the first to be trained and equipped as a specialist assault unit; training had begun in mid-June, after large numbers of unfit men had been transferred to other units. Fitness training and familiarisation with light mortars and flame-throwers had been provided and the unit arrived on the Somme on 20 August. Parts of the unit began demonstrations and training courses in the new tactics and the 1st and 2nd companies were attached to the Bavarian and Fusilier battalions, which were to retake Delville Wood. The attack began after a bombardment from which had little effect on the British defences. At the east end of the wood, Fusilier Regiment 35 attacked with the support of flame-thrower detachments but the mud was so bad that six became unusable, the artillery preparation was inadequate and the first two attacks failed. The third attempt, after a more extensive bombardment, was called "a wonderful victory". The attack from the north came from three companies of Infantry Regiment 88 and Stormtroops either side of Tea Lane. British return fire caused many casualties and forced the attackers to move from shell-hole to shell-hole, eventually being pinned down in no man's land. The survivors withdrew after dark, rallying at Flers. 
I Battalion, Bavarian Infantry Regiment 5, with a company of Battalion 3, attached flame-thrower and bombing detachments, attacked eastwards towards the 56th Division, along Tea and Orchard trenches, where a bomber killed a British machine-gun crew, by throwing a grenade . A second artillery bombardment was fired at and the managed to take and gain a foothold. The position in the wood was abandoned by the , because the repulse of the 56th Division units, left them isolated and under increasing artillery-fire. Aftermath Analysis Lukin had wanted to defend Delville Wood with machine-guns and small detachments of infantry but prompt German counter-attacks prevented this; Tanner had needed every man for the defence. The British had eventually secured Longueval and Delville Wood in time for the formations to their north to advance and capture High Wood ready for the Flers–Courcelette and the later Somme battles. Over the southern part of the British front, there had been for a small "tongue" of ground a few miles deep. The Allies and Germans suffered many casualties in continuous piecemeal attacks and counter-attacks. Gallwitz recorded that from guns of the the Somme had been destroyed, captured or made unserviceable, along with the guns. In 2005, Prior and Wilson wrote that an obvious British remedy to the salient at Delville Wood was to move the right flank forward, yet only twenty attacks were made in this area, against the wood and to the left. The writers held that British commanders had failed to command and had neglected the troops who were frittered away, such that the attrition of British forces was worse than the effect on the Germans. It was speculated that this was perhaps a consequence of the inexperience of Haig and Rawlinson in handling forces vastly larger than the British peacetime Army. Prior and Wilson also wrote that divisions engaged divisions, most of which suffered casualties greater than to the fired by the British from September, despite shell-shortages and problems in transporting ammunition when rain had soaked the ground. German failings were also evident, particularly in counter-attacking to regain all lost ground, even when of little tactical value, which demonstrated that commanders on both sides had failed to control the battle. In 2009, J. P. Harris wrote that during the seven weeks' battle for control of Delville Wood, the infantry on both sides endured what appeared to be a bloody and frustrating stalemate, which was even worse for the Germans. The greater amount of British artillery and ammunition was directed by RFC artillery-observers in aircraft and balloons, which increased the accuracy of fire despite the frequent rain and mist. German counter-attacks were tactically unwise and exposed German infantry to British fire power regardless of the value of the ground being attacked. In the Fourth Army sector, the Germans counter-attacked seventy times from September against ninety British attacks, many of them near Delville Wood. The British superiority in artillery was often enough to make costly failures of the German efforts and since German troops were relieved less frequently, the constant British bombardments and loss of initiative depressed German morale. 
By the end of July, the German defence north of the Somme had reached a point of almost permanent collapse; on 23 July, the defence of Guillemont, Delville Wood and Longueval almost failed and from 27 to 28 July, contact with the defenders of the wood was lost; on 30 July another crisis occurred between Guillemont and Longueval. Inside the flanks of the German first position, troops occupied shell-holes to evade bombardment by the British artillery, which vastly increased the strain on the health and morale of the troops, isolated them from command, made it difficult to provide supplies and to remove wounded. Corpses strewed the landscape, fouled the air and reduced men's appetites even when cooked food could be brought from the rear; troops in the most advanced positions lived on tinned food and went thirsty. From 15 to 27 July, the 7th and 8th divisions of IV Corps, from Delville Wood to Bazentin le Petit suffered The Battle for Longueval and Delville Wood, had started with a charge by the 2nd Indian Cavalry Division between Longueval and High Wood and two weeks after the wood was cleared, tanks went into action for the first time. A number of important tactical lessons were learned from the battle for the village and wood. Night assembly and advances, dawn attacks after short, concentrated artillery barrages for tactical surprise and building defensive lines on the fringes of wooded areas, to avoid tree roots in preventing digging and to keep clear of shells which were detonated by branches, showering troops with wood splinters. Troops were relieved after two days, as longer periods exhausted them and consumed their ammunition, bombs and rations. The persistence of the British attacks during July and August helped to preserve Franco-British relations, although Joffre criticised the large number of small attacks on 11 August and tried to cajole Haig into agreeing to a big combined attack. On 18 August, a larger British attack by three corps was spoilt by several days of heavy rain, which reduced artillery observation and no ground was gained at Delville Wood. Casualties Another forty-two German divisions fought on the Somme front in July and by the end of the month German losses had increased to men; the number of Anglo-French casualties was more than The battle for Delville Wood was costly for both sides and the 9th (Scottish) Division had from 1 to 20 July, of which the 1st (South African) Infantry Brigade lost . From the 3rd Division had The 5th Division lost from and the 17th Division had from The 8th Division lost from The 14th Division lost and the 33rd Division lost in August and from the end of August to 5 September, the 24th Division had Details of German losses are incomplete, particularly for Prussian divisions, due to the loss of records to Allied bombing in the Second World War. From the 7th and 8th divisions of IV Corps held the line from Delville Wood to Bazentin le Petit and suffered The 5th Division was not relieved from Delville Wood until 3 August and lost a greater loss than at Verdun in May. Infantry Regiment 26, which had been at full strength on 13 July was reduced on 20 July. The British official historian, Wilfrid Miles, wrote that many German divisions returned from a period on the Somme having suffered more than Bavarian Infantry Regiment 5 of the 4th Bavarian Division recorded "the loss of many good, irreplaceable men". 
Subsequent operations The Battle of Flers–Courcelette (15–22 September) was the third British general offensive during the Battle of the Somme and continued the advance from Delville Wood and Longueval. The battle was notable for the first use of tanks and the capture of the villages of Courcelette, Martinpuich and Flers. In the XV Corps area, the 14th (Light) Division on the right advanced to the area of Bull's Road between Flers and Lesbœufs, in the centre the 41st Division, the newest division in the BEF, captured Flers with the help of tank D-17 and the New Zealand Division, between Delville Wood and High Wood on the left, took the Switch Line, linking with the 41st Division in Flers, after two tanks arrived and the German defenders were overrun. The Fourth Army made a substantial advance of but failed to reach the final objectives. The Allies held the Wood until 24 March 1918, when the 47th Division received orders to retire with the rest of V Corps, after German troops broke through the junction of V Corps and VII Corps. British and German soldiers sometimes found themselves marching parallel, as the British troops fell back and formed a new line facing south between High Wood and Bazentin le Grand. On 29 August 1918, the 38th (Welsh) Division attacked at to take the high ground east of Ginchy and then capture Delville Wood and Longueval from the south. The 113th Brigade was virtually unopposed and reached the objective by and the 115th Brigade advanced north of the wood, which was mopped up by the 114th Brigade. Later in the day, the advance reached the vicinity of Morval. The Armistice with Germany ended hostilities three months later. See also Victoria Cross Private William Faulds on 18 July: 1st Battalion, 1st South African Brigade, 9th Scottish Division. Corporal Joseph Davies on 20 July: 10th Battalion Royal Welsh Fusiliers, 76th Brigade, 3rd Division Private Albert Hill on 20 July: 10th Battalion Royal Welsh Fusiliers, 76th Brigade, 3rd Division. Major William la Touche (Billy) Congreve 20 July, Brigade Major 76th Brigade, 3rd Division. Sergeant Albert Gill on 27 July: 1st Battalion King's Royal Rifle Corps, 99th Brigade, 2nd Division. Notes Footnotes References Books Journals Websites Further reading Translation of Meine Tätigkeit im Weltkriege 1914–1918 (Berlin, Verlag Ernst Siegfried Mittler und Sohn 1939) External links Situation map 19 July 1916 (Der Weltkrieg) The Battle of Delville Wood (South African Military History Society) Website of the South African National Memorial, Delville Wood Longueval, Delville Wood, Somme 1916 World War One Battlefields, Delville Wood Official website of Delville Wood Commonwealth War Graves Commission: Delville Wood Memorial The South Africa (Delville Wood) National Memorial, Longueval Conflicts in 1916 1916 in France Battles of the Western Front (World War I) Battles of World War I involving the United Kingdom Battles of World War I involving Germany Battles of World War I involving South Africa Battle of the Somme History of Somme (department) Battle honours of the Rifle Brigade Battle honours of the King's Royal Rifle Corps July 1916 events August 1916 events September 1916 events
24378043
https://en.wikipedia.org/wiki/Tech%20Coast%20Angels
Tech Coast Angels
Tech Coast Angels is the leading source of funding for early-stage companies in Southern California. TCA has over 450 members and is also one of the largest angel networks in the world. An analysis by CB Insights ranked TCA #1 out of 370 angel groups on “Network Centrality” and #5 overall in “Investor Mosaic.” History Since its inception in 1997, TCA members have focused on building valuable companies, personally invested over $255 million in over 465 companies, and helped portfolio companies attract more than $2.2 billion in additional capital, largely from venture capital firms and strategic investors. In 2020, TCA invested $20 million in 64 companies. TCA members provide companies capital, counsel, mentoring and access to a network of potential investors and partners. TCA has chapters located in Los Angeles, Orange County, San Diego, and the Inland Empire. Company exits TCA has had eleven IPOs and over 79 exits in total. Three of those (Mindbody, Green Dot and Sandpiper Networks) achieved multiples between 149 and 265. TCA's successful exits include: Mindbody (wellness business services software) Green Dot Corporation (over-the-counter prepaid debit card) Sandpiper Networks (internet infrastructure) Companion Medical (smart insulin pen system paired with diabetes management app) LeaseLock (sells Certificates of Guarantee promising rent when tenant defaults) Bluebeam Software (PDF collaboration software) Parcel Pending (electronic Smart Locker storage system for multi-family housing) TrueCar (automotive lead generation) Green Earth Technologies (oil substitute made from waste beef tallow) CaseStack (integrated logistics outsourcing) One Stop Systems (manufactures computers for industrial applications) Lytx (formerly DriveCam; video event recorder for driver feedback safety) CytomX Therapeutics (antibody therapeutics for a variety of serious diseases including cancer) WiseWindow (open qualitative content aggregation platform) Vital Therapies (liver assist device) Beam Global (portable solar EV charging station with no grid ties) Language Weaver (machine translation software) WeGoLook (dispatches in-person Lookers to verify claims made by internet sellers) Portfolium (online social portfolio network) AIRSIS (remote asset tracking & management) Althea (cGMP manufacturing, analytical development, aseptic filling) N Spine (spine stability system) OptionEase (stock option audit financial software) RetroSense Therapeutics (biologic approach to vision restoration in retinal degenerative conditions) Greenplum (intelligent data routing systems) Savara Pharmaceuticals (inhalable antibiotic for the treatment of MRSA in cystic fibrosis patients) ClearCare (management of home care agencies) eTeamz (B2C amateur athletic community) Olive Medical (HD surgical camera) Pictage.com (online event photography services) Trius Therapeutics (antimicrobial drug) Molecular Medicine BioServices (contract clinical manufacturer) Allylix (artificial fragrance production) In addition to the 79 exits, 103 companies in the portfolio were shutdowns with no returns. Based on all outcomes to date, TCA's portfolio has returned 4.8 times invested capital and achieved an IRR of 22%. Investment portfolio The TCA investment portfolio page lists investments in general categories such as Life Sciences, Internet/Apps, Software, Consumer, CleanTech/Industrials, Hardware, Financial, and Business. 
Active portfolio companies include: Apeel Sciences (tasteless edible coatings for fresh produce) The Bouqs Company (Online Flower Shopping) Buy It Installed (button integrated into retailer e-commerce site to include installation) Cloudbeds (SAAS Hotel Hospitality Management Software) Cognition Therapeutics (Memory Restoration for Alzheimer's) ElephantDrive (storage-as-a-service software) Kickstart (Cadence Biomedical) (helps people with severe disabilities walk) myLAB Box (testing for STDs from the comfort of home) Ninja Metrics (measures social influence) PharmaSecure (mass serialization codes for products in emerging markets) Procore Technologies (Construction Management SAAS) Ranker (UGC/ social platform for ranking things) Whistle (allows hotels to communicate with guests through Mobile Messaging and SMS) YouMail (voice messaging for cell phones) References Financial services companies established in 1997 Venture capital firms of the United States
2325810
https://en.wikipedia.org/wiki/Anno%201602
Anno 1602
Anno 1602: Creation of a New World, entitled 1602 A.D. in North America, is a 1998 construction and management video game developed by Max Design and published by Sunflowers Interactive. Set in the early modern period, it requires the player to build colonies on small islands and manage resources, exploration, diplomacy and trade. The game design is noteworthy for its attempt to implement a 'progressive' artificial intelligence, meaning that the pace of the game changes in response to how quickly players act. Anno 1602 was a commercial blockbuster that attracted buyers outside the usual demographic of German computer games, including a significant percentage of female customers. It was the German market's best-selling computer game of 1998, and remained the region's highest seller of all time by 2003, with over 1.7 million units sold in German-speaking countries. The game was less successful in international markets, but ultimately sold above 2.7 million copies worldwide by 2004. Anno 1602 began the Anno series, which led to the sequels Anno 1503, Anno 1701, Anno 1404, Anno 2070, Anno 2205 and Anno 1800. Gameplay Anno 1602 aims to be a mix of simulation and strategy gaming, giving players the chance to create a realistic and lively world, modeling it to their liking. The ultimate goal of the game is to discover chains of islands, settle them, develop on them, and then trade with other players. Players can also trade with their own colonies, and various neutral CPU controlled players such as native tribesmen. Even though the game focuses heavily on an economic standpoint, on various occasions the player will be forced (or will bring it upon others) to defend their islands against possible enemies. Anno 1602 is a colony building and trading simulation. The player controls an unnamed European nation in 1602 AD that is looking to expand their power into the New World. As the game starts, the player will need to find a nearby island, colonize it, and start building up an economy. The US release contains all 6 scenarios (in addition to the tutorial and training game) that were included in the original European release, as well as 9 new scenarios, along with a "free play role". In Anno 1602, the player can choose to play out one of the game's many scenarios or engage in a free form game. The game also features online and network play with up to 4 other players simultaneously. Because the network play is less sophisticated than that of modern games, lags and disconnections often occur. Despite this, Anno 1602 is still occasionally played by small groups of LAN PC gamers, or by players over the internet. The game is also playable via null modem connection. Civilizations Anno 1602 is designed to be as nationalistically neutral as possible. After entering a character name, the player is asked to pick one of four different colored banners to represent their country. The absence of different civilizations with different characteristics contrasts with other games such as The Settlers, and Age of Empires. Technology Unlike other games where technology plays a major role in one player defeating another, Anno 1602 instead makes technology upgrades more relevant in inner-colony affairs. Instead of buying upgrades to ships to perform better in huge naval battles, it is often the case that upgrades are made so that the ships can carry more cargo, and therefore make the colony more money. 
The majority of the buildings in the game can, and often must, be technologically upgraded throughout the game to please the colony's citizens, which produces more cash for the colony, with which the player can continue upgrading their nation and expand to other islands. Buildings Anno 1602 is about discovery. As the colony grows and spreads, the player gains access to more and more building types and citizens construct bigger and more impressive housing for themselves. The player is required to reach a certain population level before access is gained to weapons factories. Once the player has the factories, a large number of buildings are needed to produce weapons, and additional buildings to construct units. After the buildings are constructed, the player must pay a constant flow of money to keep each building running. This "line of production", though difficult, has been incorporated into newer games such as Stronghold. Custom scenarios Anno 1602 allows for the creation of user-made maps, using the Scenario Builder. This tool is simpler and easier to learn than comparable editors used in more modern games, but it has fewer capabilities. This, along with instant "Random Maps", keeps many players coming back to Anno 1602. Not all versions of Anno 1602 shipped with a map editor; therefore, several fan-made editors were created. Development Anno 1602 was conceived in April 1996 at Max Design, a Schladming-based game developer founded in 1991. Following a period of financial turmoil in which the company neared bankruptcy, the team began Anno as a spiritual sequel to its earlier title 1869 – Hart am Wind!, a competitor of The Patrician. The team at Max Design numbered only four members: designer Wilfried Reiter, brothers Martin and Albert Lasser and artist Ulli Koller. Anno 1602's design occurred gradually. From December 17–22, 2018, the game was given away to Uplay users for free on PC. Distribution and commercial performance Debut Anno 1602 was a commercial hit. In the German market, the title debuted in first place on Media Control's computer game sales rankings for the second half of April 1998, and still held the position after six weeks on the charts. By that time, it had sold 200,000 units. Der Spiegel reported in June that this performance made Anno the title with "the best chance to become the German number one this year". The game subsequently received a "Platinum" award from the Verband der Unterhaltungssoftware Deutschland (VUD), for sales of at least 200,000 units throughout the German-speaking world: Germany, Austria and Switzerland. Anno 1602's streak at #1 ended in the latter half of June when Commandos: Behind Enemy Lines captured the spot, which it maintained for 16 weeks. Sunflowers' game remained at second on the Media Control charts for the last two weeks of June, July, August and September. Accruing 360,000 domestic sales by September, Anno 1602 was the German market's best-selling game during the first nine months of 1998. PC Games' Petra Maueröder wrote that Germany was "plagued by the Anno 1602 fever" in the half-year following its release, while Max Falkenstern likened it to a "pandemic" in . The game's sales for the period resulted in a 300% increase in Sunflowers' year-over-year revenue, to DM 20 million. Remarking on this success at the time, Eva Müller and Hans-Peter Canibol of Focus noted that Sunflowers had become "one of the few German companies that [has] asserted itself in the American-dominated [computer game] market". 
Sunflowers forecast sales of 600,000 units for Anno by the end of 1998, based on its performance earlier in the year. The game claimed second on Media Control's charts during the last two weeks of October, below Need for Speed 3, and then dropped to #3 for the second halves of November and December. Sales had risen to 400,000 units after six months of availability, and Anno 1602 ultimately became the German market's best-selling computer game of 1998. At the 1999 Milia festival in Cannes, it took home a "Gold" prize for revenues above €15 million in the European Union during the previous year. The high sales of Anno 1602 in German-speaking countries derived partly from Sunflowers' copy protection scheme, according to PC Games. Although the game launched as a direct competitor of StarCraft, it significantly outsold that title in the region by the end of 1998, despite the latter's success worldwide. Petra Maueröder reported that, because StarCraft had shipped without copy protection in Germany, its piracy rate there reached nine illegal copies for every legal version sold. Conversely, Sunflowers attempted to combat pirates by making Anno 1602 "one of the first games" printed on extended-capacity CD-ROMs, a writer for PC Games noted. These exceeded the storage limits of standard discs, which made Anno 1602 incompatible with most consumer CD-Rs and burners of the time. The effort resulted in faulty disc shipments, a common issue for extended-capacity CDs, but was ultimately judged by PC Games as "a complete success, despite the availability of cracks". Later years Anno 1602 became one of the construction and management simulations that "dominated the German sales charts for years", according to Der Spiegel's Frank Patalong. By the end of January 1999, it had spent 38 weeks on Media Control's bestsellers list, ranking fourth that month. It soon became the first computer game to earn the VUD's "Double-Platinum" award, for 400,000 sales in German-speaking countries. By May, Anno 1602 had sold through 600,000 units in the German market and—together with its expansion pack New Islands, New Adventures—one million copies across Europe. PC Games declared it the German-speaking world's most successful computer title from April 1998 to July 1999, and, as of September, the region's biggest computer game hit of all time. It continued to appear in Media Control's sales rankings through the latter half of September 1999, when it charted in 26th place and secured its 70th consecutive week in the top rankings. Sales surpassed 650,000 units by October; New Islands had sold above 200,000 units by that date. In the second part of October 1999, Anno 1602: Königs-Edition debuted at #4 on Media Control's charts. Announced in August for a fall launch, the SKU bundled Anno 1602 and New Islands with unique bonus missions. The new version had spent 10 weeks in the rankings by the end of 1999, including a seventh-place finish for December. It continued to chart in Media Control's top 10 during 2000: the Königs-Edition reached #4 for the month of February and remained at 11th by August, having spent 12 months in the top 30. The following month, Infogrames Germany announced Anno's re-release as a "Soft Price" budget title, with its cost reduced to DM 40. The Soft Price edition took #6 in October and November 2000 on Media Control's charts for budget-priced games, and stayed in the top 15 for February and March 2001. 
Anno 1602's sales had risen above one million units in German-speaking countries by September 2000, and by April 2001 had surpassed 1.5 million units worldwide. Global sales increased to 1.7 million copies by October 2001, of which the Königs-Edition accounted for 170,000 units. The SKU was re-released that month under Electronic Arts' "Classics" label, again at a budget price, and it debuted in 11th place on Media Control's budget charts for November. Anno 1602 had sold 2 million times by December 2001, including "well over a million" in the German-speaking world, according to Sunflowers. That month, the company revealed a new partnership with "ak tronic", a rack jobber known for its budget-priced "Pyramid" displays in German stores. This type of budget line formed a significant part of the German game market, and a journalist later called it a key to Anno's high lifetime sales. Under the agreement, ak tronic included budget-priced (€10) copies of Anno 1602 in its Pyramid displays, starting with a shipment of 250,000 units in January 2002. Peter Schroer of ak tronic forecast 500,000 sales for Anno's Pyramid edition by the end of 2002. In the German market, the new release proceeded to sell 50,000 units in its first month on shelves, and it rose to #1 on Media Control's budget charts in February 2002. Taking second in March, it finished with top-10 placements in June, July, August, September, October and November. Media Control named it Germany's best-selling budget computer game (under €28) of the year, while its sequel Anno 1503 took first place for 2002 among full-price titles. As of April 2003, Anno 1602 remained the German market's biggest-ever computer game hit. Its Pyramid edition secured top-10 positions on Media Control's budget charts for the first three months of that year. After an 11th-place finish in June, the Pyramid edition had spent 17 consecutive months in Media Control's top 20. Anno 1602's global sales climbed to 2.7 million units by September 2004. In 2007, Jörg Langer wrote that the Pyramid edition alone had contributed roughly 750,000 sales to the game's lifetime total. Demographics According to Heiko Klinge of GameStar, a significant number of Anno buyers fell outside the standard demographic for German computer games. By 2000, Sunflowers noted a high "proportion of inexperienced players and women in the Anno fan base", which it attributed to the game's design goal of "play without stress". Klinge echoed this theory. Der Spiegel similarly reported Anno's wide appeal among both casual and hardcore players, and the magazine's Carsten Görig argued that "many can agree on Anno because Anno is a phenomenon". Der Spiegel's Richard Löwenstein cited Anno as an early computer game to draw female players; he claimed in 2002 that approximately 25% of its buyers were women. In 2011, Klinge likewise called the game's number of female players "an absolute novelty" before The Sims, and reported that women made up nearly 50% of Anno 1602 customers. Anno 1602's success was largely confined to the German market; GameStar reported that, like Gothic, it failed to make an "international breakthrough". Sunflowers president Adi Boiko remarked: "When we wanted to distribute Anno 1602 internationally, we encountered enormous prejudices that a game from Germany couldn't be anything special". Of Anno 1602's 2.5 million worldwide sales by late 2002, the German market accounted for 1.7 million. 
The title's limited crossover in other markets was common among German games, particularly in Anno's genre of construction and management, despite the outsize popularity of such titles in the German-speaking world. Der Spiegel's Frank Patalong argued that games like Anno 1602 were a "specifically German phenomenon: nowhere else in the world are [these] simulations as successful as here at home". Stefan Schmitt of Der Spiegel and Jochen Gebauer of Games Aktuell wrote that such games were especially disliked in the United States, and a writer for GameStar stated that no Anno title had been a hit in that market by mid-2006. Boiko noted that it was "very difficult for us to find a good distributor" in the United States, which led to Anno 1602's delayed release and lack of marketing there. However, he was pleased with its 200,000 sales in the country by early 2002. This number rose to 250,000 units by that November. Reception Initial release North American version Legacy Anno 1602 began the Anno series. In 2018, PC Games named it one of the most influential games ever released. The digit sum of the year in each title (1602, 1503, 1701, 1404, 2070, 2205 and 1800) is 9. See also Unknown Horizons Video gaming in Germany Notes References External links Official Anno 1602 site "The WarGamer review" 1998 video games Age of Discovery video games City-building games GT Interactive Software games Real-time strategy video games Multiplayer null modem games Video games developed in Austria Video games with expansion packs Video games with isometric graphics Windows games Windows-only games Anno (series) Video games set in the 17th century
21272531
https://en.wikipedia.org/wiki/StoryBoard%20Quick
StoryBoard Quick
StoryBoard Quick is a storyboarding software application for creating and editing digital storyboards, aimed at non-graphic artists and at the rapid creation of comp boards; no drawing is necessary. It is used primarily in the film and TV industry by film directors, producers, writers, commercial production companies and educators to produce a visual layout of media projects for communicating with crews, producers and/or clients before commencing the main production process. History StoryBoard Quick v1.0 was the first vertical-market storyboarding application created for filmmakers on the Mac OS. It combined features of page layout, text entry, layered-image manipulation and integrated artwork. It was introduced at ShowBiz Expo in 1993 in Los Angeles, and released at Macworld Conference & Expo in 1994 in San Francisco. A Microsoft Windows version followed in 1995. StoryBoard Quick is published and supported by PowerProduction Software. Co-founded by Paul Clatworthy and Sally Ann Walsh, the company is privately owned and located in Los Gatos, California. Use and features StoryBoard Quick is used to plan the spatial relationships between characters and props within their locations in shots and scenes, using built-in 2D storyboard graphics (multi-angle rotatable characters, colorizable props and location backgrounds), which can be combined with imported digital art or photos. StoryBoard Quick also facilitates planning when starting from a screenplay, with import wizards that read scripts from screenwriting applications such as Final Draft, Movie Magic Screenwriter, Storyist, Montage and others. StoryBoard Quick offers numerous pre-formatted professional storyboarding templates for printing or distributing boards, along with export formats for continuing the digital workflow into editing software or Internet distribution (HTML or SWF). References Official website for StoryBoard Quick System Requirements Best Storyboard Software Winner Gold - StoryBoard Quick Studio 2017 Best Previz Software Winner Gold - StoryBoard Artist Studio 2017 Software Review Wired Magazine, April 1994, Caleb John Clark Videomaker Videomaker Sept 2012 LifeLong Learning 2013 External links Official website for PowerProduction Movie Manual on Storyboarding Film production software
9044223
https://en.wikipedia.org/wiki/Bernie%20S
Bernie S
Bernie S. (born Edward Cummings) is a computer hacker living in Philadelphia, Pennsylvania. He was a regular panelist on the WBAI radio show Off the Hook. In 2001 he appeared in Freedom Downtime, a documentary produced by 2600 Films. Confiscation In 1995, the police department of Haverford Township, Pennsylvania, happened upon what they believed to be a drug transaction. However, upon looking closer, they discovered that Bernie and others were actually buying and selling crystals used in crystal radios and other technological applications. The police who responded were not knowledgeable about technology or computers, which led them to confiscate all the crystals as suspicious materials, along with some reading material such as The Whole Spy Catalog. After the United States Secret Service inquired about the seized equipment, Special Agent Thomas Varney informed local police that some of the equipment was for illicit purposes only. Bernie was subsequently arrested and charged with possession of a non-working RadioShack tone dialer of the type used as a red box (a phreaking device). Additional materials were seized and never returned. Criminal complaint Charges were filed against Edward E. Cummings (case number 95-320) in the United States District Court for the Eastern District of Pennsylvania. The charges were for the possession of a speed dialer, an IBM ThinkPad laptop, and computer discs which could be used for unauthorized telecommunications access. The Grand Jury convened on March 13, 1995, and Bernie S's trial was scheduled for September 8, 1995. Varney labeled Bernie S a danger to society for having too much information, because 2600 Films and Bernie S had published Secret Service office locations, phone numbers and radio frequencies, along with photos and codes. Imprisonment On September 7, 1995, Bernie S. pleaded guilty to possession of technology which could be used in a fraudulent manner. He was released on October 13, 1995. In January 1996, he was arrested for tampering with evidence, a violation of the conditions set for his probation. In March 1996, he received a sentence of 6 to 24 months. While awaiting a parole hearing, he was charged by Bucks County, Pennsylvania prison officials with misuse of the telephone system when he received a call from Rob Bernstein, a reporter for Internet Underground. The charges could have added as much as nine months to his sentence. Bernie S. appealed the decision, and he filed a grievance for harassment and intimidation against the prison. While awaiting his release on parole, he was moved to a high-security facility, where he was attacked by a fellow inmate and suffered a broken arm and jaw. After a letter-writing campaign, a telephone campaign, and a physical demonstration outside the prison where he was housed, Bernie S. was released on parole on September 13, 1996. See also 2600: The Hacker Quarterly 2600's Off The Wall Radio Program Haverford Township Police Department The Secret Service Big Brother is Watching You Notes References Place of birth missing (living people) Year of birth missing (living people) Living people 2600: The Hacker Quarterly
17332785
https://en.wikipedia.org/wiki/WaveMaker
WaveMaker
WaveMaker is an enterprise-grade Java low-code platform for building software applications and platforms. WaveMaker Inc. is headquartered in Mountain View, California. For enterprises, WaveMaker is a low-code platform that accelerates their app development and IT modernization efforts. For ISVs, it is a consumable low-code component that can sit inside their product and offer customizations. WaveMaker Platform is licensed software that enables organizations to run their own end-to-end application platform-as-a-service (aPaaS) for building and running custom apps. It also allows developers and business users to work with technologies to create apps that can be extended or customized. Those apps can consume APIs, visualize data and automatically support multi-device responsive interfaces. The WaveMaker low-code platform enables organizations to deploy applications on public or private cloud infrastructure, and containers can be deployed on top of virtual machines or on bare metal. The software provides a Graphical User Interface (GUI) console to manage the IT app infrastructure and capabilities based on Docker containerization. The solution provides features for app deployment automation, app lifecycle management, release management, deployment workflow and access rights, including: apps for web, tablet, and smartphone interfaces; enterprise technologies like Java, Hibernate, Spring, AngularJS and jQuery; Docker-provided APIs and CLI; and software stack packaging, container provisioning, stack and app upgrading, replication, and fault tolerance. WaveMaker Studio The WaveMaker RAD Platform is built around WaveMaker Studio, a WYSIWYG rapid development tool that allows computer-literate business users to compose an application using a drag-and-drop method. WaveMaker Studio supports rapid application development (RAD) for the web, similar to what products like PowerBuilder and Lotus Notes provided for client–server computing. WaveMaker Studio allows developers to produce an application once, then auto-adjust it for a particular target platform, whether a PC, mobile phone, or tablet. Applications created using WaveMaker Studio follow a model–view–controller architecture. WaveMaker Studio has been downloaded more than two million times. The Studio community consists of 30,000 registered users. Applications generated by WaveMaker Studio are licensed under the Apache license. Studio 8 was released on September 25, 2015. The prior version, Studio 7, marked some notable development milestones: it was based on the AngularJS framework, whereas previous Studio versions (6.7, 6.6 and 6.5) used the Dojo Toolkit. Some of the features of WaveMaker Studio 7 include: Automatic generation of Hibernate mappings and Hibernate queries from database schema import. Automatic creation of Enterprise Data Widgets based on schema import. Each widget can display data from a database table as a grid or edit form. Edit forms implement create, update and delete functions automatically (an illustrative sketch of this pattern appears after the technology list below). WYSIWYG Ajax development studio that runs in a browser. Deployment to Tomcat, IBM WebSphere, WebLogic, JBoss. Mashup tool to assemble web applications based on SOAP, REST and RSS web services, Java services and databases. Supports existing CSS, HTML and Java code. Deploys a standard Java .war file. Technologies and Frameworks WaveMaker allows users to build applications that run on an "Open Systems Stack" based on the following technologies and frameworks: AngularJS, Bootstrap, NVD3, HTML, CSS, Apache Cordova, Hibernate, Spring, Spring Security, Java. 
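To make the schema-driven CRUD pattern described above more concrete, the following is a minimal, hand-written sketch of the kind of Hibernate/JPA entity and Spring service that such tools typically produce from an imported table. It is an illustration only: the CUSTOMER table, field names, endpoints, and the use of Spring Boot with Spring Data JPA are assumptions made for this sketch, not WaveMaker's actual generated artifacts.

// Illustrative sketch only; not WaveMaker's generated output. Assumes Spring Boot 2.x
// with the web and data-jpa starters and an embedded database (e.g. H2) on the classpath.

import java.util.List;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// JPA entity standing in for a table imported from the database schema.
@Entity
@Table(name = "CUSTOMER")
class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    @Column(name = "NAME")
    private String name;

    @Column(name = "EMAIL")
    private String email;

    public Integer getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// One repository per imported table supplies the basic persistence operations.
interface CustomerRepository extends JpaRepository<Customer, Integer> {
}

// Endpoints of the shape a grid or edit-form data widget would call from the browser.
@RestController
@RequestMapping("/customers")
class CustomerController {

    private final CustomerRepository repository;

    CustomerController(CustomerRepository repository) {
        this.repository = repository;
    }

    @GetMapping                  // grid widget: list all rows of the table
    List<Customer> list() {
        return repository.findAll();
    }

    @PostMapping                 // edit form: create or update a row
    Customer save(@RequestBody Customer customer) {
        return repository.save(customer);
    }

    @DeleteMapping("/{id}")      // edit form: delete a row by primary key
    void delete(@PathVariable Integer id) {
        repository.deleteById(id);
    }
}

// Minimal bootstrap so the sketch can actually be run as a Spring Boot application.
@SpringBootApplication
class CustomerCrudSketch {
    public static void main(String[] args) {
        SpringApplication.run(CustomerCrudSketch.class, args);
    }
}

The point of the sketch is the division of labour the article describes: the schema import supplies the entity and repository, while the browser-side grid and edit-form widgets only need endpoints of this shape, so little or no persistence plumbing has to be written by hand.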
The various supported integrations include:
Databases: Oracle, MySQL, Microsoft SQL Server, PostgreSQL, IBM DB2, HSQLDB
Authentication: LDAP, Active Directory, CAS, Custom Java Service, Database
Version Control: Bitbucket (or Stash), GitHub, Apache Subversion
Deployment: Amazon AWS, Microsoft Azure, WaveMaker Private Cloud (Docker containerization), IBM WebSphere, Apache Tomcat, SpringSource tcServer, Oracle WebLogic Server, JBoss (WildFly), GlassFish
App Stores: Google Play, Apple App Store, Windows Store
History
WaveMaker was founded as ActiveGrid in 2003. In November 2007, ActiveGrid was rebranded as WaveMaker. WaveMaker was acquired by VMware, Inc. in March 2011, but VMware terminated support for the WaveMaker project two years later, in March 2013. In May 2013, Pramati Technologies acquired the assets of WaveMaker from VMware. In February 2014, WaveMaker, Inc. released WaveMaker Studio 6.7, the last version of the open-source, downloadable Studio. In September 2014, WaveMaker, Inc. launched the WaveMaker RAD Platform (with WaveMaker Studio version 7), licensed software that enabled organizations to run their own end-to-end application platform as a service (aPaaS) for building and running custom apps.
References
External links
RedMonk WaveMaker podcast
JavaScript libraries Ajax (programming) Web frameworks Linux integrated development environments Java development tools Unix programming tools User interface builders Java platform software Cloud computing providers Cloud platforms Web applications Rich web application frameworks JavaScript JavaScript web frameworks Self-hosting software Web development software IOS development software Android (operating system) development software Mobile software programming tools
50421011
https://en.wikipedia.org/wiki/Clock%20Software
Clock Software
Clock Software is a private limited company developing software and hardware solutions for the hospitality industry: property management systems, restaurant point of sale, online booking engines, channel managers, self-service kiosks and mobile hotel apps. Its headquarters is in London, UK.
History
Clock Software was incorporated in 1994 in Varna, Bulgaria, under the name Clock Ltd (Клок ООД). Initially, the company specialized in selling hardware and peripherals but soon switched its focus to the development of hotel software. In 1996, Clock launched its first property management system, ClockFront for Windows. At the time, a large-scale privatisation was under way as part of the country's transition to democracy and a market economy. All hotels previously owned by the state were changing ownership and needed new software to replace the previous, often centralised, management systems. Clock Software took the opportunity to offer its software to the newly privatised hotels. In 2006, Clock Software started to expand to countries outside Bulgaria and opened offices in Romania (2007) and Croatia (2008). The first markets where its products were adopted were Romania, Macedonia and Croatia. In 2010, Clock decided to create a completely new hospitality software suite based on cloud technology. The new product was built around the idea of consolidating separate applications into one cloud-based platform and switching from a data-focused to a guest-centred software model. A new branch was registered in London, UK under the name Clock Software Ltd. to take over the international development and distribution of the cloud-based software products, while the Bulgarian branch remained in charge of software development and support. Clock Software launched its first cloud product, the free Internet reservation system InnHand, in 2012. In 2013, it launched the hotel management platform Clock PMS+. A year later InnHand was discontinued and the company focused entirely on the new system. In 2016, the company released its first hardware device, a self-service kiosk.
Distribution and partners
Clock Software offers its Windows-based installable software solutions in Eastern Europe, while its latest cloud-based hotel management platform, Clock PMS+, is available globally. Clock PMS+ has customers in more than 65 countries (as of the end of 2018), with Lark Hotels and McMillan Hotels among them. In 2015, Clock PMS+ was featured in Software Advice's annual report "Frontrunners Report for Hotel Management System - 2018", and in 2016 Clock Software was included in Grant Thornton's report "Emerging clouds in hotel technology | Spotlight on cloud-based PMS". Clock PMS+ is a property management system delivered as SaaS. It is hosted on Amazon Web Services (AWS) and is a certified partner of various third-party providers: payment gateways (PayPal, Authorize.Net, Worldpay, Adyen); channel managers (RoomCloud and Yield Planet); door lock systems (ASSA ABLOY); and hotel meta-search websites (TripConnect by Tripadvisor).
Products
Clock PMS+ is a cloud-based suite of software solutions for hotels, chains, vacation rentals and other accommodation providers.
It includes the following modules:
Enterprise-class cloud-native hotel PMS
Online distribution autopilot
Payments Autopilot
Guest Engagement Autopilot
Online check-in/check-out Autopilot
Hotel Booking Engine
Kiosk Software and Terminals
MICE
Hotel Bar & Restaurant POS
Automated Revenue Management
References
External links
Clock Software (PMS) Easy UI Software Company API Interface & PMS Integration
What the Hyatt Hotel Hack Means for You: 3 Hotel Security Tips for Small Hotels in 2016 - Capterra Blog
Clock Software launches a Booking enquiry addition to its cloud property management system
Clock PMS+ to debut at TTE 2017 with Advanced Payment Processing integration
Clock Software introduces Clock Kiosk as part of its cloud hotel system
Winners of the 2019 HotelTechAwards Announced
Software industry Business software Travel technology
21128138
https://en.wikipedia.org/wiki/Windows%207%20editions
Windows 7 editions
Windows 7, a major release of the Microsoft Windows operating system, has been released in several editions since its original release in 2009. Only Home Premium, Professional, and Ultimate were widely available at retailers. The other editions focus on other markets, such as the software development world or enterprise use. All editions support 32-bit IA-32 CPUs, and all editions except Starter support 64-bit x64 CPUs. 64-bit installation media are not included in Home Basic edition packages, but can be obtained separately from Microsoft. According to Microsoft, the features of all editions of Windows 7 are stored on the machine, regardless of which edition is in use. Users who wished to upgrade to an edition of Windows 7 with more features were able to use Windows Anytime Upgrade to purchase the upgrade and unlock the features of those editions, until it was discontinued in 2015. Microsoft announced Windows 7 pricing information for some editions on June 25, 2009, and Windows Anytime Upgrade and Family Pack pricing on July 31, 2009.
Main editions
Mainstream support for all Windows 7 editions ended on January 13, 2015, and extended support ended on January 14, 2020. Professional and Enterprise volume-licensed editions have paid Extended Security Updates (ESU) available until at most January 10, 2023. Since October 31, 2013, Windows 7 has no longer been available at retail, except for remaining stocks of the preinstalled Professional edition, which was officially discontinued on October 31, 2016.
Windows 7 Starter is the edition of Windows 7 that contains the fewest features. It is only available in a 32-bit version and does not include the Windows Aero theme. The desktop wallpaper and visual styles (Windows 7 Basic) are not user-changeable. In the release candidate versions of Windows 7, Microsoft intended to restrict users of this edition to running three simultaneous programs, but this limitation was dropped in the final release. This edition does not support more than 2 GB of RAM. It was available pre-installed on computers, especially netbooks or Windows tablets, through system integrators or computer manufacturers using OEM licenses.
Windows 7 Home Basic was available in "emerging markets", in 141 different countries. Some Windows Aero options are excluded, along with several new features. This edition is available in both 32-bit and 64-bit versions and supports up to 8 GB of RAM. Home Basic, along with other editions sold in emerging markets, includes geographical activation restriction, which requires users to activate Windows within a certain region or country.
Windows 7 Home Premium contains features aimed at the home market segment, such as Windows Media Center, Windows Aero and multi-touch support. It was available in both 32-bit and 64-bit versions.
Windows 7 Professional is targeted towards enthusiasts, small-business users, and schools. It includes all the features of Windows 7 Home Premium, and adds the ability to participate in a Windows Server domain. Additional features include support for up to 192 GB of RAM (increased from 16 GB), operating as a Remote Desktop server, location-aware printing, backup to a network location, Encrypting File System, Presentation Mode, Software Restriction Policies (but not the extra management features of AppLocker) and Windows XP Mode. It was available in both 32-bit and 64-bit versions.
Windows 7 Enterprise targeted the enterprise segment of the market and was sold through volume licensing to companies which have a Software Assurance (SA) contract with Microsoft. Additional features include support for Multilingual User Interface (MUI) packages, BitLocker Drive Encryption, and UNIX application support. Not available through retail or OEM channels, this edition is distributed through SA. As a result, it includes several SA-only benefits, including a license allowing the operation of diskless nodes (diskless PCs) and activation via Volume License Key (VLK).
Windows 7 Ultimate contains the same features as Windows 7 Enterprise, but this edition was available to home users on an individual license basis. For a while, Windows 7 Home Premium and Windows 7 Professional users were able to upgrade to Windows 7 Ultimate for a fee using Windows Anytime Upgrade if they wished to do so, but this service was stopped in 2015. Unlike Windows Vista Ultimate, Windows 7 Ultimate does not include the Windows Ultimate Extras feature or any exclusive features, as Microsoft had stated.
Special-purpose editions
The main editions can also take the form of one of the following special editions: the features in the N and KN editions are the same as their equivalent full versions, but they do not include Windows Media Player or other Windows Media-related technologies, such as Windows Media Center and Windows DVD Maker, due to limitations set by the European Union and South Korea, respectively. The cost of the N and KN editions is the same as the full versions, as the Media Feature Pack for Windows 7 N or Windows 7 KN can be downloaded without charge from Microsoft.
Upgrade editions
In-place upgrade from Windows Vista with Service Pack 1 to Windows 7 is supported if the processor architecture and the language are the same and their editions match (see below). In-place upgrade is not supported for earlier versions of Windows; moving to Windows 7 on these machines requires a clean installation, i.e. removal of the old operating system, installing Windows 7 and reinstalling all previously installed programs. Windows Easy Transfer can assist in this process. Microsoft made upgrade SKUs of Windows 7 for selected editions of Windows XP and Windows Vista. The difference between these SKUs and full SKUs of Windows 7 is their lower price and the requirement of proof of license ownership of a qualifying previous version of Windows. The same restrictions on in-place upgrading apply to these SKUs as well. In addition, Windows 7 is available as a Family Pack upgrade edition in certain markets, to upgrade to Windows 7 Home Premium only. It gives licenses to upgrade three machines from Vista or Windows XP to the Windows 7 Home Premium edition. These are not full versions, so each machine to be upgraded must have one of the qualifying previous versions of Windows for the licenses to work. In the United States, this offer expired in early December 2009. In October 2010, to commemorate the anniversary of Windows 7, Microsoft once again made the Windows 7 Home Premium Family Pack available for a limited time, while supplies lasted.
Upgrade compatibility
There are two possible ways to upgrade to Windows 7 from an earlier version of Windows:
An in-place install (labelled "Upgrade" in the installer), where settings and programs are preserved from an older version of Windows. This option is only sometimes available, depending on the editions of Windows being used, and is not available at all unless upgrading from Windows Vista.
A clean install (labelled "Custom" in the installer), where all data and settings, including but not limited to user accounts, applications, user settings, music, photos, and programs, are erased and the current operating system is replaced with Windows 7. This option is always available and is required for all versions of Windows XP.
The table below lists which upgrade paths allow for an in-place install. Note that in-place upgrades can only be performed when the previous version of Windows is of the same architecture. If upgrading from a 32-bit installation to a 64-bit installation, or downgrading from a 64-bit installation to a 32-bit installation, a clean install is mandatory regardless of the editions being used.
Anytime Upgrade editions
Until 2015, Microsoft also supported in-place upgrades from a lower edition of Windows 7 to a higher one, using the Windows Anytime Upgrade tool. There are three retail options available (though it is unclear whether they can be used with previous installations of the N versions). There are no Family Pack versions of the Anytime Upgrade editions. It was possible to use the product key from a standard upgrade edition to accomplish an in-place upgrade (e.g. Home Premium to Ultimate).
Starter to Home Premium
Starter to Professional1
Starter to Ultimate1
Home Premium to Professional
Home Premium to Ultimate
Professional to Ultimate1
1 Available in retail, and at the Microsoft Store
Derivatives
On February 9, 2011, Microsoft announced Windows Thin PC, a branded derivative of Windows Embedded Standard 7 with Service Pack 1, designed as a lightweight version of Windows 7 for installation on low-performance PCs as an alternative to using a dedicated thin-client device. It succeeded Windows Fundamentals for Legacy PCs, which was based on Windows XP Embedded. Windows Thin PC was released on June 6, 2011. Windows 7 is also available in two forms of Windows Embedded, named Windows Embedded Standard 7 (known as Windows Embedded Standard 2011 prior to release, the newest being Windows Embedded Standard 7 with Service Pack 1) and Windows Embedded POSReady 7. Both versions are eligible for Extended Security Updates (ESU) for up to three years after their end of extended support dates.
Comparison chart
See also
Windows 2000 editions
Windows XP editions
Windows Vista editions
Windows 8 editions
Windows 10 editions
Notes
References
Further reading
Windows 7
4227933
https://en.wikipedia.org/wiki/Code%20reviewing%20software
Code reviewing software
Code reviewing software is computer software that helps humans find flaws in program source code. It can be divided into two categories:
Automated code review software checks source code against a predefined set of rules and produces reports (a minimal illustrative sketch appears at the end of this entry). Different types of code browsers visualise software structure and help humans understand it better; such systems are geared more towards analysis, because they typically do not contain a predefined set of rules to check the software against.
Manual code review tools allow people to collaboratively inspect and discuss changes, storing the history of the process for future reference.
See also
DeepCode (2016), cloud-based, AI-powered code review platform
References
Software review
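Below is a minimal sketch of the automated, rule-based category described above, written in Python. The two rules (a maximum line length and flagging leftover TODO comments) are arbitrary illustrations, not the rule set or interface of any particular product.

```python
import sys

# Illustrative rules only; real tools ship much larger, configurable rule sets.
MAX_LINE_LENGTH = 100

def review_file(path):
    """Check one source file against the rules and return a list of findings."""
    findings = []
    with open(path, encoding="utf-8") as handle:
        for number, line in enumerate(handle, start=1):
            stripped = line.rstrip("\n")
            if len(stripped) > MAX_LINE_LENGTH:
                findings.append((path, number, f"line exceeds {MAX_LINE_LENGTH} characters"))
            if "TODO" in stripped:
                findings.append((path, number, "leftover TODO comment"))
    return findings

if __name__ == "__main__":
    # Usage: python review.py file1.py file2.py ...
    total = []
    for source in sys.argv[1:]:
        total.extend(review_file(source))
    for path, number, message in total:  # the "report" produced by the tool
        print(f"{path}:{number}: {message}")
    sys.exit(1 if total else 0)
```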
38930963
https://en.wikipedia.org/wiki/List%20of%20video%20transcoding%20software
List of video transcoding software
The following is a list of video transcoding software. An illustrative example of invoking one such tool, FFmpeg, from a script appears at the end of this entry.
Open-source
Shutter Encoder (Windows, OS X, Linux)
DVD Flick (Windows)
FFmpeg (Windows, OS X, Linux)
HandBrake (Windows, OS X, Linux)
Ingex (Linux)
MEncoder (Windows, OS X, Linux)
Nandub (Windows)
Thoggen (Linux)
VirtualDubMod (Windows)
VirtualDub (Windows)
VLC Media Player (Windows, Mac OS X, Linux)
Arista (Linux)
Avidemux (Windows, OS X, Linux)
Freeware
Freemake Video Converter (Windows)
FormatFactory (Windows)
Ingest Machine DV (Windows)
MediaCoder (Windows)
SUPER (Windows)
Windows Media Encoder (Windows)
XMedia Recode (Windows)
Zamzar (Web application)
ZConvert (Windows)
Commercial
Compressor (Mac OS X)
MPEG Video Wizard DVD (Windows)
ProCoder (Windows)
QuickTime Pro (Mac OS X, Windows)
Roxio Creator (Windows)
Sorenson Squeeze
Telestream Episode (Mac OS X, Windows)
TMPGEnc (Windows)
Wowza Streaming Engine with included Wowza Transcoder feature (Linux, Mac OS X, Windows)
Zamzar - Premium service (Web application)
Zencoder (Web application)
Elecard CodecWorks (Windows, Linux)
See also
Photo slideshow software
List of video editing software
Video transcoding software
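The following Python sketch shows a typical use of one of the tools listed above, FFmpeg, driven from a script. It assumes the ffmpeg binary is installed and on the PATH; the file names are placeholders, and the chosen options (H.264 video via libx264 at a given CRF, AAC audio) are just one common transcode recipe, not the only way to invoke the tool.

```python
import subprocess

def transcode(source: str, destination: str, crf: int = 23) -> None:
    """Re-encode a video file to H.264 video and AAC audio using FFmpeg."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", source,        # input file
            "-c:v", "libx264",   # encode video with the x264 H.264 encoder
            "-crf", str(crf),    # constant rate factor; lower values mean higher quality
            "-c:a", "aac",       # encode audio with the built-in AAC encoder
            destination,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Placeholder file names for illustration.
    transcode("input.mov", "output.mp4")
```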
19885491
https://en.wikipedia.org/wiki/Ubiquity%20%28software%29
Ubiquity (software)
Ubiquity is the default installer for Ubuntu and its derivatives. It runs from the Live CD or USB and can be started from the boot options of the device or from the desktop in Live mode. It was first introduced in Ubuntu 6.06 LTS "Dapper Drake". At program start, it allows the user to switch to a local language if they prefer. It is designed to be easy to use.
Features
Ubiquity consists of a configuration wizard that allows the user to easily install Ubuntu, and it shows a slideshow showcasing many of Ubuntu's features while the system is installing. Ubuntu 10.04 added a slideshow to Ubiquity that introduces users to Ubuntu. In Ubuntu 10.10 "Maverick Meerkat", the installer team made changes to simplify the tool and speed up the installation wizard. Ubiquity allows the user to have the installer automatically update the software while it is installing. If the user allows this, the installer will download the latest packages from the Ubuntu repository, ensuring the system is up to date. The installer also allows the user to have Ubiquity install closed-source or patented third-party software commonly needed by users, such as Adobe Flash and Fluendo's MP3 codec, while Ubuntu is installing. Ubiquity can begin to format the file system and copy system files after the user completes the partition configuration wizard, while the user is still entering data such as username, password and location, which reduces install time. When reviewing Ubuntu 10.10, Ryan Paul from Ars Technica said, “During my tests, I was able to perform a complete installation in less than 15 minutes.” Ubiquity also provides an interactive map to specify the time zone. At the bottom of the installer window, a progress bar is shown once the installation has started. At the end of the configuration stage, a slideshow is shown until the end of the install. The slideshow displays short summaries and screenshots of applications in Ubuntu. However, not all of the software shown is in the default installation; some of it is available to download from the Ubuntu Software Center. The slideshow is intended to make the user more aware of other applications available for the platform. Before Ubuntu 12.04 LTS, Ubiquity offered a migration assistant which brought over user accounts from Windows, OS X and other Linux distributions, along with e-mail and instant messaging accounts, bookmarks from Firefox and Internet Explorer, and the user's pictures, wallpapers, documents and music folders, although this was a Windows-only feature. At the Ubuntu Developer Summit for Ubuntu 12.10, the developers agreed to remove this feature, citing a lack of testing and a high number of bugs. Ubuntu created a server counterpart of Ubiquity named Subiquity, the installer for Ubuntu Server versions, included from Ubuntu 18.04. In October 2018, Lubuntu switched to using Calamares instead of Ubiquity.
Ports
Ubiquity allows OEMs and other Ubuntu derivatives to customise aspects of it, such as the slideshow and branding elements. Some Ubiquity ports include:
Kubuntu
Xubuntu
Ubuntu MATE
Linux Mint
Elementary OS
Peppermint Linux OS
Moreover, installation steps may be skipped by changing the install scripts, making it possible for OEMs and others to set special defaults or create an automated install routine.
See also
Anaconda
Calamares
Wubi
Debian-Installer
References
External links
Ubiquity
Ubiquity in Launchpad
Free software programmed in Python Linux installation software Ubuntu
861398
https://en.wikipedia.org/wiki/Macintosh%20Toolbox
Macintosh Toolbox
The Macintosh Toolbox implements many of the high-level features of the Classic Mac OS, including a set of application programming interfaces for software development on the platform. The Toolbox consists of a number of "managers," software components such as QuickDraw, responsible for drawing onscreen graphics, and the Menu Manager, which maintains data structures describing the menu bar. As the original Macintosh was designed without virtual memory or memory protection, it was important to classify code according to when it should be loaded into memory or kept on disk, and how it should be accessed. The Toolbox consists of subroutines essential enough to be permanently kept in memory and accessible by a two-byte machine instruction; however, it excludes core "kernel" functionality such as memory management and the file system. Note that the Toolbox does not draw the menu onscreen: menus were designed to have a customizable appearance, so the drawing code was stored in a resource, which could be on a disk.
Advent and implementation
On 68k systems
The original Motorola 68000 family implementation of the Macintosh operating system executes system calls using that processor's illegal opcode exception handling mechanism. Motorola specified that instructions beginning with 1111 and 1010 would never be used in future 68000 family processors, thus freeing them for use by an operating system. Further, they each had their own dedicated interrupt vector, separate from the generic illegal opcode handler. As 1111 was reserved for use by co-processors such as the 68881 FPU, Apple chose 1010 (hexadecimal A) as the prefix for operating system calls. Handling illegal instructions is known as trapping, so these special instructions were called A-traps. When the processor encounters such an instruction, it transfers control to the operating system, which looks up the appropriate task and performs it. There were two advantages to this mechanism:
It results in compact programs. Only two bytes are taken by every operating system access, in contrast to four or six when using regular jump instructions.
The table used to look up the appropriate function is stored in RAM. Then, even if the underlying code was stored in ROM, it could still be overridden (patched) by replacing the ROM memory address with a RAM address.
The system was further optimized by allotting some bits of the A-trap instruction to store parameters to the most common functions. For example, memory allocation is a very common task, so it should be expressed in as few bytes of code as possible. Sometimes the programmer wants to clear the memory block to zeros, so either the allocation function should take a boolean parameter, or there should be two allocation functions. To pass a parameter would require an additional two-byte instruction, which would be inefficient. Having two functions would require at least an extra four bytes of RAM used for the address in the function look-up table. The most efficient solution is to map multiple A-traps to the same subroutine, which then uses the A-trap as a parameter. This is true of the most commonly used subroutines. However, the Toolbox was composed of the less commonly used subroutines. The Toolbox was defined as the set of subroutines which took no parameters within the A-trap, and were indexed from a 1024-entry, 4-kilobyte dispatch table.
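The dispatch-table mechanism described above can be modelled compactly. The following Python sketch is purely conceptual, not 68000 assembly or Apple code; the trap index and handler names are hypothetical. It shows how keeping the lookup table in RAM allows a ROM routine to be patched by replacing a single table entry.

```python
# Conceptual model of the A-trap dispatch table (illustrative names and trap numbers,
# not Apple's actual ones). Each Toolbox A-trap carries an index into a RAM-resident
# table of function addresses; here the "addresses" are Python callables.

def rom_draw_menu_bar():
    return "menu bar drawn by ROM routine"

def patched_draw_menu_bar():
    # A patch loaded from disk can replace the ROM routine at run time.
    return "menu bar drawn by patched RAM routine"

TABLE_SIZE = 1024                              # the Toolbox table held 1024 entries (4 KB)
dispatch_table = [None] * TABLE_SIZE
dispatch_table[0x1A3] = rom_draw_menu_bar      # hypothetical trap index

def execute_a_trap(trap_index):
    """Simulate the trap handler: look up the routine in the RAM table and call it."""
    handler = dispatch_table[trap_index]
    if handler is None:
        raise RuntimeError("unimplemented trap")
    return handler()

print(execute_a_trap(0x1A3))                   # ROM behaviour
dispatch_table[0x1A3] = patched_draw_menu_bar  # patch: point the entry at new code
print(execute_a_trap(0x1A3))                   # patched behaviour, ROM left untouched
```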
Machines shipped with less than one megabyte of RAM use a single table of 512 entries, which corresponds to the 256-entry OS dispatch table of later ROM revisions.
On PowerPC systems
In 1994, Apple released Macintoshes using the PowerPC architecture, which lacked hardware support for the A-trap mechanism available on 68k systems. Because of their use in applying software patches, however, the dispatch tables were retained. The API library code underlying any Toolbox routine then does nothing except reference the dispatch table. The dispatch table linked only to emulated 68000 family code; Toolbox functions implemented in native PowerPC code have to first disable the emulator using the Mixed Mode Manager. For the sake of uniformity and extensibility, new function entries even continued to be added to the Toolbox after the PowerPC transition. An alternative mechanism did exist, however, in the Code Fragment Manager, which was used to load and dynamically link native PowerPC programs. The PowerPC system call facility, analogous to the A-trap mechanism, was used to interface with the Mac OS nanokernel, which offered few services directly useful to applications.
Functionality
Programming interfaces
The Toolbox is composed of commonly used functions, but not the most commonly used functions. As a result, it grew into a hodgepodge of different API libraries. The Toolbox encompasses most of the basic functionality which distinguished the Classic Mac OS. Apple's references “Inside Macintosh: Macintosh Toolbox Essentials” and “Inside Macintosh: More Macintosh Toolbox”, similarly vague in scope, also document most of the Toolbox.
Use in booting
Because much of the Toolbox is implemented in ROM, alongside the computer's firmware, it was convenient to use as a bootloader environment. In conjunction with resources stored on the ROM chip, the Toolbox can turn the screen gray, show a dialog box with the signature "Welcome to Macintosh" greeting, and display the mouse cursor. By using the Toolbox to help boot the machine, a rudimentary Mac-like environment can be initialized before ever loading the System suitcase from disk (in fact before ROMs on NuBus cards were executed), which is when the decision to use 24-bit or 32-bit addressing has to be made. (System 7's support for 32-bit addressing requires 32-bit clean ROMs, as older Mac ROMs do not support this.) Diagnostics like those resident in the BIOS of IBM PC compatibles are not needed, since the Macintosh performs most of its diagnostics during POST and automatically reports errors via the "Sad Mac" codes. However, although the boot-up environment resembles the actual operating system, the two should not be confused as identical. Although the "Classic Mac OS" boot process is convoluted and largely undocumented, it is no more limited than an IBM PC compatible's BIOS. Like a PC's master boot record, a ROM-based Mac reads and executes code from the first blocks ("boot blocks") of the disk partition selected as the boot device. The boot blocks then verify that a suitable rudimentary environment exists, and use it to load the System suitcase. A different operating system with a different file system can boot by simply using its own code in the boot blocks. This system was not used for PowerPC Linux, however, because Open Firmware in New World ROM machines requires a bootloader within an HFS filesystem, a reason having nothing to do with the Toolbox or "old-fashioned" Macs in general.
More narrowly, the Startup Disk control panel in the Classic Mac OS and macOS only allows the user to select a mounted filesystem with very particular constraints.
Legacy
In Mac OS X, the Toolbox is not used at all, though the Classic Environment loads the Toolbox ROM file into its virtual machine. Much of the Toolbox was restructured and implemented as part of Apple's Carbon programming API, allowing programmers familiar with the Toolbox to port their program code more easily to Mac OS X.
See also
Mac OS memory management
References
External links
Apple's Inside Macintosh: Macintosh Toolbox Essentials developer's guide (PDF)
Classic Mac OS Macintosh firmware
16288590
https://en.wikipedia.org/wiki/GanttProject
GanttProject
GanttProject is GPL-licensed (free software), Java-based project management software that runs under the Microsoft Windows, Linux and Mac OS X operating systems. The project was initiated in January 2003 at the University of Marne-la-Vallée (France) and was managed at first by Alexandre Thomas, since replaced by Dmitry Barashev.
Features
Compared to other full-fledged project management software, GanttProject is designed with the KISS principle in mind. It features basic project management functions such as a Gantt chart for scheduling tasks and resource management using resource load charts. It can only handle days, not hours. It does not have features like cash flow, messaging, document control, and resource leveling. It has a number of reporting options (MS Project, HTML, PDF, spreadsheets). The major features include:
Create Work Breakdown Structure
Task Hierarchy and Dependencies
Gantt Chart
Resource Load Chart
Baselines saving and comparing
Generation of PERT Chart
PDF and HTML Reports
MS Project import/export with file formats MPX (*.mpx) and MSPDI (*.xml) (XML-based data interchange format since Microsoft Project 2002)
Exchange data with spreadsheet applications via CSV and Excel formats
WebDAV-based groupwork
Project file format is XML
Vacation and holidays management
Available in more than 20 languages
Reception
GanttProject has received a number of mostly positive reviews on Capterra. As of January 2014, GanttProject 2.0.10 for Microsoft Windows had been downloaded 1,600,000 times from Google Code, with an average daily download count of about 1,500. InfoWorld reviewed GanttProject favorably. As of June 2011, the number of weekly downloads of GanttProject (ver 2.0.9) at SourceForge was third among such programs: first was OpenProj (ver 1.4), second was JFreeChart. Note: since GanttProject ver 2.0.10 is no longer posted at SourceForge, this download ranking is not relevant. The user rating at cnet/Download is 3.5 stars (MS Project is 4.0 stars).
Gallery
See also
List of project management software
Project management software
Project management
Project planning
Project Portfolio Management
Resource Management
Ganttproject Resources
References
External links
GanttProject Official Website
Project home page at GitHub, hosted from ver 2.7.1
Project home page at Google Code, hosted from ver 2.0.10
SourceForge entry, hosted through ver 2.0.9
GanttProjectAPI
Free project management software Business software for Linux