27481
https://en.wikipedia.org/wiki/Section%20508%20Amendment%20to%20the%20Rehabilitation%20Act%20of%201973
Section 508 Amendment to the Rehabilitation Act of 1973
In 1998, the US Congress amended the Rehabilitation Act to require Federal agencies to make their electronic and information technology accessible to people with disabilities. Section 508 was enacted to eliminate barriers in information technology, to make available new opportunities for people with disabilities, and to encourage development of technologies that will help achieve these goals. The law applies to all Federal agencies when they develop, procure, maintain, or use electronic and information technology. Under Section 508, agencies must give employees with disabilities and members of the public access to information that is comparable to the access available to others. History Section 508 was originally added as an amendment to the Rehabilitation Act of 1973 in 1986. The original section 508 dealt with electronic and information technologies, in recognition of the growth of this field. In 1997, the Federal Electronic and Information Technology Accessibility and Compliance Act was proposed in the U.S. legislature to correct the shortcomings of the original section 508; the original Section 508 had turned out to be mostly ineffective, in part due to the lack of enforcement mechanisms. In the end, this Federal Electronic and Information Technology Accessibility and Compliance Act, with revisions, was enacted in 1998 as the new Section 508 of the Rehabilitation Act of 1973. Section 508 addresses legal compliance through the process of market research and government procurement, and also has technical standards against which products can be evaluated to determine whether they achieve technical compliance. Because a technology can meet the legal provisions and be legally compliant (for example, when no accessible product exists at the time of purchase) yet still not meet the United States Access Board's technical accessibility standards, users are often confused by these two issues. Additionally, compliance can be evaluated only by reviewing the procurement process and the documentation used when making a purchase or contracting for development; together with ongoing changes in technologies and in the standards themselves, this requires a more detailed understanding of the law and the technology than at first seems necessary. There is nothing in Section 508 that requires private web sites to comply unless they are receiving federal funds or are under contract with a federal agency. Commercial best practices include voluntary standards and guidelines such as the World Wide Web Consortium's (W3C) Web Accessibility Initiative (WAI). Automatic accessibility checkers (engines) such as IBM Rational Policy Tester and AccVerify refer to Section 508 guidelines but have difficulty accurately testing content for accessibility. In 2006, the United States Access Board organized the Telecommunications and Electronic and Information Technology Advisory Committee (TEITAC) to review and recommend updates to its Section 508 standards and Telecommunications Act Accessibility Guidelines. TEITAC issued its report to the Board in April 2008. The Board released drafts of proposed rules based on the committee's recommendations in 2010 and 2011 for public comment. In February 2015, the Board released a notice of proposed rulemaking for the Section 508 standards. In 2017, the Section 508 Refresh came into effect; it was updated a year later, in January 2018, to restore TTY access provisions. The refresh essentially aligned the web-related provisions with the W3C's WCAG 2.0 AA criteria. 
The law Qualifications Federal agencies can be in legal compliance and still not meet the technical standards. Section 508 §1194.3, General exceptions, describes exceptions for national security (e.g., most of the primary systems used by the National Security Agency (NSA)), incidental items not procured as work products, individual requests for non-public access, fundamental alteration of a product's key requirements, or maintenance access. Where implementation of such standards would cause undue hardship to the Federal agency or department involved, that agency or department is required to supply the data and information to covered persons with disabilities by alternative means that allow them to make use of such information and data. Section 508 requires that all Federal information that is accessible electronically must be accessible for those with disabilities. This information must be accessible in a variety of ways, which are specific to each disability. The Rehabilitation Act of 1973 requires that all federal agencies provide individuals with disabilities with reasonable accommodation, which falls into three categories: (1) modifications and adjustments must be made for a person with a disability to be considered for a job, (2) modifications and adjustments must be made in order for an individual to execute essential functions of the job, and (3) modifications or adjustments must be made in order to enable employees to have equal benefits and privileges. Some users may need certain software in order to be able to access certain information. People with disabilities are not required to use specific wording when submitting a reasonable accommodation request when applying for a job. An agency must be flexible in processing all requests; this means that agencies cannot adopt a "one-size-fits-all" approach, and each request should be handled on a case-by-case basis. Provisions The original legislation mandated that the Architectural and Transportation Barriers Compliance Board, known as the Access Board, establish standards for accessibility for such electronic and information technologies; the final standards were published in December 2000, were approved in April 2001, and became enforceable on June 25, 2001. The latest information about these standards and about support available from the Access Board in implementing them, as well as the results of surveys conducted to assess compliance, is available from the Board's newsletter Access Currents. The Section 508 standards, tools, and resources are available from the Center for Information Technology Accommodation (CITA), in the U.S. General Services Administration's Office of Government-wide Policy. Summary of Section 508 technical standards Software Applications and Operating Systems: covers accessibility of software, e.g., keyboard navigation and focus provided by a web browser. Web-based Intranet and Internet Information and Applications: assures accessibility of web content, e.g., text descriptions for any visuals, so that users with a disability, or users who need assistive technology such as screen readers and refreshable Braille displays, can access the content. Telecommunications Products: addresses accessibility for telecommunications products such as cell phones or voice mail systems. It includes addressing technology compatibility with hearing aids, assistive listening devices, and telecommunications devices for the deaf (TTYs). 
Videos or Multimedia Products: includes requirements for captioning and audio description of multimedia products such as training or informational multimedia productions. Self Contained, Closed Products: products where end users cannot typically add or connect their own assistive technologies, such as information kiosks, copiers, and fax machines. This standard links to the other standards and generally requires that access features be built into these systems. Desktop and Portable Computers: discusses accessibility related to standardized ports and mechanically operated controls such as keyboards and touch screens. Practice When evaluating a computer hardware or software product that could be used in a U.S. government agency, information technology managers now look to see if the vendor has provided an Accessibility Conformance Report (ACR). The most common ACR is known as the Voluntary Product Accessibility Template® (VPAT®), although some departments historically promoted a Government Product Accessibility Template (GPAT). The VPAT template was created by the Information Technology Industry Council (ITI). A VPAT lists potential attributes of the product that affect the degree to which it is accessible. One issue is whether the software's functions can be executed from the keyboard, or whether they require the use of a mouse, because keyboards are usable by a wider spectrum of people. Because colorblindness is common, another issue is whether the device or software communicates necessary information only by differences in displayed color. Because not all users can hear, another issue is whether the device or software communicates necessary information only in an auditory way. If the product can be configured to the user's preferences on these dimensions, that is usually considered a satisfactory adaptation to the Section 508 requirements. One challenge to the adoption of open-source software in the U.S. government has been that there is no vendor to provide support or write a VPAT, but a VPAT can be written by volunteers if they can find the necessary information. See also Computer accessibility Section 504 of the Rehabilitation Act Universal usability Web Content Accessibility Guidelines Web accessibility References External links CITES/DRES Web Accessibility Best practices United States Access Board, the Federal Agency responsible for Section 508 technical standards Section 508 Checklist from WebAIM.org USPS AS-508-A, Section 508 Technical Reference Guide in HTML Format Federal Register December 2000 Section 508 Electronic and information technology accessibility standards Federal Register April 2005 Section 508 related Federal Acquisition Rules Section508.gov - free 508 training buyaccessible.gov - the place for Section 508 procurement assistance WCAG & Section 508 accessibility - the basic Section 508 and WCAG requirements The ultimate guide to website accessibility for Section 508 and WCAG 2.1 A/AA (https://www.boia.org/ultimate-guide-to-web-accessibility/) Web accessibility United States federal civil rights legislation Disability legislation 1973 in law Accessible Procurement
55531097
https://en.wikipedia.org/wiki/Vector%20General
Vector General
Vector General (VG) was a series of graphics terminals and the name of the Californian company that produced them. They were first introduced in 1969 and were used in computer labs until the early 1980s. The terminals were based on a common platform that read vectors provided by a host minicomputer and included hardware that could perform basic mathematical transformations in the terminal. This greatly improved the performance of operations like rotating an object or zooming in. The transformed vectors were then displayed on the terminal's built-in vector monitor. In contrast to similar terminals from other vendors, the Vector General systems included little internal memory. Instead, they stored vectors in the host computer's memory and accessed them via direct memory access (DMA). Fully equipped VG3D terminals cost about $31,000, including a low-end PDP-11 computer, compared to machines like the IBM 2250, which cost $100,000 for the terminal alone. Among a number of famous uses known within the computer graphics field, it was a VG3D terminal connected to a PDP-11/45 that was used to produce the "attacking the Death Star will not be easy" animations in Star Wars. Description Hardware A common approach in the late 1960s to improving the performance of graphics displays, especially in 3D, was to use special terminals that held a list of vectors in internal memory and then used hardware or software running in the display controller to provide basic transformations like rotation and scaling. As these transformations were relatively simple, they could be implemented in the terminal for relatively low cost, thereby avoiding spending time on the host CPU to perform these operations. Systems performing at least some of these operations included the IDI, Adage, and Imlac PDS-1. A key innovation in the VG series terminals was the use of direct memory access (DMA) to allow them to access the host computer's memory. This meant that the terminals did not need much storage of their own, and gave them the ability to rapidly access the data without it having to be copied over a slower link, as in the serial-based Tektronix 4010 or similar systems. The downside to this approach was that it could only be used on machines that offered DMA, and only through a relatively expensive adaptor. The basic concept was that the host computer would run calculations to produce a series of points for the 2D or 3D model and express them as 12-bit values, normally stored in 16-bit words with status bits stuffed into the extra bits. The terminal would then periodically interrupt the computer, 30 to 60 times a second, and quickly read out and display the data. The points were read one by one into local memory registers for temporary storage while mathematical functions were applied to scale, translate and (optionally) rotate them; when the final values had been calculated, the points were sent to the cathode ray tube (CRT) for display. There were three different models of the coordinate transformation hardware. The most basic system included the hardware needed to pan and zoom 2D images, in which case the terminal containing it would be known as a Vector General 2D. Another version, known as the 2DR (for Rotate), added the ability to rotate the 2D image around an arbitrary point. The most expensive option was the 3D, which provided rotation, pan and zoom on 3D vectors. Another option, which could be added to any of these models and was not reflected in the name, was a character generator. 
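As a rough illustration of the host-resident display list described above, the following C sketch packs 12-bit coordinates into 16-bit words with status bits in the unused high bits. The coordinate range (0-4095) comes from the article, but the flag positions and names are assumptions made for the example; the text does not document the terminal's actual word format.

```c
/* Minimal sketch of a host-side display list, assuming a hypothetical
 * layout: a 12-bit coordinate (0-4095) in the low bits of each 16-bit
 * word, with status bits "stuffed" into the unused high bits.  The flag
 * positions below are illustrative only, not the documented Vector
 * General format. */
#include <stdint.h>
#include <stdio.h>

#define COORD_MASK   0x0FFFu   /* 12-bit coordinate field           */
#define FLAG_BEAM_ON 0x1000u   /* hypothetical "draw" status bit    */
#define FLAG_END     0x2000u   /* hypothetical end-of-list marker   */

static uint16_t pack_point(uint16_t coord, uint16_t flags)
{
    return (uint16_t)((coord & COORD_MASK) | flags);
}

int main(void)
{
    /* The host keeps this list in its own memory; the terminal re-reads
     * it over DMA 30 to 60 times per second to refresh the CRT. */
    uint16_t display_list[] = {
        pack_point(100, 0),                         /* move to X = 100  */
        pack_point(200, 0),                         /* move to Y = 200  */
        pack_point(3000, FLAG_BEAM_ON),             /* draw to X = 3000 */
        pack_point(4000, FLAG_BEAM_ON | FLAG_END),  /* draw to Y = 4000 */
    };
    int count = (int)(sizeof display_list / sizeof display_list[0]);

    for (int i = 0; i < count; i++)
        printf("word %d: 0x%04X (coordinate %u, flags 0x%X)\n", i,
               (unsigned)display_list[i],
               display_list[i] & COORD_MASK,
               display_list[i] & ~COORD_MASK);
    return 0;
}
```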
The square CRTs were driven directly from the output of the transformation hardware, rather than using a traditional raster scanning method. The company referred to this type of operation as "random scan", although it is universally referred to as a vector monitor in modern references. Two basic CRT models were available, differing in diagonal screen size. The 21-inch model was also available in a special "high speed" version which improved the drawing rates. The CRTs used electrostatic deflection, not magnetic deflection as in televisions, to provide high-speed scanning performance. Several different input devices could be connected to the system. The most common was a 70-key keyboard, while others included a bank of momentary pushbutton switches with internal register-controlled lights, a graphics tablet, a light pen, a dial box, and a joystick. The system as a whole was quite large, about the size of a small refrigerator. Drawing concepts Vectors were represented logically by two endpoints in space. Each point was defined by two or three 12-bit values, thereby representing a space from 0 to 4,095 in X, Y and (optionally) Z. The terminal had three 12-bit registers to hold the values while they were being manipulated. The system allowed vectors to be represented in a number of ways in memory. The most basic mode, "absolute", required two points, one for each end of the vector. "Relative" vectors were expressed as offsets from the last set of values, so only one point was needed to define a vector, the first point being the endpoint of the last one. This could halve the number of points needed to describe a complete drawing, if the data was continuous like a line chart. "Incremental" vectors further reduced memory by using only 6 bits for each point, allowing the data to be packed into less memory in the host. The system could be set to add the values to the high- or low-order 6 bits of the last value, allowing gross or fine movement. Finally, "autoincrementing" vectors further reduced the memory requirements by requiring only one value to be stored, with the others being incremented by a preset amount as each new point was read in. These were similar to relative vectors, with one of the two axes always having the same relative offset. The system also had a separate circuit to generate circular arcs, as opposed to having to send in a series of points. The display was capable of producing 32 different intensity levels. This could be programmed directly by setting a register in the terminal, but was more commonly used in a programmed mode in 3D. In this mode the intensity was changed automatically as the vector was drawn, with items deeper in the Z dimension drawn less intensely. This produced a depth cue that made the front of the object look brighter on the display. The rapidity of this change was set through the ISR register. A separate 12-bit PS register held the scale multiplier. When this value was not used, the coordinate system represented a physical area about twice as large as the screen, allowing it to translate the image to provide scrolling. When a value was placed in this register, the coordinates in the vector registers and the character drawing system were multiplied by this value, producing a zoom effect. 
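To make the memory savings of these encodings concrete, the short C sketch below counts the 16-bit words needed to store a connected 2D polyline under each scheme. The per-point costs (one word per 12-bit coordinate, and roughly one word per pair of packed 6-bit incremental deltas) are simplifying assumptions drawn from the description above, not the terminal's exact instruction sizes.

```c
/* Back-of-the-envelope comparison of the vector encodings described
 * above, for a connected 2D polyline of N points.  Word counts are
 * simplified assumptions; the real instruction stream also carried
 * mode and register-control words. */
#include <stdio.h>

int main(void)
{
    const long n = 1000;           /* points in the polyline            */
    const long segments = n - 1;   /* vectors joining those points      */

    /* Absolute: every vector stores both endpoints, X and Y each. */
    long absolute_words = segments * 2 /* points */ * 2 /* axes */;

    /* Relative: only the new endpoint is stored; the previous endpoint
     * is implied, roughly halving the data for a continuous line. */
    long relative_words = n * 2;

    /* Incremental: 6-bit X and Y deltas packed together, so roughly one
     * word per point. */
    long incremental_words = n;

    printf("absolute:    %ld words\n", absolute_words);
    printf("relative:    %ld words\n", relative_words);
    printf("incremental: %ld words\n", incremental_words);
    return 0;
}
```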
The optional character generator drew characters using a set of five hardware-defined shapes: a circle, a square with a vertical line in the middle, a square with a horizontal line in the middle, an hourglass shape oriented vertically, and a similar one oriented horizontally. By turning the beam on and off as each of these shapes was being drawn by the hardware, the system could draw any required character. For instance, the letter C was drawn using the O shape and turning the beam off while it was on the right. The letter D would be drawn using the O shape and turning it off while it was on the left, and then drawing the vertical-line box with the beam turned on only when the center vertical bar was being drawn. Between one and three such "draws" were needed to produce a complete character. The system included a number of Greek letters and mathematical symbols in addition to the normal ASCII characters. Programming The terminal periodically read the main memory of the host computer using DMA to refresh the display. Further communications were handled through a single bidirectional I/O port after creating an interrupt request with the request details in the PIR register. Settings and instructions were handled by sending data through the I/O port to and from one of the terminal's 85 registers. For instance, the host could set the value of the PS register, causing the image to zoom. It would do this by raising an interrupt whose 16-bit message contained the number of the register to be set, 17 in this case, with the terminal then reading the new value over the I/O channel. Reads were handled using a similar process, but the terminal responded to the interrupt by sending a 16-bit message back instead. The base address for the start of the vector list and the offset within it were in registers 14 and 15. This allowed the display to perform a sort of "page flipping" by writing out separate sets of points within the computer's memory and then changing the display all at once by changing the value of register 14 to point to a different base address. This was limited by the amount of memory available on the host computer. The display instructions had a variety of formats that allowed the construction of not only vectors, but also various commands. For instance, there were instructions to load data into a given register, consisting of two 16-bit words, the first with the register details and the second with the value. Other instructions performed logical OR or AND on register values. The display instructions themselves could be mixed with these operations, so the system could, for instance, begin displaying a selection of items, cause a lamp to light, rotate the image, and then draw more vectors. Notable uses The VG3D is historically notable for its use in Star Wars, but is also well known for its early role in the development of computer-aided design. In Star Wars Larry Cuba produced two segments of computer animation for Star Wars on a PDP-11/45 with a VG3D terminal. To film the images frame by frame, a wire was connected between one of the lights on the pushbutton panel and the shutter trigger on the camera. The light was triggered by the host computer, causing the camera to release the shutter once and advance the film a single frame. The first segment, which shows the exterior of the Death Star, is based entirely on the VG3D's internal display capabilities. 
The model consisted of a simple series of 3D points representing the outline of the station held in the PDP-11's memory, constructed algorithmically using the associated GRASS programming language's curve generation code. To move and rotate the image as seen in the film, the associated GRASS program would load new rotation and zoom figures into the terminal's registers and then trigger the camera. The second segment shows the view flying down the trench in the final attack, first from above and then from the pilot's perspective. This was much more difficult to create because the terminal did not support the calculation of perspective, which was required in this sequence. The physical model of the trench used during filming was made up of a series of six features which were duplicated many times and then assembled in different ways to produce a single long model. Cuba digitized each of these six features from photographs and then combined them in different configurations into over 50 U-shaped sections. For each frame, five of these sections were stacked in depth and then the perspective calculations were applied. The addition of new sections as the animation progresses can be seen in the film. This was then sent to the terminal as a static image and the camera was triggered. Each frame took about two minutes to render. In the US Army Mike Muuss recounts that the US Army's Ballistic Research Laboratory had purchased a Cyber 173 and three workstations consisting of a VG3D terminal and a PDP-11/34 to drive it. These were intended to be connected together, but no one was able to get this to work, and in the end, the VG workstations were left unused. He was bothered to see all of this hardware being wasted, so in 1979 he hooked up one of the workstations and created a program that produced a rotating 3D cube. Another programmer had been given a set of 3D points of the XM1 tank design and was writing code to output it to a Calcomp plotter. He asked Muuss if they could get it displayed on the VG terminals instead, so they could rotate it. He first output it as a static image on a Tektronix 4014, but the next night managed to get the display onto the VG3D where it could be easily spun around using the internal vector hardware. No one in the Army had seen anything like this before. The next day the commanding general of ARRADCOM flew in to see it live. Over the next two weeks Muuss was constantly giving demos of the system to a parade of officers. The demo became so well known that Muuss was able to begin development of BRL-CAD. Notes References Citations Bibliography External links Making of the Computer Graphics for Star Wars, short film made by Larry Cuba to illustrate the process of making the Star Wars animations. See also LDS-1 (Line Drawing System-1) Graphical terminals
36818179
https://en.wikipedia.org/wiki/Argentum%20Backup
Argentum Backup
Argentum Backup is a backup software program for Microsoft Windows, produced by Argentum. Argentum Backup copies files into Zip compressed folders and also provides native file copying. Backup copies can be created both manually and automatically on a schedule. The product features a number of backup task templates to back up common file locations on computers with Microsoft Windows. Argentum Backup has won the PC Magazine Editors' Choice award and the PC World Best Buy award. Features The company and reviewers emphasize the overall raw simplicity and ease of use of the user interface of Argentum Backup software, aimed primarily at beginners and novice computer users. The company also stresses the minimalistic nature of the program and its very fast backup operation, attributed to a proprietary backup engine written in highly optimized and profiled C++/STL code. Other notable features include: Support for 64-bit extensions (Zip64) to the Zip format, allowing large, multi-gigabyte Zip backups. Strong 128-bit and 256-bit AES symmetric encryption to protect sensitive data when backing up into Zip files; AES 256-bit encryption provides significantly greater cryptographic security than the traditional Zip encryption. A set of built-in backup templates to recognize and back up locations of valuable data on Windows XP, Windows 7, Windows 10, and other Microsoft Windows platforms. A stacking feature that keeps several backup copies, each for a particular point in time, so that any of them can be restored whenever required. XHTML reports of backup activity with actions and problems logged and detailed summary statistics shown. See also Backup software Minimalism External links Argentum Backup official site References Backup software for Windows Windows software Shareware
23000404
https://en.wikipedia.org/wiki/Flight%20of%20the%20Fire%20Thief
Flight of the Fire Thief
Flight of the Fire Thief is a novel written by Terry Deary, and is the second installment of The Fire Thief Trilogy. The book continues the story of Prometheus, the Titan who stole fire from the gods. It follows on directly from The Fire Thief. Prometheus goes back to 1795 to try to find a hero who is worshipped in a temple in the first book, only to find out he did not travel back far enough. He returns to Eden City to find it under siege by the Wild People, just like Troy. He helps a Dr. Dee and his daughter, Nell, to end the siege and help the Wild People get their princess back. However, the Avenger is still looking for him, and has a team of Achilles and Paris, also from Troy, and a monster from the underworld, a 50-headed, 100-armed Hecatonchires. To get past the walls, Hera attempts to repeat the Trojan Horse method, but that fails. Nell eventually sneaks in and launches the spring cannon. The Avenger's team eventually turns on him, allowing Prometheus to escape once again. Other books in the series: The Fire Thief (2005) and The Fire Thief Fights Back (2007) Characters Prometheus: Also called Theus, he stole fire from the gods and was chained to a rock and had his liver eaten every day for 200 years. Heracles saved him, and he was then given the task of finding a human hero. Helen: Also called Nell. She helps her father run scams with stunts and a flying balloon. They land in Eden City after a failed stunt. After a failed attempt to rob the city, she decides to help the Wild People get their princess out of the city. She returns to Eden City in the year 1857 to wait for Theus. Dr. Dee: A showman and conman. He and his daughter sell cheap food and show stunts to con people out of their money. He comes up with a plan to rob the people of the city, but lands in the plains of the Wild People instead. Running Bear: Young chief of the Wild People. After he refuses to give up his people's land, his sister is kidnapped. He then sets up a siege to get her back. Mayor Makepiece: Mayor of Eden City. He tried to drive the Wild People off their land to sell it for a ridiculous amount of money. Sheriff Spade: Sheriff of Eden City and Mayor Makepiece's partner. Zeus: Prometheus's cousin. He punished Prometheus by chaining him to a rock to have his liver eaten every day. He then gave him the task of finding a human hero so as to call off the Avenger. Hera: Zeus's wife. She gets bored watching the siege of Troy, and orders Zeus to end it quickly. She then tries to end the siege of Eden City, but her plans are quickly thwarted. Achilles: A Greek hero of the Trojan War. He was killed by Paris and they both are taken to the underworld by the Avenger, only to strike a deal with him to help him find Theus. Paris: Trojan Prince. He is killed by a poison arrow soon after he killed Achilles. He helps the Avenger to hunt down Theus. Hecatonchires: Called Hec for short. He helps the Avenger find Theus, but then helps Theus and his friends get into Eden City and get the princess out. He eventually finds a planet with more creatures that look exactly like him. 2006 British novels Fiction set in 1795 Prometheus
4744024
https://en.wikipedia.org/wiki/Don%20Buford
Don Buford
Donald Alvin Buford (born February 2, 1937) is an American former professional baseball player, scout, coach and manager. He played in Major League Baseball as an outfielder, most notably as the leadoff hitter for the Baltimore Orioles dynasty that won three consecutive American League pennants from 1969 to 1971 and won the World Series in 1970. He also played for the Chicago White Sox and played in the Nippon Professional Baseball league from 1973 to 1976. Buford also played as an infielder and was a switch hitter who threw right-handed. In 1993, Buford was inducted into the Baltimore Orioles Hall of Fame. College career Buford was born in Linden, Texas and raised in Los Angeles, California. After graduating from Susan Miller Dorsey High School, he played college baseball for the USC Trojans baseball team under legendary coach Rod Dedeaux. In 1958, he played on the Trojans' College World Series championship team with Ron Fairly and future baseball executive Pat Gillick. Buford was also a running back on the USC football team. His sons Don Buford, Jr. and Damon Buford also played for the USC Trojans. Buford is a member of Kappa Alpha Psi fraternity. Professional career In his major league career, Buford batted .264 with 93 home runs, 418 RBIs, 718 runs scored and 200 stolen bases in 1286 games played. Primarily a leadoff hitter, he grounded into only 34 double plays during his big-league career (4553 at bats) and holds the Major League record for the lowest GIDP rate, averaging one in every 138 at bats. His career total is two fewer than Jim Rice's single-season record of 36, set in 1984, and 316 fewer than Cal Ripken's career record mark of 350 GIDPs. Note: a leadoff hitter faces fewer double-play situations, but Buford did not lead off every game in his career. Chicago White Sox He broke into the majors as an infielder who played both second base and third base, becoming the White Sox' regular at second base in 1965 (after sharing the position with Al Weis in 1964) and at third base in 1966. In 1966, he stole a career-high 51 bases (one fewer than the American League leader, Bert Campaneris) and led the AL in sacrifice hits with 17, while establishing himself as one of the league's top leadoff hitters. In 1967 Buford and Ken Berry tied for the team lead with a .241 batting average on a White Sox team that battled the Boston Red Sox, Detroit Tigers and Minnesota Twins for the American League pennant, which the Red Sox won on the final day of the regular season. The White Sox were eliminated from pennant contention (perhaps due, in large part, to a weak offense; they led the Majors with a 2.45 earned run average, but batted only .225) in the final week of the season after losing a doubleheader to the lowly Kansas City Athletics on September 27. Baltimore Orioles Buford was traded along with Bruce Howard and Roger Nelson to the Baltimore Orioles for Luis Aparicio, Russ Snyder and John Matias on November 29, 1967. In 1968 he batted .282 with 15 home runs in a lineup that also featured the likes of Frank Robinson, Brooks Robinson, Boog Powell, Davey Johnson and Paul Blair. In 1969 Buford hit a career-high .291 as the Orioles won the American League pennant. In the first game of the World Series against the New York Mets, Buford hit a leadoff home run against fellow ex-USC Trojan Tom Seaver, the first home run to lead off a World Series. 
(Dustin Pedroia and Alcides Escobar are the only other players to lead off a World Series with a home run, for the Boston Red Sox in 2007 and the Kansas City Royals in 2015, respectively.) Buford also drove in another run with a double as the Orioles won 4-1. However, he went 0-for-16 over the next four games, all won by the Mets for a seemingly impossible Series victory. In 1970 Buford batted .272 with 17 home runs and a career-high 109 walks. The Orioles gained redemption in the World Series, which they won over the Cincinnati Reds in five games. Buford, playing in four of those games, went 4-for-15, including a home run in Game Three, which Baltimore won 9-3. In 1971 Buford batted .290 with a career-high 19 home runs. He was also selected to the All-Star team for the only time in his career. Again the Orioles went to the World Series; this time, however, the Pittsburgh Pirates defeated them in seven games. Buford collected six hits in this Series; two of them were home runs. In each of the Orioles' three pennant-winning seasons Buford scored 99 runs, leading the American League in that category in 1971. Buford was the first Baltimore Oriole to homer from both sides of the plate in the same game. He accomplished this feat on April 9, 1970 in a 13-1 win over the Cleveland Indians. Buford also had the dubious distinction of being the first Oriole to strike out five times in one game, on August 26, 1971. However, his Orioles defeated his former team, the Chicago White Sox, 8–7. Japan After the 1971 season the Orioles played an exhibition series in Japan. After slumping to .206 in 1972, Buford returned to Japan, where he had been known as "The Greatest Leadoff Man in the World" during the Orioles' tour, to play professionally. In four seasons, from 1973 to 1976, he hit .270 with 65 home runs and 213 RBI. In 1973 and 1974 he was voted one of the top nine players in Japan and received honors playing in All-Star Games. Post retirement In 2006, Buford was the manager of the Daytona Cubs of the Florida State League. He had also served on Frank Robinson's coaching staff with the Orioles, San Francisco Giants and Washington Nationals. Previously, he had held front office and other minor league positions with the Orioles: he managed the organization's Rookie League team (Bluefield), its Class A (Aberdeen, Maryland), high Class A (Frederick, Maryland) and Double-A (Bowie) affiliates, and served as assistant farm director and then farm director. Buford's son Damon Buford also played in the major leagues, playing with the Orioles, Mets, Texas Rangers, Boston Red Sox and Chicago Cubs from 1993 to 2001. Buford's oldest son Don Buford, Jr. also played professional baseball in the Baltimore Orioles organization for four years. He is now an internationally recognized orthopedic surgeon specializing in sports medicine and shoulder surgery. Buford remains one of the most respected individuals to teach the game of baseball. His number 9 was retired by the Daytona Cubs after the 2006 season. In October 2012, Don Buford, Sr. accepted a new position managing Major League Baseball's Urban Youth Academy in Compton, California. The academy focuses on baseball and softball training and education and is free to participants. He is now working on his own community organization, the Educational Sports Institute, which is based in Watts. In 2008, Buford was inducted into the International League Triple A Hall of Fame. In 2001, Buford was inducted into the USC Athletic Hall of Fame. In 1993, Buford was inducted into the Baltimore Orioles Hall of Fame. 
See also List of Major League Baseball annual runs scored leaders References External links SABR Biography Project Retrosheet 1937 births Living people African-American baseball coaches African-American baseball players American expatriate baseball players in Japan American League All-Stars Baltimore Orioles coaches Baltimore Orioles executives Baltimore Orioles players Baseball players from Texas Charleston White Sox players Chicago White Sox players Indianapolis Indians players International League MVP award winners Lincoln Chiefs players Lynchburg White Sox players Major League Baseball bench coaches Major League Baseball farm directors Major League Baseball first base coaches Major League Baseball infielders Major League Baseball left fielders Milwaukee Brewers scouts Minor league baseball managers Nankai Hawks players People from Linden, Texas Rapiños de Occidente players San Diego Padres (minor league) players San Francisco Giants coaches Savannah White Sox players Baseball players from Los Angeles Taiheiyo Club Lions players USC Trojans baseball players USC Trojans football players Washington Nationals coaches Susan Miller Dorsey High School alumni 21st-century African-American people 20th-century African-American sportspeople
44235966
https://en.wikipedia.org/wiki/QPR%20Software
QPR Software
QPR Software Plc is a Finnish software firm that provides management software products. QPR Software specializes in process mining, process and enterprise architecture modeling, and performance management. Founded in 1991 and headquartered in Helsinki, QPR Software is listed on the Helsinki Stock Exchange. QPR offers software products in more than 50 countries. According to Gartner (in Harvard Business Review), QPR ProcessAnalyzer is "one of the oldest and more comprehensive tool sets in this [process mining] space". QPR's software products: QPR ProcessAnalyzer, an enterprise-grade software product for advanced process mining. QPR EnterpriseArchitect, enterprise architecture modeling software. QPR Metrics, a tool for measuring strategy execution and performance management, also supporting the balanced scorecard methodology. QPR ProcessDesigner, a tool for quality assurance, business process modelling and Six Sigma software packages References Companies listed on Nasdaq Helsinki 1991 establishments in Finland
36268255
https://en.wikipedia.org/wiki/Android%20Jelly%20Bean
Android Jelly Bean
Android Jelly Bean is the codename given to the tenth version of the Android mobile operating system developed by Google, spanning three major point releases (versions 4.1 through 4.3.1). Among the devices that run Android 4.3 are the Asus Nexus 7 (2013) and the LG Nexus 4. The first of these three releases, 4.1, was unveiled at Google's I/O developer conference in June 2012. It focused on performance improvements designed to give the operating system a smoother and more responsive feel, improvements to the notification system allowing for expandable notifications with action buttons, and other internal changes. Two more releases were made under the Jelly Bean name in October 2012 and July 2013 respectively: 4.2, which included further optimizations, multi-user support for tablets, lock screen widgets, quick settings, and screen savers, and 4.3, which contained further improvements and updates to the underlying Android platform. Jelly Bean versions no longer receive updates to Google Play Services. About 0.46% of Android devices still run Jelly Bean. Development Android 4.1 Jelly Bean was first unveiled at the Google I/O developer conference on June 27, 2012, with a focus on "delightful" improvements to the platform's user interface, along with improvements to Google's search experience on the platform (such as Knowledge Graph integration, and the then-new digital assistant Google Now), the unveiling of the Asus-produced Nexus 7 tablet, and the unveiling of the Nexus Q media player. For Jelly Bean, work was done on optimizing the operating system's visual performance and responsiveness through a series of changes referred to as "Project Butter": graphical output is now triple buffered, vsync is used across all drawing operations, and the CPU is brought to full power when touch input is detected, preventing the lag associated with inputs made while the processor is in a low-power state. These changes allow the operating system to run at a full 60 frames per second on capable hardware. Following 4.1, two more Android releases were made under the Jelly Bean codename; both of these releases focused primarily on performance improvements and changes to the Android platform itself, and contained relatively few user-facing changes. Alongside Android 4.1, Google also began to decouple APIs for its services on Android into a new system-level component known as Google Play Services, serviced through Google Play Store. This allows the addition of certain forms of functionality without having to distribute an upgrade to the operating system itself, addressing the infamous "fragmentation" problems experienced by the Android ecosystem. Release Attendees of the Google I/O conference were given Nexus 7 tablets pre-loaded with Android 4.1, and Galaxy Nexus smartphones which could be upgraded to 4.1. Google announced an intent to release 4.1 updates for existing Nexus devices and the Motorola Xoom tablet by mid-July. The Android 4.1 upgrade was released to the general public for GSM Galaxy Nexus models on July 10, 2012. In late 2012, following the official release of Jelly Bean, a number of third-party Android OEMs began to prepare and distribute updates to 4.1 for their existing smartphones and tablets, including devices from Acer, HTC, LG, Motorola, Samsung, Sony, and Toshiba. 
In August 2012, nightly builds of the aftermarket firmware CyanogenMod based on 4.1 (branded as CyanogenMod 10) began to be released for selected devices, including some Nexus devices (the Nexus S and Galaxy Nexus), the Samsung Galaxy S, Galaxy S II, Galaxy Tab 2 7.0, Motorola Xoom, and Asus Transformer. On October 29, 2012, Google unveiled Android 4.2, dubbed "a sweeter tasting Jelly Bean", alongside its accompanying launch devices, the Nexus 4 and Nexus 10. Firmware updates for the Nexus 7 and Galaxy Nexus were released in November 2012. Android 4.3 was subsequently released on July 24, 2013 via firmware updates to the Galaxy Nexus, 2012 Nexus 7, Nexus 4, and Nexus 10. Features User experience Visually, Jelly Bean's interface reflects a refinement of the Holo appearance introduced by Android 4.0. The default home screen of Jelly Bean received new features, such as the ability for other shortcuts and widgets on a home screen page to re-arrange themselves to fit an item being moved or resized. The notification system was also improved with the addition of expandable and actionable notifications; individual notifications can now display additional content or action buttons (such as Call back or Message on a missed call), accessible by dragging open the notification with a two-finger gesture. Notifications can also be disabled individually per app. Android 4.2 added additional features to the user interface; the lock screen can be swiped to the left to display widget pages, and swiped to the right to go to the camera. A pane of quick settings toggles (a feature often seen in OEM Android skins) was also added to the notification area, accessible by either swiping down with two fingers on phones, swiping down from the top-right edge of the screen on tablets, or pressing a button in the top-right corner of the notifications pane. The previous Browser application was officially deprecated in 4.2 in favor of Google Chrome for Android. 4.2 also added gesture typing on the keyboard, a redesigned Clock app, and a new screensaver system known as Daydreams. On tablets, Android 4.2 also supports multiple users. To promote consistency between device classes, Android tablets now use an expanded version of the interface layout and home screen used by phones by default, with centered navigation keys and a status bar across the top. These changes took effect for small tablets (such as the Nexus 7) on 4.1, and for larger tablets on 4.2. Small tablets on Android are optimized primarily for use in a portrait (vertical) orientation, giving apps expanded versions of the layouts used by phones. When used in a "landscape" (horizontal) orientation, apps adjust themselves into the widescreen-oriented layouts seen on larger tablets. On large tablets, navigation buttons were previously placed in the bottom-left of a bar along the bottom of the screen, with the clock and notification area in the bottom-right. Platform For developers, 4.1 also added new accessibility APIs, expanded language support with bi-directional text support and user-supplied keymaps, support for managing external input devices (such as video game controllers), support for multichannel, USB, and gapless audio, a new media routing API, low-level access to hardware and software audio and video codecs, and DNS-based service discovery and pre-associated service discovery for Wi-Fi. Android Beam can now also be used to initiate Bluetooth file transfers through near-field communication. 
Android 4.2 added a rewritten Bluetooth stack, switching from the previous BlueZ stack (a GPL-licensed stack originally created by Qualcomm) to a rewritten Broadcom open-source stack called BlueDroid. The new stack was initially considered "immature" compared with its predecessor. Android 4.2 also brought several forward-looking improvements, including improved support for multiple displays, support for Miracast, native right-to-left support, updated developer tools, further accessibility improvements such as zooming gestures, and a number of internal security improvements such as always-on VPN support and app verification; a new NFC stack was added at the same time. Android 4.3 consisted of further low-level changes, including Bluetooth low energy and AVRCP support, SELinux, OpenGL ES 3.0, new digital rights management (DRM) APIs, the ability for apps to read notifications, a VP8 encoder, and other improvements. Android 4.3 also included a hidden privacy feature known as "App Ops", which allowed users to individually deny permissions to apps. However, the feature was later removed in Android 4.4.2; a Google spokesperson stated that the feature was experimental and could prevent certain apps from functioning correctly if used in certain ways. The concept was revisited as the basis of a redesigned permissions system for Android 6.0. See also Android version history Firefox OS iOS 6 Windows Phone 7 Windows 7 References External links Android (operating system) 2012 software
251716
https://en.wikipedia.org/wiki/Be%20Inc.
Be Inc.
Be Inc. was an American computer company founded in 1990. It is best known for the development and release of BeOS, and the BeBox personal computer. Be was founded by former Apple Computer executive Jean-Louis Gassée with capital from Seymour Cray. Be's corporate offices were located in Menlo Park, California, with regional sales offices in France and Japan. The company later relocated to Mountain View, California for the duration of its dissolution. The company's main intent was to develop a new operating system using the C++ programming language on a proprietary hardware platform. BeOS was initially exclusive to the BeBox, and was later ported to Apple Computer's Power Macs despite resistance from Apple, with hardware-specification assistance from Power Computing. In 1998, BeOS was ported to the Intel x86 architecture, and PowerPC support was reduced and finally dropped after BeOS R5. It inspired the open-source operating system Haiku. History Be was founded in 1990 by former Apple Computer executive Jean-Louis Gassée, together with Steve Sakoman, after Gassée was ousted by Apple CEO John Sculley. According to several sources including Macworld UK, the company name "Be" originated in a conversation between Gassée and Sakoman. Gassée originally thought the company should be called "United Technoids Inc.", but Sakoman disagreed and said he would start looking through the dictionary for a better name. A few days later, when Gassée asked if he had made any progress, Sakoman replied that he had got tired and stopped at "B". Gassée said, "'Be' is nice. End of story." Be aimed to create a modern computer operating system written in C++ on a proprietary hardware platform. In 1995, Be released the BeBox personal computer, with its distinctive strips of lights along the front that indicated the activity of each PowerPC CPU, and its combined analogue/digital 37-pin GeekPort. In addition to BeOS and the BeBox, Be also produced BeIA, an OS for internet appliances. During its short lifespan, its commercial deployments included the Sony eVilla and devices from DT Research. In 1996, Apple was searching for a new operating system to replace the classic Mac OS. Eventually, the two final options were BeOS and NeXTSTEP. NeXT was chosen and acquired due to the persuasive influence of Steve Jobs and the incomplete state of the BeOS product, which was criticized at the time for lacking such features as printing capability. It was rumoured that the deal fell apart because of money, with Be Inc. allegedly wanting US$500M and a high-level post in the company, whereas the NeXT deal closed at US$400M. The rumours were dismissed by Gassée. Dissolution and litigation Ultimately the assets of Be Inc. were bought for US$11 million in 2001 by Palm, Inc., where Gassée served on the board of directors, at which point the company entered dissolution. The company then initiated litigation against Microsoft for aggressively anti-competitive and monopolistic business practices. Joining a long history of antitrust lawsuits against Microsoft, Be specifically contested Microsoft's prohibition on OEMs allowing dual-boot systems containing both Microsoft and non-Microsoft operating systems. The suit was settled in September 2003 with a US$23.25 million payout to Be Inc. Palm subsequently spun off a wholly owned subsidiary, PalmSource, to develop its Palm OS and related software, with the Be assets being transferred to PalmSource, which was subsequently acquired by the Japan-based ACCESS. 
Legacy The open-source operating system Haiku continues BeOS's legacy in the form of a complete reimplementation. Beta 1 of Haiku was released in September 2018, and there remains an active development team producing nightly releases. References Defunct computer companies based in California Defunct computer hardware companies Defunct software companies of the United States Home computer hardware companies Software companies based in the San Francisco Bay Area Companies based in Menlo Park, California Computer companies established in 1990 Companies disestablished in 2001 1990 establishments in California 2001 disestablishments in California Defunct companies based in the San Francisco Bay Area
36000414
https://en.wikipedia.org/wiki/Carrie%20Mathison
Carrie Mathison
Carrie Anne Mathison, played by actress Claire Danes, is a fictional character and the protagonist of the American television drama/thriller series Homeland on Showtime, created by Alex Gansa and Howard Gordon. Carrie is a CIA officer who, while on assignment in Iraq, learned from a CIA asset that an American prisoner of war had been turned by al-Qaeda. After a U.S. Marine sergeant named Nicholas Brody is rescued from captivity, Carrie believes that he is the POW described to her. Carrie's investigation of Brody is complicated by her bipolar disorder and results in an obsession with her suspect. For her performance as Carrie, Claire Danes has received several major acting awards, including the Primetime Emmy Award for Outstanding Lead Actress in a Drama Series, the Golden Globe Award for Best Actress – Television Series Drama, the Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Drama Series, the Satellite Award for Best Actress – Television Series Drama, and the TCA Award for Individual Achievement in Drama. She is the second actress to win all five of the main TV acting awards in the lead drama actress categories for her performance. Character biography Background and personality Carrie Anne Mathison was an Arabic language student at Princeton University, where she was recruited into the CIA by veteran officer Saul Berenson (Mandy Patinkin). Carrie developed a close working relationship with Saul, and is implied to have had a sexual relationship with CTC Director David Estes (David Harewood), her future boss, which contributed to the breakup of his marriage. In college, Carrie was diagnosed with bipolar disorder, for which she secretly began taking clozapine supplied by her older sister, Maggie (Amy Hargreaves). As a field operative in Iraq, Carrie infiltrated a prison to meet with an imprisoned CIA asset named Hasan Ibrahim, who claimed that he had information regarding an imminent terrorist attack in the United States. Moments before his execution, Hasan told Carrie that an American prisoner of war had been turned by al-Qaeda figure Abu Nazir (Navid Negahban). Carrie's unauthorized dealings with Hasan led to an international incident, causing Estes to have her reassigned to the CIA's Counterterrorism Center in Langley, Virginia. Season 1 Ten months after her reassignment, Carrie attends an emergency staff meeting and learns that Nicholas Brody (Damian Lewis), a U.S. Marine sergeant, has been rescued after eight years in terrorist captivity. Carrie tells Saul about Hasan's claims, and expresses concern that Brody is the POW he was describing. With the CIA having no cause to investigate Brody, Carrie conducts her own unauthorized surveillance using a one-month FISA warrant delivered by Saul. Initially, Carrie finds no evidence of Brody's involvement with terrorism. When her FISA warrant expires, Carrie takes to making personal contact with Brody instead. She bumps into Brody at a veterans' support group, where they strike up a conversation and immediately bond over their mutual experiences in the Middle East. Brody asks Carrie to have a drink with him one night, culminating in a drunken sexual encounter in Carrie's car. The next day, he is brought in to Langley for a polygraph test over the apparent suicide of Afsal Hamid (Waleed Zuaiter), a detained terrorist with whom Brody had a violent confrontation. Carrie, suspicious of Brody's replies, orders the interviewer to ask if he has ever been unfaithful to his wife. Brody says "no", beating the polygraph. 
Afterwards, Carrie and Brody drive to her family's secluded cabin to spend the weekend together. However, after Brody realizes Carrie has been spying on him, she forces him to admit to his conversion to Islam, his meeting with and personal affection for Nazir, and his murder of a fellow POW named Tom Walker under duress by the terrorists. As Brody leaves, Saul contacts Carrie and informs her that Walker (Chris Chalk) is alive and was the POW who was turned. Carrie tries to apologize to Brody, but he rebuffs her and goes back to his family. The investigation into Walker leads Carrie and Saul to Mansour al-Zahrani (Ramsey Faragallah), a Saudi diplomat who acts as Nazir's intermediary. Carrie blackmails al-Zahrani into arranging a meeting with Walker at Farragut Square. However, the meeting ends in disaster when Walker detonates a briefcase bomb carried by a double, killing al-Zahrani and three bystanders. Carrie is injured in the explosion, leading to a severe manic episode that causes her to be hospitalized. Upon learning of her affair with Brody, Estes, already under pressure from Vice President William Walden (Jamey Sheridan) to find a scapegoat for the bombing, dismisses Carrie from the CIA. Carrie deduces the target of Walker and Nazir's attack: Walden's upcoming policy summit at the State Department. When Walker stages a sniper attack on the dignitaries, Brody, Walden, and Estes are led to an underground bunker. Carrie realizes that the shooting is a diversion from the actual attack, in which Brody will bomb the bunker with a suicide vest and kill everyone inside. Carrie appears at Brody's house and pleads with his daughter, Dana (Morgan Saylor), to contact her father and stop him from carrying out the attack. An alarmed Dana calls 911, leading the police to arrest Carrie. Brody relents from the attack at the last minute following a sudden phone call from Dana. The following day, as Carrie is being released into Maggie's custody, Brody confronts her and tells her to leave him and his family alone. Carrie, now discredited and doubting her own sanity, asks to be taken to a hospital for electroconvulsive therapy. Saul tries to stop the procedure, but she is undeterred. When Saul mentions that Nazir's son, Issa, was killed in a drone strike, Carrie, remembering that Brody cried out Issa's name during a nightmare, fleetingly ponders this connection before the ECT treatment induces a seizure. Season 2 Six months later, Carrie is working as an English as a Second Language teacher and living with her sister. When one of her former CIA assets, Fatima Ali (Clara Khoury), demands to talk with Carrie, Saul and Estes persuade her to fly to Lebanon for a meeting. Fatima gives the time and location of a planned meeting between her husband and Nazir in exchange for her defection. Saul and Estes' lack of trust in Carrie's judgment causes her to briefly have another breakdown. Carrie, Saul, and Estes set up an operation to capture Nazir, but Brody, who has been elected to Congress and is observing the operation with Walden in a situation room, tips him off and allows him to escape. Carrie, her obsession with the case renewed, ransacks Fatima's apartment and comes out with a satchel full of documents. After a pursuit by a Lebanese mob, Saul finds a hidden compartment in the satchel that contains a memory card of Brody confessing to the aborted State Department bombing. 
When the mission is over, however, she realizes that she will not be permitted back into the CIA, and attempts suicide by overdosing on her medication. At the last minute, she changes her mind and vomits up the pills. Moments later, Saul shows up at her door and shows her Brody's confession. The video convinces Estes to let Carrie watch Brody, and to assign a CIA analyst named Peter Quinn (Rupert Friend) to run the operation. Quinn has Carrie meet with Brody as part of a sting operation. During the meeting, Carrie intuits that Brody is on to her and blows her cover by confronting him about his treason, forcing Saul and Quinn to arrest him. During her interrogation of Brody, Carrie catches him by surprise by admitting that she wanted him to leave his family to be with her. After Carrie systematically breaks Brody down and correctly surmises that Dana's phone call prevented the State Department bombing, he tearfully admits to his collaboration with Nazir and other al-Qaeda associates, and reveals that Nazir is planning an attack. Carrie gives Brody an ultimatum: either be exposed and sent to prison, or help the CIA in exchange for immunity. Left with no other options, Brody agrees to help the CIA. The pressures arising from both his family's needs and his espionage work lead Brody to break off contact with al-Qaeda. Carrie takes Brody to a hotel to convince him to go back to al-Qaeda; she sleeps with him while Saul and Quinn uncomfortably listen in. After Brody helps foil the attack, Carrie is captured by Nazir, who threatens to kill her unless Brody aids him in assassinating Walden. After Brody kills Walden at Nazir's instruction, Nazir releases Carrie, who leads the search of the abandoned mill where she was held. Realizing that Nazir is still hiding in the building, she leads a SWAT team inside and witnesses his death. Estes offers to reinstate Carrie and promote her to Station Chief. When Brody leaves his wife Jessica (Morena Baccarin) to be with her, however, Carrie finds herself torn between her career and her love for him. She goes with Brody to a memorial service for Walden at Langley, and tells him she wants to be with him. During the service, al-Qaeda operatives plant a bomb in Brody's car and detonate it, in an attack planned by Nazir in advance of his death. The blast kills Estes, Walden's family, and numerous senior government officials. Al-Qaeda also leaks Brody's confession video, thus framing him for the attack. Believing that Brody is innocent but knowing that no one else will believe him, Carrie drives him over the Canada–US border and sets out to clear his name.

Season 3

Fifty-eight days after the Langley bombing, Carrie has been reinstated to the CIA and answers questioning by the Senate Select Committee on Intelligence. During her testimony, she states that Brody had nothing to do with the attack. Information is leaked to the public about her previous immunity deal with Brody, as well as her sexual relationship with him. Saul acknowledges Carrie's bipolar disorder when he appears before the committee. In retaliation, Carrie leaks classified information to a reporter, leading Saul to have her temporarily committed to a mental hospital. She appears before a hospital review panel, attended by her father and sister, to ask for her release. When they tell her to begin taking lithium again, she flies into a rage and has to be forcibly medicated and restrained. She is then formally committed.
Weeks later, lawyer Paul Franklin (Jason Butler Harner) visits Carrie in the hospital and offers to help her retaliate against the CIA, but she refuses. Franklin nevertheless secures her release and persuades her to meet with Leland Bennett (Martin Donovan), a lawyer representing the Iranian bank that financed the Langley bombing. Bennett makes a deal with Carrie: his clients will protect her from further reprisals in exchange for intelligence on the CIA's inner workings. The episode's final scene reveals that her entire ordeal is part of a secret operation she has been working on with Saul. Jessica Brody begs Carrie to help find Dana, who has run off with her boyfriend Leo (Sam Underwood). Carrie uses a ploy to slip her surveillance and tells FBI Special Agent Hall (Billy Smith), who is assigned to the Brody family detail, to find Dana. That night, Carrie is kidnapped and brought before Majid Javadi (Shaun Toub), an Iranian intelligence operative and one of the masterminds of the Langley attack. Carrie offers Javadi the agency's protection in exchange for information on the other bombers. Javadi agrees to help, but evades his surveillance detail to murder his ex-wife and daughter-in-law. Carrie and Quinn arrive moments later and take him into custody. Meanwhile, Carrie learns that she is pregnant. During questioning, Javadi says that one of Nazir's men detonated the Langley bomb, and that Brody had nothing to do with it. As Carrie delivers Javadi to the airport, he tells her that Bennett knows the bomber's identity. Carrie enlists Quinn to help clear Brody's name. To track down the bomber, she tells Franklin that Bennett's firm has been linked to the bombing, provoking Bennett into ordering the bomber moved out of the country. She takes part in a stakeout of the bomber's hotel room and watches as Franklin approaches with a silenced pistol. Carrie insists on intervening, as she cannot prove Brody's innocence without the bomber's testimony. Black ops agent Dar Adal (F. Murray Abraham) orders her to stand down, but she ignores him and starts running toward the hotel. Quinn shoots her in the shoulder to stop her. Adal's team then drives her to the hospital. Saul visits Carrie in the hospital and reveals the full scope of the operation: Brody will seek political asylum in Iran and assassinate the leader of the Revolutionary Guard so that Javadi can take his place. Carrie goes to see Brody, who is suffering through heroin withdrawal, and takes him to the motel where Dana is working as a maid. Brody tries to get out of the car and see her, but is subdued by the soldiers flanking him; Carrie then tells him that going through with the operation is his only chance for redemption. After Brody weans himself off the drugs and regains his strength, Carrie takes him to see Dana, who says that she never wants to see Carrie or her father again. Carrie and Brody say goodbye as he ships out for Tehran. Carrie watches via satellite as a United States Army Special Forces team transports Brody to the Iranian border. The team encounters Iraqi police officers who recognize Brody and open fire on them. The operation is endangered when the team's van hits a land mine, severely injuring the team leader and attracting enemy fire. Saul calls off the operation and orders Brody to turn back, but Brody refuses and runs toward the border. Carrie tells Brody that he will die on his own, but he insists that she find a way to get him back safely.
Fortunately, Iranian border guards take Brody in, saving the operation. Carrie travels to Tehran to make sure the assassination goes smoothly, and watches as Brody is taken to meet with the head of the Revolutionary Guard, General Deshan Akbari (Houshang Touzie). Brody does not get close enough to Akbari to inject him with cyanide as planned, putting the mission in jeopardy. The situation worsens when Brody starts giving interviews to Iranian television denouncing the U.S.; over Carrie's objections, Saul orders Brody to be killed. Carrie calls Brody to warn him, and pleads with him to come with her to safety. Brody refuses, however, and manages to enter the Revolutionary Guard headquarters and kill Akbari. After informing Carrie of his success, Brody goes with her to a safe house 100 miles out of Tehran. There, she tells him she is pregnant with his child. Saul assures her that Brody will be safely extracted, but his successor as CIA Director, Senator Andrew Lockhart (Tracy Letts), betrays their location to Javadi, who arranges their capture. Brody is convicted of treason and sentenced to death. Carrie tries desperately to secure his release, but to no avail. When she calls him in his cell, he asks her not to come to his execution. She goes anyway, however, and calls out his name as he is hanged in a public square. Four months later, Lockhart promotes a heavily pregnant Carrie to Station Chief in Istanbul, but refuses her request to give Brody a star on the CIA Memorial Wall. Carrie accepts the job, but later tells her father and sister that she won't be taking the baby – a girl – with her to Istanbul. Her father tells her that he will take the child. Later, Carrie draws a star on the wall for Brody after a memorial ceremony for the victims of the Langley attack. Season 4 Carrie, now CIA Station Chief in Afghanistan, authorizes a drone strike on a Pakistani farmhouse where terrorist Haissam Haqqani (Numan Acar) is in hiding. The strike occurs while Haqqani is attending a wedding, resulting in his apparent death along with those of 40 civilians. Carrie travels to Islamabad to learn that Sandy Bachman (Corey Stoll), the Station Chief in Pakistan, had been outed. Carrie and Quinn attempt to rescue Bachman, but they are spotted by an angry mob that kills Bachman. Carrie and Quinn manage to escape. In Washington, Lockhart permanently "recalls" Carrie from her post in Afghanistan. Meanwhile, Carrie is struggling with raising her infant daughter, Frannie. She sees no point in being a mother with Brody dead, and interacts with the child as little as possible, often leaving her in the care of her sister Maggie. While bathing Frannie, Carrie is momentarily tempted to drown her. She leaves Frannie with Maggie, reasoning that her daughter is better off without her. Carrie and Quinn meet Jordan Harris (Adam Godley), a former CIA case officer who reveals that Bachman had leaked intelligence and was protected by Lockhart. Carrie blackmails Lockhart into promoting her to Bachman's former post as the CIA's Station Chief in Pakistan. In Islamabad, Carrie attempts to turn Haqqani's nephew Aayan Ibrahim (Suraj Sharma), a Pakistani medical student whose family was killed in the drone strike, and who may hold valuable information. Quinn requests dismissal from the CIA, but Carrie convinces him to come to Islamabad when he finds evidence that Bachman's death was a setup. Carrie learns that Haqqani is still alive, and sees him together with Aayan. 
Suspecting that Aayan is involved in terrorism, she sleeps with him in an attempt to recruit him as an asset. Carrie stages her own kidnapping to scare Aayan into leaving the country. She then watches via drone as he meets with Haqqani, who produces Saul, bound and gagged, from his car. Haqqani accuses Aayan of spying on him and shoots him in the head, killing him. Enraged, Carrie orders a drone strike, even though it would kill Saul. Quinn stops her, however. Meanwhile, Dennis Boyd (Mark Moses), the husband of the U.S. ambassador to Pakistan, breaks into Carrie's apartment and photographs her medication and family photos. Boyd switches her medication with a hallucinogen. She has a delusional episode and attacks a security guard, and Inter-Services Intelligence (ISI) officers take her into custody. Lieutenant Colonel Aasar Khan (Raza Jaffrey) questions her, but she hallucinates that he is Brody and breaks down crying in his arms. She receives a call from Saul, who has escaped from his captors. She helps him evade the local Taliban and, when they find him, talks him out of committing suicide. Ultimately, however, she delivers him to the Taliban so they will spare his life. Later that night, she meets with Khan, who tells her that Boyd is working with the ISI. Carrie sets a trap for Boyd with his wife's help, and he is taken into custody. Carrie oversees a prisoner exchange: Saul for five Taliban members. When Saul comes into view, however, Carrie sees a young boy behind him wearing a suicide vest. Carrie goes herself to retrieve Saul, and persuades him to let the exchange go forward and come with her. As they go back to the U.S. Embassy, their car is hit by a rocket-propelled grenade. Marines pull Carrie and Saul from the wreckage, and take them back to the embassy, which has been invaded by Haqqani's forces. The White House orders U.S. forces out of Pakistan and relieves Carrie of her command. When Quinn kidnaps Farhad Ghazi (Tamer Burjaq), the ISI agent who kidnapped Saul, Carrie asks Lockhart to let her stay in Islamabad so she can bring him in. She finds Quinn and pleads with him to leave the country, but he refuses and escapes. She later stops him from assassinating Haqqani, and sees that Haqqani has Adal in his entourage. Meanwhile, she receives a phone call from Maggie, who tells her that their father has died. Carrie finds some solace in talking to Frannie, and decides to be a better mother to her. Quinn shows up at her father's funeral, and the two share a kiss. She also sees her estranged mother Ellen Mathison (Victoria Clark), and finds out that she has a half-brother. She confronts Adal, and learns that Quinn has accepted a mission in Syria and that Saul and Adal are negotiating with Haqqani.

Season 5

The fifth season begins two years later. Carrie has left the CIA, and is living in Berlin with Franny and her boyfriend Jonas (Alexander Fehling). She now works as head of security for the Düring Foundation, and is tasked by her boss Otto Düring (Sebastian Koch) with preparing him for a trip to Lebanon. She meets with her former colleague Allison Carr (Miranda Otto) to assess the security situation there; Allison refuses to help unless Carrie gives her inside information on the Foundation, which Carrie declines to do. She arranges safe passage for Düring to Lebanon, where an attempt is made on his life. Carrie meets with a contact named Behruz (Mousa Kraish), who tells her that she, not Düring, was the target of the attack.
To find out who wants her dead, Carrie goes off her medication, claiming that her manic phases make her mind sharper. She and Jonas get into an argument when he finds out how many drone strikes she ordered while at the CIA, and he storms out. Soon after, he learns that Quinn kidnapped his son and then released him as a ploy to draw Carrie out. The following night, Carrie finds Quinn sneaking up on Jonas' house and shoots him in the back. Quinn was wearing body armor, however, and incapacitates her. Quinn tells her that Saul gave him an order to kill her. They stage a crime scene and take photos to fake Carrie's death, and Carrie prepares to go into hiding. Before she does, she insists on scouting the post office where Quinn gets his assignments in order to confirm whether it really was Saul who wanted her killed. After Quinn drops off his "proof" of Carrie's death in the P.O. box, a hitman arrives, targeting Quinn, and a gunfight ensues in which Quinn is wounded. Carrie leaves Quinn with Jonas, and goes to investigate. She learns that the hitman worked for Russian Foreign Intelligence Service (SVR), and realizes that Russia wants CIA intelligence that had recently been stolen by hackers. She meets with Saul and asks him to give her copies of the hacked documents, but he refuses. She decides to go into hiding, and asks Düring to smuggle her out of the country. Just before she is about to leave, however, Düring gives her the documents from Saul, who now believes her. One of the documents concerns an operation she worked on in Baghdad with Allison 10 years earlier, and she finds out that Ahmed Nazari (Darwin Shaw), a former asset who had been presumed dead, is still alive. She meets with Allison, who promises to help her. Carrie then asks her asset Numan (Atheer Adal) to hack into Allison's computer to find the case files about Nazari, and discovers a screensaver photo of Nazari at Allison's favorite bar. She deduces from the picture that Allison and Nazari are romantically involved, and that Allison is the traitor. Carrie convinces Saul to put Allison under surveillance, despite the fact that he and Allison are in a relationship. They then lead her to believe that an SVR chief has defected to the U.S., and that he has documents revealing how the CIA had been infiltrated. Allison panics and goes to her handler, Ivan Krupin (Mark Ivanir), which allows the CIA to put her under arrest. Carrie then sees a news report showing video of Islamic State terrorists poisoning Quinn with sarin, and threatening to unleash a chemical attack on a major European city in 24 hours. Carrie traces the video to an address in Berlin, where she finds Quinn, barely alive. Quinn ultimately lives, but is left in a coma. Carrie asks her contacts for information on an underground doctor in the area where Quinn went missing. Following a tip from former Hezbollah commander Al-Amin (George Georgiou), she tracks down one of the cell's supporters, Dr. Hussein (Mehdi Nebbou). Under threat of arrest, he brings her to an apartment belonging to Qasim (Alireza Bayram), one of the terrorists. In the apartment, Carrie finds extensive research on the Hauptbahnhof train station and heads there to investigate. She finds Qasim in the subway, and pleads with him to stop the plan to bomb the station. Qasim tries to reason with his cousin Bibi (René Ifrah), the ringleader, but Bibi kills him. Carrie then shoots and kills Bibi, neutralizing the threat. 
Back home, Jonas breaks up with Carrie, saying that neither he nor his son will ever be safe around her. Saul asks Carrie to rejoin the CIA, but she declines his offer. Düring essentially proposes to her, giving her a chance to co-head his Foundation. She does not answer him, and he gives her time to think about it. Meanwhile, Quinn suffers a massive brain hemorrhage. Adal gives Carrie a letter Quinn wrote declaring his love for her. She visits him in the hospital, locks the door to his room, and removes his pulse monitor, implying a mercy killing. The episode ends with Carrie pausing before anything happens to Quinn. Season 6 Carrie has returned to the United States and is living in Brooklyn, New York with Frannie. She is working for a foundation that provides legal aid to Muslims accused of terrorism, and serving as an informal advisor to President-Elect Elizabeth Keane (Elizabeth Marvel). She is also caring for Quinn, who emerged from the coma with neurological damage and post-traumatic stress disorder (PTSD). When her client Sekou Bah (J. Mallory Cree) is killed in and framed for a terrorist attack, Carrie begins to investigate. She goes to talk with Sekou's family, leaving Quinn to watch Frannie. A group of protestors, angered by leaked reports that Carrie was Sekou's lawyer, gather outside Carrie's house; this triggers Quinn's PTSD, and he shoots one of the protestors and takes a reporter hostage. Carrie manages to talk Quinn down so he will be taken in alive. Dar Adal - the mastermind of the attack - exploits the situation by telling Children's Protective Services (CPS) that Carrie put Frannie in danger, in order to drive Carrie over the edge and discredit her in Keane's eyes. The plan works: CPS put Frannie in a foster home, and Carrie falls into an alcoholic despair that motivates Keane to question her judgment. Saul raises Carrie's spirits by taking her to see Frannie, and asks for her help in setting up a meeting between Keane and Majid Javadi, who claims to have evidence that Adal is manipulating her. At the meeting, however, Javadi lies that Iran is not complying with its nuclear deal with the United States. Carrie discovers that Adal is part of a conspiracy of disgruntled undercover operatives who want to undermine Keane's antiwar foreign policy. She also finds out that Javadi had been working with Adal, and that Adal is the brains behind alt-right media personality Brett O'Keefe (Jake Weber), who is peddling a false conspiracy theory that Keane's son Andrew, who died fighting in Iraq, was deserting his post when he was killed. Adal betrays Javadi to Mossad, who want to arrest him on terrorism charges. As he is taken away, Javadi leaves evidence of Adal's treachery for Carrie, who takes it to Keane. Keane asks Carrie to testify that Adal covered up the infiltration by a Russian agent of the CIA Berlin station. Carrie is reluctant to do so, as that would ruin Saul's career, but ultimately agrees that it is the best course of action. She refuses to testify at the last minute, however, after her driver makes suspicious comments about her upcoming appointment with CPS. After she refuses, she gets an ominous phone call informing her that her upcoming supervised visit with Frannie has been cancelled. She realizes that Adal is blackmailing her, and calls the CIA to tell them that he has "won". Quinn informs her that he found evidence that Sekou was set up, and that Porteous Belli (C. J. Wilson), an assassin working for Adal, has been spying on her. 
They go to the conspirators' safe house, where Belli attacks them. Quinn kills Belli, and finds evidence that the conspirators are planning to assassinate Keane. Moments later, however, the house is destroyed by a hidden bomb, taking the evidence with it. Carrie and Quinn rush to Keane's headquarters, which has just received a bomb threat. During the evacuation, Adal calls Carrie and tells her that the bomb threat is a ruse to get Keane out of the building, where she can be assassinated, while Quinn is to be framed as the killer. Carrie stops Keane's vehicle from leaving, seconds before the decoy vehicle is destroyed. Quinn smuggles Carrie and Keane out of the building in his car and sacrifices his life to save them when a special ops team, answering to Adal, opens fire on them. Adal is arrested and Keane is inaugurated as President of the United States. Weeks later, Keane - who has become hawkish and paranoid since the attempt on her life - asks Carrie to work for her administration. Carrie agrees, but has second thoughts when Keane orders dozens of intelligence operatives, including Saul, arrested on suspicion of being involved in the conspiracy. Carrie tries to meet with Keane to dissuade her, but Keane's Chief of Staff David Wellington (Linus Roache) has her banned from the White House. Season 7 A few months later, Carrie is doing covert work for the CIA, and she and Frannie are living with Maggie and her family - a tense living situation, as Maggie's husband Bill (Mackenzie Astin) works for the Keane administration. Carrie sets up a meeting between Senator Sam Paley (Dylan Baker) and her old FBI contact Dante Allen (Morgan Spector), who has evidence of corruption within the administration. To escape a man she thinks is following her, Carrie changes the meeting place and asks Maggie's daughter Josie (Courtney Grosbeck) to bring her the keys. The meeting falls apart when Dante refuses to testify in court, and frays her relationship with Maggie, who is angry that Carrie put Josie in danger. The rift worsens when Maggie finds out that Carrie has stopped taking her lithium in favor of black market Seroquel. While watching her surveillance feeds, Carrie sees an unknown woman (Sandrine Holt) going into Wellington's house. Desperate to identify her, Carrie posts a screen capture of the woman on 4chan, asking if anyone can identify her. A hacker responds to the post, and lures Carrie into downloading a file which infects her laptop with ransomware. The hard drive is encrypted with a demand for a $5,000 payment in bitcoin. The hacker then threatens to reveal Carrie's spying and raises his price to $20,000. Carrie tries to seduce the hacker by performing a striptease on her webcam, which entices the hacker to meet with her in person. At their meeting, Carrie beats the hacker with a baton, reveals that she is CIA, and threatens to kill him if he doesn't leave her alone. Carrie learns from Dante that the woman in Wellington's house is Simone Martin (Sandrine Holt), who is in a sexual relationship with Wellington as part of a plan to get U.S. intelligence for the Russians. She also appears to be connected to the mysterious death of Gen. Jamie MacLendon (Robert Knepper), one of the conspirators in the assassination attempt on Keane. Carrie breaks into Martin's house to gather information, but is arrested for trespassing. Dante intervenes and manages to get her out of jail. Carrie proposes to Dante a "completely illegal" plan to connect Wellington to Martin's dealings, and Dante agrees to help. 
Dante, Max, and former CIA officer Thomas Anson (James D'Arcy) kidnap Martin and get her to confess her complicity in MacLendon's death, which leads to her testifying in Paley's hearings. Carrie has her doubts, however, and meets with Saul to discuss the matter; the two of them figure out that Dante is in fact a Russian agent and had orchestrated the entire affair in order to bring down Keane. Carrie seduces Dante and then drugs him so her team can surveil his apartment; they discover that Dante had been spying on Carrie as well. That night, Dante and Carrie have sex, but are interrupted by a team of Saul's agents, who take Dante into custody. Carrie and Saul have a CIA agent pose as Dante's court-appointed lawyer and non-lethally poison him, promising to give him the antidote if he gives them the details of his plan. He tells them of a secret code that would signal GRU officer Yevgeny Gromov (Costa Ronin) to dissolve his forces. Saul persuades Keane to broadcast the code by compromising Twitter servers and to violate the privacy of U.S. citizens by tracking who posts confirmations in response to the tweet. Carrie goes to Maggie's house to try to reconcile, but instead learns from Bill that Maggie is meeting a lawyer to seek custody of Franny. When Dante accuses Gromov of poisoning him, Gromov denies it and tells Dante to call Carrie and ask her if she was responsible. Dante calls Carrie, who is picking up Franny from school early. He realizes that Carrie is lying to him, but nonetheless tells her that Gromov is in the room with him; Gromov then kills him. Carrie opts to leave Franny at school in her rush to the hospital, but Franny runs after her and is nearly run over by her mother in the parking lot. After learning that Dante is dead, Carrie has a psychotic break at the hospital. As Carrie prepares for a custody hearing, Saul asks her to lead an operation in Russia to exfiltrate Martin, but she refuses. She then has Anson break into Maggie's office to obtain records proving that Maggie illegally medicated Carrie for years, intending to use them to undermine the custody petition. She ultimately decides not to use the records, however, after Maggie helps her realize that she will never leave the CIA and thus cannot give Franny the attention she needs. She agrees to give Maggie custody of Franny, and arranges to see her every two weeks. She then accepts Saul's offer. In Moscow, Carrie and Saul meet with GRU representatives, including Gromov, in order to distract them while Carrie's team extracts Martin. The plan fails, however, as the team is ambushed by guards and forced to retreat. Carrie salvages the mission by personally infiltrating the safe house where Martin is being kept, and persuading her that the Russians now consider her a liability and only the CIA can keep her alive. When Gromov arrives to murder Martin, Carrie tricks him and his men by disguising herself as Martin and drawing them away while the real Martin escapes. Carrie is ultimately captured and taken into Russian custody. Gromov wants Carrie to film a confession video stating that Martin was a CIA operative and that the U.S. orchestrated everything, and threatens to withhold her medication if she doesn't cooperate. Carrie refuses, and has sex with a guard after he promises to get her medication in secret. The guard, however, was lying, and reports her to Gromov, who punishes her by withholding her medication.
Seven months later, Keane resigns from the presidency, and her vice president, Ralph Warner (Beau Bridges), takes over. He and Saul negotiate Carrie's release in a prisoner exchange; after months without being medicated, however, Carrie is barely lucid and doesn't even recognize Saul.

Season 8

Bibliography
Carrie's Run: A Homeland Novel (2013) by Andrew Kaplan.
Saul's Game: A Homeland Novel (2014) by Andrew Kaplan.
Homeland Revealed (2014) by Matt Hurwitz, with a foreword by Alex Gansa. Hurwitz also wrote the reference book The Complete X-Files: Behind the Series, the Myths and the Movies.
Homeland: The Unofficial Guide to Season One and Two (2013) by TVcaps, an imprint of BookCaps Study Guides.

Development

Homeland co-creators Howard Gordon and Alex Gansa initially pitched the show to the broadcast networks with Carrie Mathison as a rather straitlaced CIA officer. After the networks passed and they moved on to the cable channels, they were able to experiment with a more complex and flawed main character. Carrie was given bipolar disorder and made more of an unreliable narrator. Showtime eventually secured the rights to the show and embraced the more unstable version of the character. From the initial conception of the character, Gordon and Gansa targeted Claire Danes to play the lead role of Carrie. The pair were very impressed with her acting prowess, especially in My So-Called Life and Temple Grandin, but were skeptical as to whether she would accept a television role. Indeed, Danes was not necessarily looking to return to television, but she found the script and the character to be very compelling. In addition, the opportunity to be a part of the "renaissance" of high-quality dramas on cable television appealed to her. To prepare for the role, Danes had to learn much about the CIA, as well as the nuances of playing someone who has bipolar disorder. Danes' personal research into the CIA touched on such topics as its internal culture, agency politics, and the implications of being a female agent. She was also granted access to CIA Headquarters in Langley, Virginia, and was able to personally consult with the female CIA officer on whom the Carrie Mathison character was loosely based.

Reception

Reviews

Hank Stuever of The Washington Post, in his 2011 fall TV roundup, said that Carrie Mathison was "easily this season's strongest female character". The A.V. Club's Emily VanDerWerff called Carrie "my favorite new character of this TV season", noting the way she attacks everything with reckless abandon. In November 2011, The Atlantic named Carrie Mathison as one of the best characters on TV, calling her "the thinking man's Jack Bauer" and going on to say, "We both root for Carrie's assuredness and are turned off by her brash, erratic, and occasionally reckless behavior". In Digital Spy's list of the top 25 best TV characters of 2012, Carrie Mathison was ranked #2.

Awards

For her portrayal of Carrie Mathison in the premiere season of Homeland, Claire Danes received the Primetime Emmy Award for Outstanding Lead Actress in a Drama Series. She also won the Golden Globe Award for Best Actress – Television Series Drama, TCA Award for Individual Achievement in Drama, Critics' Choice Television Award for Best Drama Actress, and Satellite Award for Best Actress – Television Series Drama.
For the second season of Homeland, Danes repeated her wins for the Primetime Emmy Award for Outstanding Lead Actress in a Drama Series, the Golden Globe Award for Best Actress – Television Series Drama, and the Satellite Award for Best Actress – Television Series Drama. Additionally, she won the Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Drama Series.

External links
Carrie Mathison at Showtime
Medical device
A medical device is any device intended to be used for medical purposes. Significant potential for hazards is inherent in using a device for medical purposes, and thus medical devices must be proved safe and effective with reasonable assurance before regulating governments allow marketing of the device in their country. As a general rule, as the associated risk of the device increases, the amount of testing required to establish safety and efficacy also increases. Further, as associated risk increases, the potential benefit to the patient must also increase. Discovery of what would be considered a medical device by modern standards dates as far back as c. 7000 BC in Baluchistan, where Neolithic dentists used flint-tipped drills and bowstrings. Study of archeology and Roman medical literature also indicates that many types of medical devices were in widespread use during the time of ancient Rome. In the United States it wasn't until the Federal Food, Drug, and Cosmetic Act (FD&C Act) in 1938 that medical devices were regulated. Later, in 1976, the Medical Device Amendments to the FD&C Act established medical device regulation and oversight as we know it today in the United States. Medical device regulation in Europe as we know it today came into effect in 1993 through what is collectively known as the Medical Device Directive (MDD). On May 26, 2017, the Medical Device Regulation (MDR) replaced the MDD. Medical devices vary in both their intended use and indications for use. Examples range from simple, low-risk devices such as tongue depressors, medical thermometers, disposable gloves, and bedpans to complex, high-risk devices that are implanted and sustain life. High-risk devices include those with embedded software, such as pacemakers, devices that assist in the conduct of medical testing, implants, and prostheses. The design of medical devices constitutes a major segment of the field of biomedical engineering. The global medical device market reached roughly US$209 billion in 2006 and was estimated to be between US$220 and US$250 billion in 2013. The United States controls about 40% of the global market, followed by Europe (25%), Japan (15%), and the rest of the world (20%). Although collectively Europe has a larger share, Japan has the second largest country market share. The largest market shares in Europe (in order of market share size) belong to Germany, Italy, France, and the United Kingdom. The rest of the world comprises regions like (in no particular order) Australia, Canada, China, India, and Iran. This article discusses what constitutes a medical device in these different regions; throughout the article, the regions are discussed in order of their global market share.

History

Definition

A global definition for medical device is difficult to establish because there are numerous regulatory bodies worldwide overseeing the marketing of medical devices. Although these bodies often collaborate and discuss the definition in general, there are subtle differences in wording that prevent a global harmonization of the definition of a medical device; thus the appropriate definition of a medical device depends on the region. Often a portion of the definition of a medical device is intended to differentiate between medical devices and drugs, as the regulatory requirements of the two are different. Definitions also often recognize in vitro diagnostics as a subclass of medical devices and establish accessories as medical devices.
Definitions by region

United States (Food and Drug Administration)

Section 201(h) of the Federal Food, Drug, and Cosmetic (FD&C) Act defines a device as an "instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is:
recognized in the official National Formulary, or the United States Pharmacopoeia, or any supplement to them,
intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or
intended to affect the structure or any function of the body of man or other animals, and
which does not achieve its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of its primary intended purposes. The term 'device' does not include software functions excluded pursuant to section 520(o)."

European Union

According to Article 1 of Council Directive 93/42/EEC, ‘medical device’ means any "instrument, apparatus, appliance, software, material or other article, whether used alone or in combination, including the software intended by its manufacturer to be used specifically for diagnostic and/or therapeutic purposes and necessary for its proper application, intended by the manufacturer to be used for human beings for the purpose of:
diagnosis, prevention, monitoring, treatment or alleviation of disease,
diagnosis, monitoring, treatment, alleviation of or compensation for an injury or handicap,
investigation, replacement or modification of the anatomy or of a physiological process,
control of conception,
and which does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, but which may be assisted in its function by such means;"

EU legal framework

Based on the New Approach, rules relating to the safety and performance of medical devices were harmonised in the EU in the 1990s. The New Approach, defined in a European Council Resolution of May 1985, represents an innovative way of technical harmonisation. It aims to remove technical barriers to trade and dispel the consequent uncertainty for economic operators, to facilitate free movement of goods inside the EU. The previous core legal framework consisted of three directives:
Directive 90/385/EEC regarding active implantable medical devices
Directive 93/42/EEC regarding medical devices
Directive 98/79/EC regarding in vitro diagnostic medical devices (the In Vitro Diagnostic Regulation (IVDR) replaces this directive from 2022)
They aim at ensuring a high level of protection of human health and safety and the good functioning of the Single Market. These three main directives have been supplemented over time by several modifying and implementing directives, including the last technical revision brought about by Directive 2007/47/EC. The government of each Member State must appoint a competent authority responsible for medical devices. The competent authority (CA) is a body with authority to act on behalf of the member state to ensure that the member state's government transposes the requirements of the medical device directives into national law and applies them. The CA reports to the minister of health in the member state.
The CA in one Member State has no jurisdiction in any other member state, but exchanges information and tries to reach common positions. In the UK, for example, the Medicines and Healthcare products Regulatory Agency (MHRA) acted as a CA; in Italy it is the Ministero della Salute (Ministry of Health). Medical devices must not be confused with medicinal products. In the EU, all medical devices must be identified with the CE mark. The conformity of a medium or high risk medical device with the relevant regulations is also assessed by an external entity, the Notified Body, before it can be placed on the market. In September 2012, the European Commission proposed new legislation aimed at enhancing safety, traceability, and transparency. The regulation was adopted in 2017. The future core legal framework consists of two regulations, replacing the previous three directives:
The Medical Devices Regulation (MDR (EU) 2017/745)
The In Vitro Diagnostic Medical Devices Regulation (IVDR (EU) 2017/746)

Japan

Article 2, Paragraph 4, of the Pharmaceutical Affairs Law (PAL) defines medical devices as "instruments and apparatus intended for use in diagnosis, cure or prevention of diseases in humans or other animals; intended to affect the structure or functions of the body of man or other animals."

Rest of the world

Canada

The term medical device, as defined in the Food and Drugs Act, is "any article, instrument, apparatus or contrivance, including any component, part or accessory thereof, manufactured, sold or represented for use in: the diagnosis, treatment, mitigation or prevention of a disease, disorder or abnormal physical state, or its symptoms, in a human being; the restoration, correction or modification of a body function or the body structure of a human being; the diagnosis of pregnancy in a human being; or the care of a human being during pregnancy and at and after the birth of a child, including the care of the child. It also includes a contraceptive device but does not include a drug." The term covers a wide range of health or medical instruments used in the treatment, mitigation, diagnosis or prevention of a disease or abnormal physical condition. Health Canada reviews medical devices to assess their safety, effectiveness, and quality before authorizing their sale in Canada. According to the Act, a medical device does not include any device that is intended for use in relation to animals.

Regulation and oversight

Risk classification

The regulatory authorities recognize different classes of medical devices based on their potential for harm if misused, design complexity, and use characteristics. Each country or region defines these categories in different ways. The authorities also recognize that some devices are provided in combination with drugs, and regulation of these combination products takes this factor into consideration. Classifying medical devices based on their risk is essential for maintaining patient and staff safety while simultaneously facilitating the marketing of medical products. Under these risk classifications, lower-risk devices (for example, a stethoscope or tongue depressor) are not required to undergo the same level of testing as higher-risk devices such as artificial pacemakers. Establishing a hierarchy of risk classification allows regulatory bodies to provide flexibility when reviewing medical devices.

Classification by region

United States
Under the Food, Drug, and Cosmetic Act, the U.S. Food and Drug Administration recognizes three classes of medical devices, based on the level of control necessary to assure safety and effectiveness:
Class I
Class II
Class III
The classification procedures are described in the Code of Federal Regulations, Title 21, part 860 (usually known as 21 CFR 860). Class I devices are subject to the least regulatory control; they are not intended to help support or sustain life or be substantially important in preventing impairment to human health, and may not present an unreasonable risk of illness or injury. Examples of Class I devices include elastic bandages, examination gloves, and hand-held surgical instruments. Class II devices are subject to special labeling requirements, mandatory performance standards, and postmarket surveillance. Examples of Class II devices include acupuncture needles, powered wheelchairs, infusion pumps, air purifiers, surgical drapes, stereotaxic navigation systems, and surgical robots. Class III devices are usually those that support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury, and they require premarket approval. Examples of Class III devices include implantable pacemakers, pulse generators, HIV diagnostic tests, automated external defibrillators, and endosseous implants.

European Union (EU) and European Free Trade Association (EFTA)

The classification of medical devices in the European Union is outlined in Article IX of Council Directive 93/42/EEC and Annex VIII of the EU medical device regulation. There are four classes, ranging from low risk to high risk: Classes I, IIa, IIb, and III (this excludes in vitro diagnostics, including software, which fall into four classes of their own, from A (lowest risk) to D (highest risk)):
Class I (including I sterile, I with measurement function, and Class I reusable surgical instruments)
Class IIa
Class IIb
Class III
The authorization of medical devices is guaranteed by a Declaration of Conformity. This declaration is issued by the manufacturer itself, but for products in Class Is, Im, Ir, IIa, IIb or III, it must be verified by a Certificate of Conformity issued by a Notified Body. A Notified Body is a public or private organisation that has been accredited to validate the compliance of the device with the European directive. Medical devices that pertain to Class I (on condition they do not require sterilization and do not measure a function) can be marketed purely by self-certification. The European classification depends on rules that involve the medical device's duration of body contact, invasive character, use of an energy source, effect on the central circulation or nervous system, diagnostic impact, or incorporation of a medicinal product. Certified medical devices should have the CE mark on the packaging, insert leaflets, etc. The packaging should also show harmonised pictograms and EN standardised logos to indicate essential features such as instructions for use, expiry date, manufacturer, sterility, and single use. In November 2018, the Federal Administrative Court of Switzerland decided that the "Sympto" app, used to analyze a woman's menstrual cycle, was a medical device because it calculates a fertility window for each woman using personal data. The manufacturer, Sympto-Therm Foundation, argued that this was a didactic, not a medical, process.
The court laid down that an app is a medical device if it is to be used for any of the medical purposes provided by law, and creates or modifies health information by calculations or comparison, providing information about an individual patient.

Japan

Medical devices (excluding in vitro diagnostics) in Japan are classified into four classes based on risk:
Class I
Class II
Class III
Class IV
Classes I and II distinguish between extremely low and low risk devices. Classes III and IV, moderate and high risk respectively, are highly and specially controlled medical devices. In vitro diagnostics have three risk classifications.

Rest of the world

For the remaining regions of the world, the risk classifications are generally similar to those of the United States, European Union, and Japan, or are a variant combining two or more of the three.

Australia

The classification of medical devices in Australia is outlined in section 41BD of the Therapeutic Goods Act 1989 and Regulation 3.2 of the Therapeutic Goods Regulations 2002, under the control of the Therapeutic Goods Administration. Similarly to the EU classification, devices are ranked in several categories, in order of increasing risk and the associated required level of control. Various rules identify the device's category.

Canada

The Medical Devices Bureau of Health Canada recognizes four classes of medical devices based on the level of control necessary to assure the safety and effectiveness of the device. Class I devices present the lowest potential risk and do not require a licence. Class II devices require the manufacturer's declaration of device safety and effectiveness, whereas Class III and IV devices present a greater potential risk and are subject to in-depth scrutiny. A guidance document for device classification is published by Health Canada. Canadian classes of medical devices correspond to the classes of European Council Directive 93/42/EEC (MDD):
Class I (Canada) generally corresponds to Class I (ECD)
Class II (Canada) generally corresponds to Class IIa (ECD)
Class III (Canada) generally corresponds to Class IIb (ECD)
Class IV (Canada) generally corresponds to Class III (ECD)
Examples include surgical instruments (Class I), contact lenses and ultrasound scanners (Class II), orthopedic implants and hemodialysis machines (Class III), and cardiac pacemakers (Class IV).

Iran

Iran produces about 2,000 types of medical devices and medical supplies, such as appliances, dental supplies, disposable sterile medical items, laboratory machines, various biomaterials, and dental implants. About 400 medical products are produced in risk classes C and D, all of them licensed by the Iranian Health Ministry for safety and performance based on EU standards. Some Iranian medical devices are produced to European Union standards, and some producers export devices and supplies that meet those standards to around 40 Asian and European countries.
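As a small aside on the Canada section above, the stated Canada-to-EU class correspondence can be expressed as a simple lookup table. The Python sketch below is purely illustrative; in practice, classification is determined by the applicable regulatory rules, not by a static mapping.

# Correspondence between Canadian device classes and EU MDD classes,
# as described above. Illustrative only.
CANADA_TO_EU_MDD = {
    "Class I": "Class I",
    "Class II": "Class IIa",
    "Class III": "Class IIb",
    "Class IV": "Class III",
}

def eu_equivalent(canadian_class):
    # Return the roughly corresponding EU MDD class for a Canadian class.
    return CANADA_TO_EU_MDD[canadian_class]

print(eu_equivalent("Class IV"))  # Class III (e.g., a cardiac pacemaker)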
The standard is the basis for regulatory compliance in local markets and most export markets. Additionally, ISO 9001:2008 sets precedence because it signifies that a company engages in the creation of new products. It requires that the development of manufactured products have an approval process and a set of rigorous quality standards and development records before the product is distributed. Further standards are IEC 60601-1, which applies to electrical devices (mains-powered as well as battery-powered), EN 45502-1, which applies to active implantable medical devices, and IEC 62304 for medical device software. The US FDA also published a series of guidances for industry regarding this topic against 21 CFR 820 Subchapter H—Medical Devices. Subpart B includes quality system requirements, an important component of which are design controls (21 CFR 820.30). To meet the demands of these industry regulation standards, a growing number of medical device distributors are putting the complaint management process at the forefront of their quality management practices. This approach further mitigates risks and increases the visibility of quality issues. Starting in the late 1980s, the FDA increased its involvement in reviewing the development of medical device software. The precipitant for change was a radiation therapy device (Therac-25) that overdosed patients because of software coding errors. The FDA is now focused on regulatory oversight of the medical device software development process and system-level testing. A 2011 study by Dr. Diana Zuckerman and Paul Brown of the National Center for Health Research, and Dr. Steven Nissen of the Cleveland Clinic, published in the Archives of Internal Medicine, showed that most medical devices recalled in the previous five years for "serious health problems or death" had been previously approved by the FDA using the less stringent, and cheaper, 510(k) process. In a few cases, the devices had been deemed so low-risk that they did not undergo any FDA regulatory review. Of the 113 devices recalled, 35 were for cardiovascular issues. This study was the topic of Congressional hearings re-evaluating FDA procedures and oversight. A 2014 study by Dr. Diana Zuckerman, Paul Brown, and Dr. Aditi Das of the National Center for Health Research, published in JAMA Internal Medicine, examined the scientific evidence that is publicly available about medical implants that were cleared by the FDA 510(k) process from 2008 to 2012. They found that scientific evidence supporting "substantial equivalence" to other devices already on the market was required by law to be publicly available, but the information was available for only 16% of the randomly selected implants, and only 10% provided clinical data. Of the more than 1,100 predicate implants that the new implants were substantially equivalent to, only 3% had any publicly available scientific evidence, and only 1% had clinical evidence of safety or effectiveness. The researchers concluded that publicly available scientific evidence on implants was needed to protect the public health. In 2014–2015, a new international agreement, the Medical Device Single Audit Program (MDSAP), was put in place with five participant countries: Australia, Brazil, Canada, Japan, and the United States. The aim of this program was to "develop a process that allows a single audit, or inspection to ensure the medical device regulatory requirements for all five countries are satisfied".
In 2017, a study by Dr. Jay Ronquillo and Dr. Diana Zuckerman, published in the peer-reviewed policy journal Milbank Quarterly, found that electronic health records and other device software were recalled due to life-threatening flaws. The article pointed out the lack of safeguards against hacking and other cybersecurity threats, stating that "current regulations are necessary but not sufficient for ensuring patient safety by identifying and eliminating dangerous defects in software currently on the market". They added that legislative changes resulting from the 21st Century Cures Act "will further deregulate health IT, reducing safeguards that facilitate the reporting and timely recall of flawed medical software that could harm patients". A study by Dr. Stephanie Fox-Rawlings and colleagues at the National Center for Health Research, published in 2018 in the policy journal Milbank Quarterly, investigated whether studies reviewed by the FDA for high-risk medical devices prove that the devices are safe and effective for women, minorities, or patients over 65 years of age. The law encourages patient diversity in clinical trials submitted to the FDA for review, but does not require it. The study determined that most high-risk medical devices are not tested and analyzed to ensure that they are safe and effective for all major demographic groups, particularly racial and ethnic minorities and people over 65. Therefore, they do not provide information about safety or effectiveness that would help patients and physicians make well-informed decisions. In 2018, an investigation involving journalists across 36 countries, coordinated by the International Consortium of Investigative Journalists (ICIJ), prompted calls for reform in the United States, particularly around the 510(k) substantial equivalence process; the investigation prompted similar calls in the UK and the European Union.

Packaging standards

Medical device packaging is highly regulated. Often medical devices and products are sterilized in the package. Sterility must be maintained throughout distribution to allow immediate use by physicians. A series of special packaging tests measure the ability of the package to maintain sterility. Relevant standards include:
ASTM F2097 – Standard Guide for Design and Evaluation of Primary Flexible Packaging for Medical Products
ASTM F2475-11 – Standard Guide for Biocompatibility Evaluation of Medical Device Packaging Materials
EN 868 – Packaging materials and systems for medical devices to be sterilized, general requirements and test methods
ISO 11607 – Packaging for terminally sterilized medical devices
Package testing is part of a quality management system including verification and validation. It is important to document and ensure that packages meet regulations and end-use requirements. Manufacturing processes must be controlled and validated to ensure consistent performance. EN ISO 15223-1 defines symbols that can be used to convey important information on packaging and labeling.

Biocompatibility standards

ISO 10993 – Biological Evaluation of Medical Devices

Cleanliness standards

Medical device cleanliness has come under greater scrutiny since 2000, when Sulzer Orthopedics recalled several thousand metal hip implants that contained a manufacturing residue. Based on this event, ASTM established a new task group (F04.15.17) to develop test methods, guidance documents, and other standards to address the cleanliness of medical devices. This task group has issued the following standards for permanent implants to date:
1. ASTM F2459: Standard Test Method for Extracting Residue from Metallic Medical Components and Quantifying via Gravimetric Analysis
2. ASTM F2847: Standard Practice for Reporting and Assessment of Residues on Single Use Implants
3. ASTM F3172: Standard Guide for Validating Cleaning Processes Used During the Manufacture of Medical Devices
In addition, concern over the cleanliness of reusable devices has led to a series of standards, including:
ASTM E2314: Standard Test Method for Determination of Effectiveness of Cleaning Processes for Reusable Medical Instruments Using a Microbiologic Method (Simulated Use Test)
ASTM D7225: Standard Guide for Blood Cleaning Efficiency of Detergents and Washer-Disinfectors
ASTM F3208: Standard Guide for Selecting Test Soils for Validation of Cleaning Methods for Reusable Medical Devices
The ASTM F04.15.17 task group is working on several new standards that involve designing implants for cleaning, selection and testing of brushes for cleaning reusable devices, and cleaning assessment of medical devices made by additive manufacturing. Additionally, the FDA is establishing new guidelines for reprocessing reusable medical devices, such as orthoscopic shavers, endoscopes, and suction tubes. New research on keeping medical tools pathogen-free has been published in ACS Applied Materials & Interfaces.

Safety standards

Design, prototyping, and product development

Medical device manufacturing requires a level of process control according to the classification of the device: the higher the risk, the more controls are required. In the initial R&D phase, manufacturers are now beginning to design for manufacturability. This means products can be more precisely engineered for production, resulting in shorter lead times, tighter tolerances, and more advanced specifications and prototypes. With the aid of CAD or modelling platforms, the work is now much faster, and these tools can also serve strategic design generation as well as marketing. Failure to meet cost targets will lead to substantial losses for an organisation. In addition, with global competition, the R&D of new devices is not just a necessity, it is an imperative for medical device manufacturers. The realisation of a new design can be very costly, especially with the shorter product life cycle. As technology advances, there is typically a level of quality, safety and reliability that increases exponentially with time. For example, initial models of the artificial cardiac pacemaker were external support devices that transmitted pulses of electricity to the heart muscles via electrode leads on the chest. The electrodes contacted the heart directly through the chest, allowing stimulation pulses to pass through the body. Recipients typically suffered infection at the entry point of the electrodes, which led to the subsequent trial of the first internal pacemaker, with electrodes attached to the myocardium by thoracotomy. Future developments led to the isotope power source that would last for the lifespan of the patient.

Software

Mobile medical applications

With the rise of smartphone usage in the medical space, the FDA issued guidance in 2013 to regulate mobile medical applications and protect users from their unintended use, soon followed by European and other regulatory agencies. This guidance distinguishes the apps subject to regulation based on their marketing claims.
Incorporation of the guidelines during the development phase of such apps can be considered as developing a medical device; the regulations have to adapt and propositions for expedite approval may be required due to the nature of 'versions' of mobile application development. On September 25, 2013 the FDA released a draft guidance document for regulation of mobile medical applications, to clarify what kind of mobile apps related to health would not be regulated, and which would be. Cybersecurity Medical devices such as pacemakers, insulin pumps, operating room monitors, defibrillators, and surgical instruments, including deep-brain stimulators, can incorporate the ability to transmit vital health information from a patient's body to medical professionals. Some of these devices can be remotely controlled. This has engendered concern about privacy and security issues, human error, and technical glitches with this technology. While only a few studies have looked at the susceptibility of medical devices to hacking, there is a risk. In 2008, computer scientists proved that pacemakers and defibrillators can be hacked wirelessly via radio hardware, an antenna, and a personal computer. These researchers showed they could shut down a combination heart defibrillator and pacemaker and reprogram it to deliver potentially lethal shocks or run out its battery. Jay Radcliff, a security researcher interested in the security of medical devices, raised fears about the safety of these devices. He shared his concerns at the Black Hat security conference. Radcliff fears that the devices are vulnerable and has found that a lethal attack is possible against those with insulin pumps and glucose monitors. Some medical device makers downplay the threat from such attacks and argue that the demonstrated attacks have been performed by skilled security researchers and are unlikely to occur in the real world. At the same time, other makers have asked software security experts to investigate the safety of their devices. As recently as June 2011, security experts showed that by using readily available hardware and a user manual, a scientist could both tap into the information on the system of a wireless insulin pump in combination with a glucose monitor. With the PIN of the device, the scientist could wirelessly control the dosage of the insulin. Anand Raghunathan, a researcher in this study, explains that medical devices are getting smaller and lighter so that they can be easily worn. The downside is that additional security features would put an extra strain on the battery and size and drive up prices. Dr. William Maisel offered some thoughts on the motivation to engage in this activity. Motivation to do this hacking might include acquisition of private information for financial gain or competitive advantage; damage to a device manufacturer's reputation; sabotage; intent to inflict financial or personal injury or just satisfaction for the attacker. Researchers suggest a few safeguards. One would be to use rolling codes. Another solution is to use a technology called "body-coupled communication" that uses the human skin as a wave guide for wireless communication. On 28 December 2016 the US Food and Drug Administration released its recommendations that are not legally enforceable for how medical device manufacturers should maintain the security of Internet-connected devices. Similar to hazards, cybersecurity threats and vulnerabilities cannot be eliminated entirely but must be managed and reduced to a reasonable level. 
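The rolling-code safeguard suggested above can be sketched in a few lines. The example below is only a conceptual illustration, not any device maker's actual protocol; the shared key, the 16-message acceptance window and the 8-character code length are arbitrary assumptions made for the sketch.

```python
import hmac, hashlib

def code_for(counter: int, key: bytes) -> str:
    """Derive a one-time command code from a shared key and a message counter."""
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).hexdigest()[:8]

class Receiver:
    """Accepts a code only if it matches one of the next `window` expected counters."""
    def __init__(self, key: bytes, window: int = 16):
        self.key, self.window, self.expected = key, window, 0

    def accept(self, code: str) -> bool:
        for c in range(self.expected, self.expected + self.window):
            if hmac.compare_digest(code, code_for(c, self.key)):
                self.expected = c + 1   # older codes, including replays, are now rejected
                return True
        return False

key = b"shared-secret"            # hypothetical key provisioned at manufacture
rx = Receiver(key)
command = code_for(0, key)        # transmitter derives the code for its current counter
print(rx.accept(command))         # True: counter is within the acceptance window
print(rx.accept(command))         # False: a captured transmission cannot be replayed
```

Because the receiver advances its counter on every accepted code, an intercepted command is useless the second time; the cost, as noted above, is extra computation and battery drain on devices that are already tightly size- and power-constrained.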
When designing medical devices, the tier of cybersecurity risk should be determined early in the process in order to establish a cybersecurity vulnerability and management approach (including a set of cybersecurity design controls). The medical device design approach employed should be consistent with the NIST Cybersecurity Framework for managing cybersecurity-related risks. In August 2013, the FDA released over 20 regulations aiming to improve the security of data in medical devices, in response to the growing risks of limited cybersecurity. Artificial intelligence The number of approved medical devices using artificial intelligence or machine learning (AI/ML) is increasing. As of 2020, there were several hundred AI/ML medical devices approved by the US FDA or CE-marked devices in Europe. Most AI/ML devices focus upon radiology. As of 2020, there was no specific regulatory pathway for AI/ML-based medical devices in the US or Europe. However, in January 2021, the FDA published a proposed regulatory framework for AI/ML-based software, and the EU medical device regulation which replaces the EU Medical Device Directive in May 2021, defines regulatory requirements for medical devices, including AI/ML software. Medical equipment Medical equipment (also known as armamentarium) is designed to aid in the diagnosis, monitoring or treatment of medical conditions. Types There are several basic types: Diagnostic equipment includes medical imaging machines, used to aid in diagnosis. Examples are ultrasound and MRI machines, PET and CT scanners, and x-ray machines. Treatment equipment includes infusion pumps, medical lasers and LASIK surgical machines. Life support equipment is used to maintain a patient's bodily function. This includes medical ventilators, incubators, anaesthetic machines, heart-lung machines, ECMO, and dialysis machines. Medical monitors allow medical staff to measure a patient's medical state. Monitors may measure patient vital signs and other parameters including ECG, EEG, and blood pressure. Medical laboratory equipment automates or helps analyze blood, urine, genes, and dissolved gases in the blood. Diagnostic medical equipment may also be used in the home for certain purposes, e.g. for the control of diabetes mellitus Therapeutic: physical therapy machines like continuous passive range of motion (CPM) machines The identification of medical devices has been recently improved by the introduction of Unique Device Identification (UDI) and standardised naming using the Global Medical Device Nomenclature (GMDN) which have been endorsed by the International Medical Device Regulatory Forum (IMDRF). A biomedical equipment technician (BMET) is a vital component of the healthcare delivery system. Employed primarily by hospitals, BMETs are the people responsible for maintaining a facility's medical equipment. BMET mainly act as an interface between doctor and equipment. Medical equipment donation There are challenges surrounding the availability of medical equipment from a global health perspective, with low-resource countries unable to obtain or afford essential and life-saving equipment. In these settings, well-intentioned equipment donation from high- to low-resource settings is a frequently used strategy to address this through individuals, organisations, manufacturers and charities. 
However, issues with maintenance, availability of biomedical equipment technicians (BMET), supply chains, user education and the appropriateness of donations means these frequently fail to deliver the intended benefits. The WHO estimates that 95% of medical equipment in low- and middle-income countries (LMICs) is imported and 80% of it is funded by international donors or foreign governments. While up to 70% of medical equipment in sub-Saharan Africa is donated, only 10%–30% of donated equipment becomes operational. A review of current practice and guidelines for the donation of medical equipment for surgical and anaesthesia care in LMICs has demonstrated a high level of complexity within the donation process and numerous shortcomings. Greater collaboration and planning between donors and recipients is required together with evaluation of donation programs and concerted advocacy to educate donors and recipients on existing equipment donation guidelines and policies Academic resources Medical & Biological Engineering & Computing journal Expert Review of Medical Devices journal Journal of Clinical Engineering University-based research packaging institutes University of Minnesota - Medical Devices Center (MDC) University of Strathclyde - Strathclyde Institute of Medical Devices (SIMD) Flinders University - Medical Device Research Institute (MDRI) Michigan State University - School of Packaging (SoP) IIT Bombay - Biomedical Engineering and Technology (incubation) Centre (BETiC) References
60423388
https://en.wikipedia.org/wiki/2018%E2%80%9319%20Little%20Rock%20Trojans%20women%27s%20basketball%20team
2018–19 Little Rock Trojans women's basketball team
The 2018–19 Little Rock Trojans women's basketball team represented the University of Arkansas at Little Rock during the 2018–19 NCAA Division I women's basketball season. The Trojans, led by sixteenth-year head coach Joe Foley, played their home games at the Jack Stephens Center and were members of the Sun Belt Conference. They finished the season 21–11, 15–3 in Sun Belt play, to win a share of the Sun Belt regular season title with Texas–Arlington, and won the Sun Belt tournament title to earn an automatic trip to the NCAA Women's Tournament, where they lost in the first round to Gonzaga. Roster Schedule Non-conference regular season Sun Belt Conference regular season Sun Belt Women's Tournament NCAA Women's Tournament Rankings 2018–19 NCAA Division I women's basketball rankings See also 2018–19 Little Rock Trojans men's basketball team References Little Rock Trojans women's basketball seasons Little Rock Little Rock
19649
https://en.wikipedia.org/wiki/MVS
MVS
Multiple Virtual Storage, more commonly called MVS, was the most commonly used operating system on the System/370 and System/390 IBM mainframe computers. IBM developed MVS, along with OS/VS1 and SVS, as a successor to OS/360. It is unrelated to IBM's other mainframe operating system lines, e.g., VSE, VM, TPF. Overview First released in 1974, MVS was extended by program products with new names multiple times: first to MVS/SE (MVS/System Extensions), next to MVS/SP (MVS/System Product) Version 1, next to MVS/XA (MVS/eXtended Architecture), next to MVS/ESA (MVS/Enterprise Systems Architecture), then to OS/390 and finally to z/OS (when 64-bit support was added with the zSeries models). IBM added UNIX support (originally called OpenEdition MVS) in MVS/SP V4.3 and has obtained POSIX and UNIX™ certifications at several different levels from IEEE, X/Open and The Open Group. The MVS core remains fundamentally the same operating system. By design, programs written for MVS run on z/OS without modification. At first IBM described MVS as simply a new release of OS/VS2, but it was, in fact a major rewrite. OS/VS2 release 1 was an upgrade of OS/360 MVT that retained most of the original code and, like MVT, was mainly written in assembly language. The MVS core was almost entirely written in Assembler XF, although a few modules were written in PL/S, but not the performance-sensitive ones, in particular not the Input/Output Supervisor (IOS). IBM's use of "OS/VS2" emphasized upwards compatibility: application programs that ran under MVT did not even need recompiling to run under MVS. The same Job Control Language files could be used unchanged; utilities and other non-core facilities like TSO ran unchanged. IBM and users almost unanimously called the new system MVS from the start, and IBM continued to use the term MVS in the naming of later major versions such as MVS/XA. Evolution of MVS OS/360 MFT (Multitasking with a Fixed number of Tasks) provided multitasking: several memory partitions, each of a fixed size, were set up when the operating system was installed and when the operator redefined them. For example, there could be a small partition, two medium partitions, and a large partition. If there were two large programs ready to run, one would have to wait until the other finished and vacated the large partition. OS/360 MVT (Multitasking with a Variable number of Tasks) was an enhancement that further refined memory use. Instead of using fixed-size memory partitions, MVT allocated memory to regions for job steps as needed, provided enough contiguous physical memory was available. This was a significant advance over MFT's memory management, but had some weaknesses: if a job allocated memory dynamically (as most sort programs and database management systems do), the programmers had to estimate the job's maximum memory requirement and pre-define it for MVT. A job step that contained a mix of small and large programs wasted memory while the small programs ran. Most seriously, memory could become fragmented, i.e., the memory not used by current jobs could be divided into uselessly small chunks between the areas used by current jobs, and the only remedy was to wait until some current jobs finished before starting any new ones. In the early 1970s IBM sought to mitigate these difficulties by introducing virtual memory (which IBM called "virtual storage"), which allowed programs to request address spaces larger than physical memory. 
The original implementations had a single virtual address space, shared by all jobs. OS/VS1 was OS/360 MFT within a single virtual address space; OS/VS2 SVS was OS/360 MVT within a single virtual address space. So OS/VS1 and SVS in principle had the same disadvantages as MFT and MVT, but the impacts were less severe because jobs could request much larger address spaces and the requests came out of a 16 MB pool even if physical storage was smaller. In the mid-1970s IBM introduced MVS, which not only supported virtual storage that was larger than the available real storage, as did SVS, but also allowed an indefinite number of applications to run in different address spaces. Two concurrent programs might try to access the same virtual memory address, but the virtual memory system redirected these requests to different areas of physical memory. Each of these address spaces consisted of three areas: an operating system (one instance shared by all jobs), an application area unique for each application, and a shared virtual area used for various purposes, including inter-job communication. IBM promised that application areas would always be at least 8 MB. This made MVS the perfect solution for business problems that resulted from the need to run more applications. MVS maximized processing potential by providing multiprogramming and multiprocessing capabilities. Like its MVT and OS/VS2 SVS predecessors, MVS supported multiprogramming; program instructions and associated data are scheduled by a control program and given processing cycles. Unlike a single-programming operating system, these systems maximize the use of the processing potential by dividing processing cycles among the instructions associated with several different concurrently running programs. This way, the control program does not have to wait for the I/O operation to complete before proceeding. By executing the instructions for multiple programs, the computer is able to switch back and forth between active and inactive programs. Early editions of MVS (mid-1970s) were among the first of the IBM OS series to support multiprocessor configurations, though the M65MP variant of OS/360 running on 360 Models 65 and 67 had provided limited multiprocessor support. The 360 Model 67 had also hosted the multiprocessor capable TSS/360, MTS and CP-67 operating systems. Because multiprocessing systems can execute instructions simultaneously, they offer greater processing power than single-processing system. As a result, MVS was able to address the business problems brought on by the need to process large amounts of data. Multiprocessing systems are either loosely coupled, which means that each computer has access to a common workload, or tightly coupled, which means that the computers share the same real storage and are controlled by a single copy of the operating system. MVS retained both the loosely coupled multiprocessing of Attached Support Processor (ASP) and the tightly coupled multiprocessing of OS/360 Model 65 Multiprocessing. In tightly coupled systems, two CPUs shared concurrent access to the same memory (and copy of the operating system) and peripherals, providing greater processing power and a degree of graceful degradation if one CPU failed. In loosely coupled configurations each of a group of processors (single and / or tightly coupled) had its own memory and operating system but shared peripherals and the operating system component JES3 allowed managing the whole group from one console. 
This provided greater resilience and let operators decide which processor should run which jobs from a central job queue. MVS JES3 gave users the opportunity to network together two or more data processing systems via shared disks and Channel-to-Channel Adapters (CTCA's). This capability eventually became available to JES2 users as Multi-Access SPOOL (MAS). MVS originally supported 24-bit addressing (i.e., up to 16 MB). As the underlying hardware progressed, it supported 31-bit (XA and ESA; up to 2048 MB) and now (as z/OS) 64-bit addressing. The most significant motives for the rapid upgrade to 31-bit addressing were the growth of large transaction-processing networks, mostly controlled by CICS, which ran in a single address space—and the DB2 relational database management system needed more than 8 MB of application address space to run efficiently. (Early versions were configured into two address spaces that communicated via the shared virtual area, but this imposed a significant overhead since all such communications had transmit via the operating system.) The main user interfaces to MVS are: Job Control Language (JCL), which was originally designed for batch processing but from the 1970s onwards was also used to start and allocate resources to long-running interactive jobs such as CICS; and TSO (Time Sharing Option), the interactive time-sharing interface, which was mainly used to run development tools and a few end-user information systems. ISPF is a TSO application for users on 3270-family terminals (and later, on VM as well), which allows the user to accomplish the same tasks as TSO's command line but in a menu and form oriented manner, and with a full screen editor and file browser. TSO's basic interface is command line, although facilities were added later for form-driven interfaces. MVS took a major step forward in fault-tolerance, built on the earlier STAE facility, that IBM called software recovery. IBM decided to do this after years of practical real-world experience with MVT in the business world. System failures were now having major impacts on customer businesses, and IBM decided to take a major design jump, to assume that despite the very best software development and testing techniques, that 'problems WILL occur.' This profound assumption was pivotal in adding great percentages of fault-tolerance code to the system and likely contributed to the system's success in tolerating software and hardware failures. Statistical information is hard to come by to prove the value of these design features (how can you measure 'prevented' or 'recovered' problems?), but IBM has, in many dimensions, enhanced these fault-tolerant software recovery and rapid problem resolution features, over time. This design specified a hierarchy of error-handling programs, in system (kernel/'privileged') mode, called Functional Recovery Routines, and in user ('task' or 'problem program') mode, called "ESTAE" (Extended Specified Task Abnormal Exit routines) that were invoked in case the system detected an error (hardware processor or storage error, or software error). Each recovery routine made the 'mainline' function reinvokable, captured error diagnostic data sufficient to debug the causing problem, and either 'retried' (reinvoke the mainline) or 'percolated' (escalated error processing to the next recovery routine in the hierarchy). Thus, with each error the system captured diagnostic data, and attempted to perform a repair and keep the system up. 
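The retry-or-percolate behaviour described above can be pictured with ordinary exception handling. The sketch below is only a loose analogy in Python, not MVS's actual FRR/ESTAE programming interfaces; the routine names and the "retry"/"percolate" return values are invented for the illustration.

```python
def run_with_recovery(mainline, recovery_routines):
    """Run `mainline`; on an error, walk the recovery hierarchy from the innermost
    routine outwards. Each routine captures diagnostic data and answers either
    "retry" (reinvoke the mainline) or "percolate" (pass the error up)."""
    try:
        return mainline()
    except Exception as exc:
        error = exc
    for routine in recovery_routines:
        diagnostics = {"routine": routine.__name__, "error": repr(error)}
        if routine(diagnostics) == "retry":
            try:
                return mainline()          # the mainline was written to be reinvokable
            except Exception as exc:
                error = exc                # the retry failed; keep percolating
    raise SystemExit(f"unrecovered error, abending the job: {error!r}")

def task_level_routine(diag):              # plays the role of an ESTAE-style routine
    print("captured diagnostics:", diag)
    return "retry"

def system_level_routine(diag):            # plays the role of an FRR-style routine
    print("captured diagnostics:", diag)
    return "percolate"

attempts = {"count": 0}
def flaky_mainline():
    attempts["count"] += 1
    if attempts["count"] == 1:
        raise RuntimeError("transient failure")
    return "completed"

print(run_with_recovery(flaky_mainline, [task_level_routine, system_level_routine]))
```

Each handler records its diagnostics before deciding, so data is captured even when the error is repaired; only if every routine percolates does the failure escape the hierarchy, which corresponds to the worst case discussed next.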
The worst thing possible was to take down a user address space (a 'job') in the case of unrepaired errors. Though it was an initial design point, it was not until the most recent MVS version (z/OS), that recovery program was not only guaranteed its own recovery routine, but each recovery routine now has its own recovery routine. This recovery structure was embedded in the basic MVS control program, and programming facilities are available and used by application program developers and 3rd party developers. Practically, the MVS software recovery made problem debugging both easier and more difficult. Software recovery requires that programs leave 'tracks' of where they are and what they are doing, thus facilitating debugging—but the fact that processing progresses despite an error can overwrite the tracks. Early data capture at the time of the error maximizes debugging, and facilities exist for the recovery routines (task and system mode, both) to do this. IBM included additional criteria for a major software problem that required IBM service. If a mainline component failed to initiate software recovery, that was considered a valid reportable failure. Also, if a recovery routine failed to collect significant diagnostic data such that the original problem was solvable by data collected by that recovery routine, IBM standards dictated that this fault was reportable and required repair. Thus, IBM standards, when rigorously applied, encouraged continuous improvement. IBM continued to support the major serviceability tool Dynamic Support System (DSS) that it had introduced in OS/VS1 and OS/VS2 Release 1. This interactive facility could be invoked to initiate a session to create diagnostic procedures, or invoke already-stored procedures. The procedures trapped special events, such as the loading of a program, device I/O, system procedure calls, and then triggered the activation of the previously defined procedures. These procedures, which could be invoked recursively, allowed for reading and writing of data, and alteration of instruction flow. Program Event Recording hardware was used. IBM dropped support for DSS with Selectable Unit 7 (SU7), an update to OS/VS2 Release 3.7 required by the program product OS/VS2 MVS/System Extensions (MVS/SE), Program Number 5740-XEl. The User group SHARE passed a requirement that IBM reinstate DSS, and IBM provided a PTF to allow use of DSS after MVS/SE was installed. IBM again dropped support for DSS with SU64, an update to OS/VS2 Release 3.8 required by Release 2 of MVS/SE. Program-Event Recording (PER) exploitation was performed by the enhancement of the diagnostic SLIP command with the introduction of the PER support (SLIP/Per) in SU 64/65 (1978). Multiple copies of MVS (or other IBM operating systems) could share the same machine if that machine was controlled by VM/370. In this case VM/370 was the real operating system, and regarded the "guest" operating systems as applications with unusually high privileges. As a result of later hardware enhancements one instance of an operating system (either MVS, or VM with guests, or other) could also occupy a Logical Partition (LPAR) instead of an entire physical system. Multiple MVS instances can be organized and collectively administered in a structure called a systems complex or sysplex, introduced in September, 1990. 
Instances interoperate through a software component called a Cross-system Coupling Facility (XCF) and a hardware component called a Hardware Coupling Facility (CF or Integrated Coupling Facility, ICF, if co-located on the same mainframe hardware). Multiple sysplexes can be joined via standard network protocols such as IBM's proprietary Systems Network Architecture (SNA) or, more recently, via TCP/IP. The z/OS operating system (MVS' most recent descendant) also has native support to execute POSIX and Single UNIX Specification applications. The support began with MVS/SP V4R3, and IBM has obtained UNIX 95 certification for z/OS V1R2 and later. The system is typically used in business and banking, and applications are often written in COBOL. COBOL programs were traditionally used with transaction processing systems like IMS and CICS. For a program running in CICS, special EXEC CICS statements are inserted in the COBOL source code. A preprocessor (translator) replaces those EXEC CICS statements with the appropriate COBOL code to call CICS before the program is compiled — not altogether unlike SQL used to call DB2. Applications can also be written in other languages such as C, C++, Java, assembly language, FORTRAN, BASIC, RPG, and REXX. Language support is packaged as a common component called "Language Environment" or "LE" to allow uniform debugging, tracing, profiling, and other language independent functions. MVS systems are traditionally accessed by 3270 terminals or by PCs running 3270 emulators. However, many mainframe applications these days have custom web or GUI interfaces. The z/OS operating system has built-in support for TCP/IP. System management, done in the past with a 3270 terminal, is now done through the Hardware Management Console (HMC) and, increasingly, Web interfaces. Operator consoles are provided through 2074 emulators, so you are unlikely to see any S/390 or zSeries processor with a real 3270 connected to it. The native character encoding scheme of MVS and its peripherals is EBCDIC, but the TR instruction made it easy to translate to other 7- and 8-bit codes. Over time IBM added hardware-accelerated services to perform translation to and between larger codes, hardware-specific service for Unicode transforms and software support of, e.g., ASCII, ISO/IEC 8859, UTF-8, UTF-16, and UTF-32. The software translation services take source and destination code pages as inputs. MVS filesystem Files, other than Unix files, are properly called data sets in MVS. Names of those files are organized in catalogs that are VSAM files themselves. Data set names (DSNs, mainframe term for filenames) are organized in a hierarchy whose levels are separated with dots, e.g. "DEPT01.SYSTEM01.FILE01". Each level in the hierarchy can be up to eight characters long. The total filename length is a maximum of 44 characters including dots. By convention, the components separated by the dots are used to organize files similarly to directories in other operating systems. For example, there were utility programs that performed similar functions to those of Windows Explorer (but without the GUI and usually in batch processing mode) - adding, renaming or deleting new elements and reporting all the contents of a specified element. However, unlike in many other systems, these levels are not usually actual directories but just a naming convention (like the original Macintosh File System, where folder hierarchy was an illusion maintained by the Finder). 
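The naming rules just described are easy to make concrete with a small validator. The sketch below checks only the constraints stated above — dot-separated qualifiers of at most eight characters and a 44-character total — plus the conventional rule that a qualifier begins with a letter or national character (@ # $), which is an extra assumption here; real catalog processing enforces further restrictions.

```python
import re

# One qualifier: a letter or national character (@ # $), then up to seven more
# letters, digits, national characters or hyphens. This is a simplification.
QUALIFIER = re.compile(r"^[A-Z@#$][A-Z0-9@#$-]{0,7}$")

def is_valid_dsn(name: str) -> bool:
    """Rough check of an MVS data set name against the rules described above."""
    name = name.upper()
    if not name or len(name) > 44:          # total length includes the dots
        return False
    return all(QUALIFIER.match(q) for q in name.split("."))

print(is_valid_dsn("DEPT01.SYSTEM01.FILE01"))        # True
print(is_valid_dsn("DEPT01.THISQUALIFIERISTOOLONG")) # False: qualifier longer than 8
```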
TSO supports a default prefix for files (similar to a "current directory" concept), and RACF supports setting up access controls based on filename patterns, analogous to access controls on directories on other platforms. As with other members of the OS family, MVS' data sets were record-oriented. MVS inherited three main types from its predecessors: Sequential data sets were normally read one record at a time from beginning to end. In BDAM (direct access) data sets, the application program had to specify the physical location of the data it wanted to access (usually by specifying the offset from the start of the data set). In ISAM data sets a specified section of each record was defined as a key that could be used as a key to look up specific records. The key quite often consisted of multiple fields but these had to be contiguous and in the right order; and key values had to be unique. Hence an IBM ISAM file could have only one key, equivalent to the primary key of a relational database table; ISAM could not support foreign keys. Sequential and ISAM datasets could store either fixed-length or variable length records, and all types could occupy more than one disk volume. All of these are based on the VTOC disk structure. Early IBM database management systems used various combinations of ISAM and BDAM datasets - usually BDAM for the actual data storage and ISAM for indexes. In the early 1970s IBM's virtual memory operating systems introduced a new file management component, VSAM, which provided similar facilities: Entry-Sequenced Datasets (ESDS) provided facilities similar to those of both sequential and BDAM datasets, since they could be read either from start to finish or directly by specifying an offset from the start. Key-Sequenced Datasets (KSDS) were a major upgrade from ISAM: they allowed secondary keys with non-unique values and keys formed by concatenating non-contiguous fields in any order; they greatly reduced the performance problems caused by overflow records in ISAM; and they greatly reduced the risk that a software or hardware failure in the middle of an index update might corrupt the index. These VSAM formats became the basis of IBM's database management systems, IMS/VS and DB2 - usually ESDS for the actual data storage and KSDS for indexes. VSAM also included a catalog component used for user catalogs and MVS' master catalog. Partitioned data sets (PDS) were sequential data sets subdivided into "members" that could each be processed as sequential files in their own right (like a folder in a hierarchical file system). The most important use of PDSes was for program libraries - system administrators used the main PDS as a way to allocate disk space to a project and the project team then created and edited the members. Other uses of PDSs were libraries of frequently used job control procedures (PROCs), and "copy books" of programming language statements such as record definitions used by several programs. Generation Data Groups (GDGs) are groups of like named data sets, which can be referenced by absolute generation number, or by an offset from the most recent generation. They were originally designed to support grandfather-father-son backup procedures - if a file was modified, the changed version became the new "son", the previous "son" became the "father", the previous "father" became the "grandfather" and the previous "grandfather" was deleted. 
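The grandfather-father-son rotation can be illustrated with a toy model of a GDG. Everything below is illustrative only: the base name is invented, and the GnnnnV00 suffix simply mimics the conventional naming of cataloged absolute generations; it is not how a real catalog is implemented.

```python
from collections import deque

class GenerationDataGroup:
    """Toy GDG that keeps at most `limit` generations (three gives the
    grandfather-father-son scheme described above)."""
    def __init__(self, base: str, limit: int = 3):
        self.base, self.limit = base, limit
        self.generations = deque()          # absolute generation numbers, oldest first
        self.next_abs = 1

    def add_generation(self) -> str:
        self.generations.append(self.next_abs)
        self.next_abs += 1
        if len(self.generations) > self.limit:
            self.generations.popleft()      # the old "grandfather" is deleted
        return self.resolve(0)

    def resolve(self, relative: int = 0) -> str:
        """Relative 0 is the newest generation ("son"), -1 the "father", and so on."""
        absolute = self.generations[relative - 1]
        return f"{self.base}.G{absolute:04d}V00"

gdg = GenerationDataGroup("DEPT01.DAILY.REPORT")     # hypothetical base name
for _ in range(5):
    gdg.add_generation()
print(gdg.resolve(0), gdg.resolve(-1), gdg.resolve(-2))
# DEPT01.DAILY.REPORT.G0005V00 DEPT01.DAILY.REPORT.G0004V00 DEPT01.DAILY.REPORT.G0003V00
```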
But one could set up GDGs with more than 3 generations and some applications used GDGs to collect data from several sources and feed the information to one program - each collecting program created a new generation of the file and the final program read the whole group as a single sequential file (by not specifying a generation in the JCL). Modern versions of MVS (e.g., z/OS) use datasets as containers for Unix filesystems along with facilities for partially integrating them. That is, Unix programs using fopen() can access an MVS dataset and a user can allocate a Unix file as though it were a dataset, with some restrictions. The Hierarchical File System (HFS) (not to be confused with Apple's Hierarchical File System) uses a unique type of dataset, while the newer z/OS File System (zFS) (not to be confused with Sun's ZFS) uses a VSAM Linear Data Set (LDS). Programs running on network-connected computers (such as the IBM AS/400) can use local data management interfaces to transparently create, manage, and access VSAM record-oriented files by using client-server products implemented according to Distributed Data Management Architecture (DDM). DDM is also the base architecture for the MVS DB2 server that implements Distributed Relational Database Architecture (DRDA). Upgrades to MVS In addition to new functionality that IBM added with releases and sub-releases of OS/VS2, IBM provided a number of free Incremental Change Releases (ICRs) and Selectable Units (SUs) and chargeable program products and field developed programs that IBM eventually bundled as part of z/OS. These include: ACF/TCAM (5735-RCl) ACF/VTAM (5746-RC3, 5735-RC2) Data Facility/Device Support (DF/DS), 5740-AM7 Data Facility Extended Function (DF/EF), 5740-XYQ Data Facility/Data Set Services (DF/DSS), 5740-UT3. 
Data Facility Sort, 5740-SM1 OS/VS2 MVS Sequential Access Method-Extended (SAM-E), 5740-AM3 MVS/370 Data Facility Product (DFP), 5665-295, replacing 5740-AM7 Data Facility Device Support (DFDS) 5740-XYQ Data Facility Extended Function (DFEF) 5740-AM3 Sequential Access Method Extended (SAM-E) 5740-AM8 Access Method Services Cryptographic Option 5748-UT2 Offline 3800 Utility MVS/XA Data Facility Product Version 1 Release 1, 5665-284 MVS/XA Data Facility Product Version 2 Release 1, 5665-XA2 MVS/ESA Data Facility Product Version 3, 5665-XA3 Data Facility Storage Management Subsystem (DFSMS), 5695-DF1Replaces DFP, DF/DSS and DF/HSM OS/VS2 MVS TSO Command Package (5740-XT6) TSO Command Processor - FDP 5798-AYF (PRINT command) TSO/VS2 Programming Control Facility - FDP 5798-BBJ TSO Programming Control Facility - II (PCF II), FDP 5798-CLW, TSO ExtensionsReplaces TSO Command Package, TSO Command Processor and PCF 5665-285 for MVS/370 5665-293 for MVS/XA 5685-025 for MVS/XAFirst version with REXX OS/VS2 MVS/System Extensions, 5740-XEl MVS/System Product JES3 Version 1 5740-XYN JES2 Version 1 5740-XYS MVS/System Product-JES2 Version 2, 5740-XC6 MVS/System Product-JES3 Version 2, 5665-291 MVS/System Product-JES2 Version 3, 5685-001 MVS/System Product-JES3 Version 3, 5685-002 MVS/ESA System Product: JES2 Version 4, 5695-047 MVS/ESA System Product: JES3 Version 4, 5695-048 MVS/ESA System Product: JES2 Version 5, 5655-068 MVS/ESA System Product: JES3 Version 5, 5655-069 Data Facility Product (DFP) In the late seventies and early eighties IBM announced: 5740-AM7 Data Facility Device Support (DF/DS) 5740-XYQ Data Facility Extended Function (DF/EF) 5740-AM3 Sequential Access Method Extended (SAM-E) 5740-AM8 Access Method Services Cryptographic Option 5748-UT2 Offline 3800 Utility DF/DS added new device support, and IBM announced that it would no longer add device support to the free base. DF/EF added the Improved Catalog Structure (ICF) as an alternative to VSAM catalogs and Control Volumes (CVOLs), but it was riddled with reliability problems. When IBM announced MVS/SP Version 2 (MVS/XA), it also announced Data Facility Product™ (DFP™) as a replacement for and upgrade to the other five products above, which it said would be withdrawn from marketing, effective December 1, 1984. DFP/370 Release 1 (program number 5665-295), announced June 7, 1983, was for MVS/SP Version 1, MVS/SE and OS/VS2 R3.8, and was optional, but MVS/Extended Architecture Data Facility Product (5665-284) was a corequisite for MVS/SP Version 2 (MVS/XA). In addition to enhancing data management facilities, DFP replaced free versions of the linkage editor and utilities. Modern MVS MVS has now evolved into z/OS; older MVS releases are no longer supported by IBM and, since 2007, only 64-bit z/OS releases are supported. z/OS supports running older 24-bit and 31-bit MVS applications alongside newer 64-bit applications. MVS releases up to 3.8j (24-bit, released in 1981) were freely available and it is now possible to run the MVS 3.8j release in mainframe emulators for free. MVS/370 MVS/370 is a generic term for all versions of the MVS operating system prior to MVS/XA. The System/370 architecture, at the time MVS was released, supported only 24-bit virtual addresses, so the MVS/370 operating system architecture is based on a 24-bit address. Because of this 24-bit address length, programs running under MVS/370 are each given 16 MB of contiguous virtual storage. 
MVS/XA MVS/XA, or Multiple Virtual Storage/Extended Architecture, was a version of MVS that supported the 370-XA architecture, which had a new I/O architecture and also expanded addresses from 24 bits to 31 bits, providing a 2 gigabyte addressable memory area. MVS/XA supported a 24-bit legacy addressing mode for older 24-bit applications (i.e. those that stored a 24-bit address in the lower 24 bits of a 32-bit word and utilized the upper 8 bits of that word for other purposes). MVS/ESA MVS/ESA: MVS Enterprise System Architecture. Versions of MVS, introduced as MVS/SP Version 3 in February 1988, then MVS/ESA SP Version 4 and MVS/ESA SP Version 5. Replaced by as OS/390 late 1995 and subsequently by z/OS. MVS/ESA OpenEdition: upgrade to Version 4 Release 3 of MVS/ESA announced February 1993 with support for POSIX and other standards. While the initial release only had National Institute of Standards and Technology (NIST) certification for Federal Information Processing Standard (FIPS) 151 compliance, subsequent releases were certified at higher levels and by other organizations, e.g. X/Open and its successor, The Open Group. It included about 1 million new lines of code, which provide an API shell, utilities, and an extended user interface. Works with a hierarchical file system provided by DFSMS (Data Facility System Managed Storage). The shell and utilities are based on Mortice Kerns' InterOpen products. Independent specialists estimate that it was over 80% open systems-compliant—more than most Unix systems. DCE2 support announced February 1994, and many application development tools in March 1995. From mid 1995, as all of the open features became a standard part of vanilla MVS/ESA SP Version 5 Release 1, IBM stopped distinguishing OpenEdition from the operating system. Under OS/390 V2R6 it became UNIX System Services, and has kept that name under z/OS. OS/390 In late 1995 IBM bundled MVS with several program products and changed the name from MVS/ESA to OS/390. z/OS The current level of MVS is marketed as z/OS. Closely related operating systems Japanese mainframe manufacturers Fujitsu and Hitachi both repeatedly and illegally obtained IBM's MVS source code and internal documentation in one of the 20th century's most famous cases of industrial espionage. Fujitsu relied heavily on IBM's code in its MSP mainframe operating system, and likewise Hitachi did the same for its VOS3 operating system. MSP and VOS3 were heavily marketed in Japan, where they still hold a substantial share of the mainframe installed base, but also to some degree in other countries, notably Australia. Even IBM's bugs and documentation misspellings were faithfully copied. IBM cooperated with the U.S. Federal Bureau of Investigation in a sting operation, reluctantly supplying Fujitsu and Hitachi with proprietary MVS and mainframe hardware technologies during the course of multi-year investigations culminating in the early 1980s—investigations which implicated senior company managers and even some Japanese government officials. Amdahl, however, was not involved in Fujitsu's theft of IBM's intellectual property. Any communications from Amdahl to Fujitsu were through "Amdahl Only Specifications" which were scrupulously cleansed of any IBM IP or any references to IBM's IP. Subsequent to the investigations, IBM reached multimillion-dollar settlements with both Fujitsu and Hitachi, collecting substantial fractions of both companies' profits for many years. Reliable reports indicate that the settlements exceeded US$500,000,000. 
The three companies have long since amicably agreed to many joint business ventures. For example, in 2000 IBM and Hitachi collaborated on developing the IBM z900 mainframe model. Because of this historical copying, MSP and VOS3 are properly classified as "forks" of MVS, and many third-party software vendors with MVS-compatible products were able to produce MSP- and VOS3-compatible versions with little or no modification. When IBM introduced its 64-bit z/Architecture mainframes in the year 2000, IBM also introduced the 64-bit z/OS operating system, the direct successor to OS/390 and MVS. Fujitsu and Hitachi opted not to license IBM's z/Architecture for their quasi-MVS operating systems and hardware systems, and so MSP and VOS3, while still nominally supported by their vendors, maintain most of MVS's 1980s architectural limitations to the present day. Since z/OS still supports MVS-era applications and technologies— z/OS still contains most of MVS's code, albeit greatly enhanced and improved over decades of evolution—applications (and operational procedures) running on MSP and VOS3 can move to z/OS much more easily than to other operating systems. See also Hercules a S/370, S/390, and zSeries emulator capable of running MVS Utility programs supplied with MVS (and successor) operating systems BatchPipes is a batch job processing utility designed for the MVS/ESA operating system, and all later incarnations—OS/390 and z/OS. Notes References Bob DuCharme: "The Operating Systems Handbook, Part 06: MVS" (available online here) External links IBM: z/OS V1R11.0 MVS Manuals IBM: z/OS V1R8.0 MVS manuals MVS: the operating system that keeps the world going MVS... a long history Functional structure of IBM virtual storage operating systems Part II: OS/VS2-2 concepts and philosophies by A. L. Scherr IBM mainframe operating systems 1974 software
48627412
https://en.wikipedia.org/wiki/Wikipedia%20logo
Wikipedia logo
The logo of the free online encyclopedia Wikipedia is an unfinished globe constructed from jigsaw pieces—some pieces are missing at the top—each inscribed with a glyph from a different writing system. As displayed on the web pages of the English-language edition of the project, there is the wordmark "WA" under the globe, and below that the text "The Free Encyclopedia" in the free open-source Linux Libertine font. Puzzle-globe design Each piece bears a glyph (a letter or other character), or glyphs, symbolizing the multilingualism of Wikipedia. As with the Latin letter "W", these glyphs are in most cases the first glyph or glyphs of the name "Wikipedia" rendered in that language. They are as follows: At left, from the top down, are Armenian v, Cambodian vĕ (lying on its side), Bengali U, Devanagari वि vi, and Georgian v. In the middle-left column is Greek ō, and below that are Chinese wéi, Kannada vi, and (barely visible at the bottom) Tibetan wi. In the middle-right column is Latin . Above that is Japanese wi; below it are Cyrillic i, Hebrew v, and (barely visible at the bottom) Tamil vi. The rightmost column is Ethiopic wə, Arabic w, Korean wi, and Thai wi. The top edge line of the વિ puzzle piece on the rear side of the ball (as seen the 2D character map) crosses through the inwards indentation of the ウィ puzzle piece when viewed from the default front perspective. The empty space at the top represents the incomplete nature of the project, the articles and languages yet to be added. History The design of "WA" text beneath a globe, with the interlocking-V W and large A, was designed by Wikipedia user The Cunctator for a November 2001 logo contest. An initial design of the puzzle-globe logo was created by Paul Stansifer, a then 17-year-old Wikipedia user, whose entry won a design competition run by the site in 2003. Another Wikipedia user, David Friedland, subsequently improved the logo by changing the styling of the jigsaw pieces so that their boundaries seemed indented and simplified their contents to be a single glyph per piece, rather than a jumble of nonsense multilingual text. In the process, some errors were introduced. In particular, one piece of Devanagari script, and one piece of Japanese katakana were incorrectly rendered. Also, the Chinese character (袓) has no immediate connection with Wikipedia. Current logo In 2007, a modified 3D model was developed by Wikimedia Taiwan for Wikimania, when they distributed a diameter spherical puzzle based on the logo, that attendees could piece together. It did not add other glyphs on the parts that cannot be seen on the 2D logo, but used that space to include small logos of the sister projects and information about Wikimania. A variant of that model was used to build a person-sized Wikiball that spun on a stand, featured during the event. This led to a renewed interest in getting a proper 3D model for the logo. By 2007, users on listservs discovered that the logo had some minor errors. The errors were not immediately fixed, because, according to Friedland, he could not locate the original project file. Friedland added that "I have tried to reconstruct it, but it never looks right" and that the logo "should be redrawn by a professional illustrator." , a Wikipedian, said that most Japanese users supported correcting the errors. In an e-mail to Noam Cohen of The New York Times, Kizu said that "It could be an option to leave them as they are. 
Most people don't take it serious and think the graphical logo is a sort of pot-au-feu of various letters without meaning." In late 2009, the Wikimedia Foundation undertook to fix the errors and generally update the puzzle globe logo. Among other concerns, the original logo did not scale well and some letters appeared distorted. For the new logo, the Wikimedia Foundation defined which characters appear on the "hidden" puzzle pieces, and had a three-dimensional computer model of the globe created to allow the generation of other views. A partial 3D globe was commissioned for the Wikimedia office. The logo was rolled out on the projects in May 2010. It features the new 3D rendering of the puzzle globe, with corrected characters (and the Klingon character replaced by a Ge'ez character). The wordmark has been modified from the Hoefler Text font to the open-source Linux Libertine font, and the subtitle is no longer italicized. The "W" character, which was used in various other places in Wikipedia (such as the favicon) and was a "distinctive part of the Wikipedia brand", was stylized as crossed V's in the original logo, , while the W in Linux Libertine is rendered with a single line. To provide the traditional appearance of the Wikipedia "W", a "crossed" W was added as an OpenType variant to the Linux Libertine font. On October 24, 2014, the Wikimedia Foundation released the logo, along with all other logos belonging to the Foundation, under the Creative Commons Attribution-ShareAlike 3.0 license. On September 29, 2017, the logo of Wikipedia was submerged to the bottom of Armenia's Lake Sevan thanks to the joint efforts of Wikimedia Armenia and ArmDiving divers' club. The logo is an unfinished globe made of puzzle pieces with symbols (including the Armenian equivalent of "v") from different sign systems written on them. The 2m-wide, 2m-high () logo (the largest in the world) was made in Armenia for the annual meeting of the Central and Eastern Europe Wikimedia affiliates, Wikimedia CEE Meeting that the country hosted in August 2016 in Dilijan. Trademark The (former) logo was registered as a European Community Trade Mark by Wikimedia Foundation, Inc. The trade mark bears a filing date of 31 January 2008 and a registration date of 20 January 2009. Logos Historical logos Special logos Anniversaries Milestone commemorations Events Holidays See also Content guideline: using logos on Wikipedia References External links Wikimedia Blog: Wikipedia in 3D – 3D version of the Wikipedia logo unveiled; description of the new puzzle globe logo Wikimedia Blog: A new look for Wikipedia Wikipedia Symbols introduced in 2001 logo
4813009
https://en.wikipedia.org/wiki/Sound%20Blaster%20Audigy
Sound Blaster Audigy
Sound Blaster Audigy is a product line of sound cards from Creative Technology. The flagship model of the Audigy family used the EMU10K2 audio DSP, an improved version of the SB-Live's EMU10K1, while the value/SE editions were built with a less-expensive audio controller. The Audigy family is available for PCs with a PCI or PCI Express slot, or a USB port. First generation The Audigy cards equipped with EMU10K2 (CA0100 chip) could process up to 4 EAX environments simultaneously with its on-chip DSP and native EAX 3.0 ADVANCED HD support, and supported from stereo up to 5.1-channel output. The audio processor could mix up to 64 DirectSound3D sound channels in hardware, up from Live!'s 32 channels. Creative Labs advertised the Audigy as a 24-bit sound card, a controversial marketing claim for a product that did not support end-to-end playback of 24-bit/96 kHz audio streams. The Audigy and Live shared a similar architectural limitation: the audio transport (DMA engine) was fixed to 16-bit sample precision at 48 kHz. So despite its 24-bit/96 kHz high-resolution DACs, the Audigy's DSP could only process 16-bit/48 kHz audio sources. This fact was not immediately obvious in Creative's literature, and was difficult to ascertain even upon examination of the Audigy's spec sheets. (A resulting class-action settlement with Creative later awarded US customers a 35% discount on Creative products, up to a maximum discount of $65.) Aside from the lack of an end-to-end path for 24-bit audio, Dolby Digital (AC-3) and DTS passthrough (to the S/PDIF digital out) had issues that have never been resolved. Audigy card supports the professional ASIO 1 driver interface natively, making it possible to obtain low latencies from Virtual Studio Technology (VST) instruments. Some versions of Audigy featured an external break out box with connectors for S/PDIF, MIDI, IEEE 1394, analog and optical signals. The ASIO and break out box features were an attempt to tap into the "home studio" market, with a mainstream product. Sound Blaster Audigy ES This variant (SB0160) uses the full EMU10K2 chip (CA0100 chip ) and is, as a result, quite similar in feature set. It is only missing its FireWire port. Sound Blaster Audigy SE & Audigy Value The Audigy SE (SB0570) and Audigy Value (SB0570) are stripped down models, with a less expensive CA0106 audio-controller in place of the EMU10k2. With the CA0106, the SE/Value are limited to software-based EAX 3.0 (upgraded to software-based EAX 4.0 with a driver update), no advanced resolution DVD-Audio Playback, and no Dolby Digital 5.1 or Dolby Digital EX 6.1 playback. With these cards only one of the mic, line in, or AUX sources may be unmuted at a time. The Audigy SE and Audigy Value both carry the SB0570 model number. It is possible that the same card was sold in different markets with different names, that perhaps the cards were sold with one name for a while and later it was changed or it's possible they could even be slightly different cards. The SE is a low-profile PCI card in the Audigy family, and still has many unsold units at online retailers unlike the other Audigy cards. 
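Because the transport described above is fixed at 16-bit/48 kHz, a 24-bit/96 kHz source is converted before the original Audigy's DSP ever mixes it. The sketch below is a deliberately crude illustration of what such a conversion involves; it is not Creative's resampler, and a real converter would apply proper anti-aliasing filtering and dithering rather than simple averaging and truncation.

```python
def to_16bit_48k(samples_24bit_96k):
    """Average adjacent 96 kHz samples (a very crude low-pass) and requantize the
    24-bit values to the 16-bit range by dropping the low 8 bits."""
    out = []
    for i in range(0, len(samples_24bit_96k) - 1, 2):
        avg = (samples_24bit_96k[i] + samples_24bit_96k[i + 1]) // 2
        out.append(max(-32768, min(32767, avg >> 8)))
    return out

# A full-scale 24-bit ramp at 96 kHz becomes half as many 16-bit samples at 48 kHz.
ramp = list(range(-8388608, 8388608, 1 << 16))
print(len(ramp), len(to_16bit_48k(ramp)))   # 256 128
```

Halving the sample count and discarding the low eight bits is, in the simplest possible terms, what downsampling to the DSP's native rate amounts to.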
Sound quality: 64-voice wavetable synthesizer. Audio path: analog-to-digital converter (ADC) 24-bit @ 96 kHz; digital-to-analog converter (DAC) 24-bit @ 96 kHz; recording 16–24 bit @ 8, 11.025, 16, 22.05, 24, 32, 44.1, 48, 96 kHz. Digital path: S/PDIF 24-bit @ 44.1, 48, 96 kHz. Sound channels: analog 2.1, 4.1, 5.1, 6.1, 7.1 and Creative Multi Speaker Surround (CMSS), which means that Audigy SE 7.1 cards can upmix mono or stereo sources to 7.1 channels; digital 2.1. Sound Blaster Audigy LS The Sound Blaster Audigy LS (SB0310) is similar to the Audigy SE in that it supports neither hardware acceleration nor FireWire. Sound Blaster Audigy Platinum EX The Sound Blaster Audigy Platinum EX (SB0090) is similar to the Audigy ES, but supported an external break out box instead of the standard internal version. It came with a FireWire port and was introduced before the AS models. Sound Blaster Audigy VX The VX (SB0060) is a low-profile PCI card in the Audigy family. Second generation Sound Blaster Audigy 2 series Sound Blaster Audigy 2 The Sound Blaster Audigy 2 (SB0240) (September 2002) featured an updated EMU10K2 processor, the CA0102, which provided access to the separate CA0151 chip. Collectively, the CA0102 and CA0151 were sometimes referred to as EMU10K2.5 (the CA0102 chip alone is just a version of the EMU10K2). To address the biggest shortcoming of the original Audigy, a revised DMA engine allowed end-to-end high-resolution (24-bit) audio playback: 96 kHz 6.1-channel recording, and 192 kHz stereo. However, the high-resolution audio was achieved by bypassing the DSP and decoding it directly on the CA0151 chip, also known as "p16v"; to take advantage of this, Creative substituted the CA0102 for the old CA0100 used in the Audigy 1. Using the DSP with high-resolution audio streams resulted in the Audigy's characteristic downsampling (to the DSP's native rate of 48 kHz) for mixing with other audio sources. Use of Windows Vista or 7 should mitigate the DSP sample rate conversion issue, as setting the card to 16-bit/48 kHz resamples audio using the much superior 32-bit float Windows audio stack before sending it to the card. It is unclear whether this works for all use cases (e.g. OpenAL). The Audigy 2 supported up to 6.1 speakers and had an improved signal-to-noise ratio (SNR) over the Audigy (106 vs. 100 decibels (A)). Audio output was supplied by the AC'97 codec on the front outputs and I²S on the rear. It also featured built-in Dolby Digital 5.1 EX decoding for improved DVD playback. An IEEE-1394 (FireWire) connector was present in all modifications except Value. The Audigy 2's 3D audio capabilities received a boost when compared to its predecessors. Creative created the EAX 4.0 ADVANCED HD standard to coincide with the Audigy 2's release. The chip can again process up to 64 DirectSound3D audio channels in hardware. It also has native support for the free and open-source OpenAL audio API. Sound Blaster Audigy 2 ZS series Sound Blaster Audigy 2 ZS The Sound Blaster Audigy 2 ZS (SB0350) was a revision of the Audigy 2 with a slightly improved signal-to-noise ratio (108 vs. 106 dB) and DTS-ES (Extended Surround) for DVD playback. The Audigy 2 ZS supports up to 7.1 speakers via 4-pole mini-jacks, although it uses a non-conventional pin-out: Side R/L are on Line Out 2/3, respectively. It was the most widespread card of the Audigy series. Unofficial drivers for 32- and 64-bit editions of Windows 10 / 8.x / 7 / Vista SP2 / XP SP3 are available. IRIX has drivers for the Sound Blaster Audigy 2 ZS, and it can be installed into the SGI Fuel series of workstations.
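The signal-to-noise ratios quoted for these cards (100, 106 or 108 dB(A)) and the loopback measurements listed below are decibel ratios between a full-scale signal and the residual noise. As a minimal sketch of how such a figure is computed from RMS levels, consider the following; the test tone and noise floor are synthetic values chosen for the example, not measurements of any card, and real measurements also apply A-weighting, which is omitted here.

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 20 * log10(RMS_signal / RMS_noise)."""
    return 20 * math.log10(rms(signal) / rms(noise))

n = 48000
tone = [math.sin(2 * math.pi * 1000 * i / n) for i in range(n)]              # 1 kHz full-scale tone
noise = [2e-5 * math.sin(2 * math.pi * 17 * i / n + 0.3) for i in range(n)]  # tiny synthetic residual
print(round(snr_db(tone, noise), 1))   # about 94 dB for this made-up noise floor
```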
There was also a CardBus version of the ZS for use with notebook computers. Sound Blaster Audigy 2 ZS Platinum (SB0360) Testing chain: external loopback (line-out1 - line-in3); sampling mode: 24-bit, 96 kHz. Measured values: noise level -101.3 dB(A), THD 0.0034%, IMD 0.0080%, stereo crosstalk -91.8 dB. Sound Blaster Audigy 2 ZS Platinum Pro Testing chain: external loopback (line-out1 - line-in3); sampling mode: 24-bit, 96 kHz. Measured values: noise level -104.3 dB(A), THD 0.0015%, IMD 0.0070%, stereo crosstalk -103.2 dB. Sound Blaster Audigy 2 ZS Notebook The Sound Blaster Audigy 2 ZS Notebook (SB0530) is a CardBus version of the Audigy 2 ZS released in Fall 2004 for the notebook market. It had nearly all of the capabilities of the PCI edition, but in a far smaller form factor. Reductions in capability included somewhat limited MIDI capability (compared to the PCI version) and the loss of FireWire. It was the first gaming-oriented sound hardware add-on board for notebooks that offered full hardware acceleration of 3D audio along with high-fidelity audio output quality. The card struggled with compatibility due to quality issues with the CardBus host chipsets in many notebooks of the time, a problem also suffered by other companies' products, such as Echo Digital Audio Corporation's Indigo. Sound Blaster Audigy 2 ZS Video Editor The Sound Blaster Audigy 2 ZS Video Editor (SB0480) was an external USB soundcard which combined audio playback, accelerated video editing and a 4-port USB 2.0 hub in one solution. It featured accelerated video encoding with DoMiNoFX video processing technologies. The audio system provided THX-certified sound and 24-bit EAX ADVANCED HD in 5.1 or 7.1 surround. The device's video capture is hardware-accelerated, encoding it to a complex format in real time rather than using the CPU. While this results in good-quality video even on basic systems, the device cannot be used by software that uses the standard DirectShow or VfW interface. Because of this limitation, the supplied capture software must be used. This prevents use of the device in conjunction with a video camera as a webcam, as standard webcam interfaces use DirectShow. Creative has made the free VidCap application available on their website. It allows quick and easy capture and output to devices. Captured files can be imported into a video editor application or DVD authoring program. Sound Blaster Audigy 2 Value The Sound Blaster Audigy 2 Value (SB0400) was a somewhat stripped-down version of the Audigy 2 ZS. It uses the EMU10K2.5 chip CA0108, which integrates the CA0102 and CA0151 on a single piece of silicon, but it is a value version with an SNR of 106 dB, no IEEE-1394 FireWire connector, and no DTS-ES 6.1 playback. It is, however, fully hardware-accelerated for DirectSound and EAX 4 and was sold as a cheaper companion for the more expensive ZS. Sound Blaster Audigy 2 SE The Sound Blaster Audigy 2 SE (SB0570) is similar to the Audigy SE and Live! 24-bit edition in that it does not have a hardware DSP as part of the audio chip. As such, it puts far more load on the host system's CPU. The card is physically smaller than other Audigy 2 cards. It is designed as an entry-level budget sound card. Sound Blaster Audigy 2 NX The Sound Blaster Audigy 2 NX (SB0300) was an external USB soundcard (CA0186-EAT), supporting 24-bit playback but with no DSP chip.
It is built around the CA0186-EAT chip. Sound Blaster Audigy HD Software Edition Also known as the Sound Blaster Audigy ADVANCED MB (SB060), it is similar to the Audigy 2 SE, but the software supports EAX 3.0 and a 64-channel software wavetable with DirectSound acceleration, without hardware-accelerated wavetable synthesis. The DAC is rated at a 95 dB signal-to-noise ratio. It is available as an integrated option for Dell Inspiron, Studio and XPS notebooks. Notably, Creative hardware is not necessary for this device: it is entirely a software solution that is adaptable to various DACs. Sound Blaster Audigy 4 series Sound Blaster Audigy 4 The Sound Blaster Audigy 4 (SB0610) uses the CA10300 DSP (the CA0108's unleaded counterpart) instead of the more advanced CA10200 (the CA0102's unleaded counterpart) and does not have an external hub, FireWire port or gold-plated connectors. The board layout is similar to the Audigy 2 Value. The SNR is rated at 106 dB. Sound Blaster Audigy 4 Pro The Sound Blaster Audigy 4 Pro (SB0380) improves on the Sound Blaster Audigy 2 ZS by raising the SNR to 113 dB. It features much of the same core technology as the Audigy 2 ZS, which uses the CA0102. The newer model instead uses the unleaded CA10200, and a new external I/O hub with superior DACs offering higher digital-to-analog conversion quality. It also allows simultaneous recording of up to six audio channels at 96 kHz/24-bit. It still supports a maximum of 7.1 audio channels at up to 96 kHz/24-bit, and stereo output at 192 kHz/24-bit. The 7.1 mode is only supported under Windows XP, and the 6.1 speaker mode is not supported under Windows Vista or Windows 7. Sound Blaster Audigy 4 SE The Sound Blaster Audigy 4 SE (SB0610VP) is a Sound Blaster Audigy 4 Pro without the remote control. However, it uses the same audio DSP and is functionally as capable as the Audigy 2 and 4 series (other than the Audigy 2 SE). It features full hardware acceleration of DirectSound and EAX. Sound Blaster Audigy Rx The Sound Blaster Audigy Rx (SB1550), released in September 2013, uses the E-MU CA10300 from the Audigy 4, but with a dedicated 600-ohm headphone amplifier, one TOSLINK optical output, and a PCI Express ×1 interface supported via a PLX Technology bridge controller. Sound Blaster Audigy Fx The Sound Blaster Audigy Fx (SB1570), released in September 2013, is an HDA card. It uses a Realtek ALC898 codec and includes a 600-ohm amplifier, the Sound Blaster Audigy Fx Control Panel, EAX Studio software, and independent line-in and microphone inputs. It is a half-height expansion card with a PCI Express ×1 interface. Alternate drivers kX Project Drivers An alternate, independent WDM driver for Windows was developed to provide user control of the EMU10K1 and EMU10K2 chips found in many Audigy-branded cards. The kX Project driver supports mixing numerous effects in real time on the hardware of the EMU10K1 and EMU10K2 chips. It was developed by Eugene Gavrilov. The driver is no longer maintained on a regular basis by its original authors, but the source code was freed under the GPLv2 license and still receives occasional contributions. SB Audigy Series Support Pack User daniel_k (Daniel Kawakami) from Creative's forums provides maintenance updates that keep the drivers compatible with the latest versions of Windows and incorporate several non-public fixes. They are available on both Creative's forums and his blog. The latest version is based on Creative's Audigy Rx driver. 
For the older Audigy cards, these packs have both benefits and drawbacks compared to the latest official drivers: while they bring back CMSS2, which Creative deprecated on Vista/7, OpenAL quality is reported to differ significantly, and the drivers do not support EAX in combination with OpenAL. After the Windows 10 1903 update, the drivers stopped working: they install, but the sound card produces no sound and the Creative Audio Console cannot see the card. A workaround is to install the latest Audigy Rx driver manually (via its .inf file), while keeping the rest of the Creative applications from the daniel_k package. See also Sound Blaster References External links Official website IBM PC compatibles Creative Technology products Sound cards
44129215
https://en.wikipedia.org/wiki/School%20of%20Information%20Technology%2C%20King%20Mongkut%27s%20University%20of%20Technology%20Thonburi
School of Information Technology, King Mongkut's University of Technology Thonburi
School of Information Technology (SIT) The School of Information Technology is a faculty housed in a small building near the library at King Mongkut's University of Technology Thonburi (KMUTT) in Bangmod, Bangkok, Thailand, which is regarded as one of Thailand's top universities. The faculty is commonly known by its short name, "SIT". Courses range from the undergraduate level to the PhD. The bachelor's degree is divided into information technology and computer science (English program). History August 1995: the first Bachelor of Science in Information Technology degree was offered. April–June 2000: the first computer science courses (in the English program) were offered at bachelor's and master's level, followed by master's and doctoral degrees in information technology. May 2002: an e-learning project opened to give students the opportunity to review lessons on DVD and CD, and later to prepare or repeat lessons through classroom-on-demand over the internet. June 2004: a master's degree course in software engineering was introduced. Life at School of Information Technology SIT students study in two locations at KMUTT: the third floor of classroom building 2 and the SIT building. Classroom building 2 contains seven classrooms and two common rooms equipped with computers, while the SIT building has five training rooms and three labs for working or relaxing. On the second floor there is a private SIT library for borrowing technical books. Most classes have a teaching assistant to help students with the lessons. Lecturers are dedicated to teaching; for instance, some add extra lab time every week. Older students mentor and assist younger students, which fosters good relationships. References University departments in Thailand
1942994
https://en.wikipedia.org/wiki/Turnitin
Turnitin
Turnitin (stylized as turnitin) is an Internet-based plagiarism detection service run by the American company Turnitin, LLC, a subsidiary of Advance Publications. Founded in 1998, it sells its licenses to universities and high schools who then use the software as a service (SaaS) website to check submitted documents against its database and the content of other websites with the aim of identifying plagiarism. Results can identify similarities with existing sources and can also be used in formative assessment to help students learn to avoid plagiarism and improve their writing. Students may be required to submit work to Turnitin as a requirement of taking a certain course or class. The software has been a source of controversy, with some students refusing to submit, arguing that requiring submission implies a presumption of guilt. Some critics have alleged that use of this proprietary software violates educational privacy as well as international intellectual-property laws, and exploits students' works for commercial purposes by permanently storing them in Turnitin's privately held database. Turnitin, LLC also runs the informational website plagiarism.org and offers a similar plagiarism-detection service for newspaper editors and book and magazine publishers called iThenticate. Other tools included with the Turnitin suite are GradeMark (online grading and corrective feedback) and PeerMark (student peer-review service). In March 2019, Advance Publications acquired Turnitin, LLC for . In the UK the service is supported and promoted by JISC as 'Plagiarism Detection Service Turnitin UK'. The Service is operated by iParadigms, in conjunction with Northumbria Learning, the European reseller of the Service. Functionality The Turnitin software checks for potentially unoriginal content by comparing submitted papers to several databases using a proprietary algorithm. It scans its own databases and also has licensing agreements with large academic proprietary databases. Student-paper database The essays submitted by students are stored in a database used to check for plagiarism. This prevents one student from using another student's paper, by identifying matching text between papers. In addition to student papers, the database contains a copy of the publicly accessible Internet, with the company using a web crawler to continually add content to Turnitin's archive. It also contains commercial and/or copyrighted pages from books, newspapers, and journals. Classroom integration Students typically upload their papers directly to the service for teachers to access. Teachers may also submit a student's papers to Turnitin.com as individual files, by bulk upload, or as a ZIP file. Teachers can also set assignment-analysis options so that students can review the system's "originality reports" before they finalize their submission. A peer-review option is also available. Some virtual learning environments can be configured to support Turnitin, so that student assignments can be automatically submitted for analysis. Blackboard, Moodle, ANGEL, Instructure, Desire2Learn, Pearson Learning Studio, Sakai, and Studywiz integrate in some way with the software. Admissions applications In 2019, Turnitin began analyzing admissions application materials through a partner software, Kira Talent. 
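Turnitin's matching algorithm itself is proprietary and has not been published. Purely as an illustration of the general family of techniques used for detecting overlapping text, the sketch below computes a word n-gram ("shingle") overlap between a submission and candidate sources using Jaccard similarity; it is not Turnitin's actual method, and the function names, parameters and toy documents are invented for the example.

```python
# Illustrative only: a generic word n-gram ("shingling") similarity check of the
# kind broadly used for near-duplicate detection. This is NOT Turnitin's
# proprietary algorithm; all names and toy data here are invented for the sketch.
import re

def shingles(text, n=5):
    """Return the set of n-word sequences (shingles) appearing in a text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(doc_a, doc_b, n=5):
    """Fraction of shingles shared between two documents, from 0.0 to 1.0."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    return len(a & b) / len(a | b) if a and b else 0.0

submission = "The quick brown fox jumps over the lazy dog near the river bank."
sources = {
    "source_1": "A quick brown fox jumps over the lazy dog near the river bank today.",
    "source_2": "Completely unrelated text about sound cards and Turing machines.",
}
for name, text in sources.items():
    print(name, round(jaccard_similarity(submission, text), 2))
```

Production systems work at a very different scale, typically fingerprinting documents against a crawled index rather than comparing every pair exhaustively, but the basic idea of matching overlapping text fragments and reporting the matched sources is the same.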
Reception Privacy The Student Union at Dalhousie University has criticized the use of Turnitin at Canadian universities because the American government may be able to access the submitted papers and personal information in the database under the USA PATRIOT Act. Mount Saint Vincent University became the first Canadian university to ban Turnitin's service partly because of implications of the Act. Copyright-violation concerns Lawyers for the company claim that student work is covered under the theory of implied license to evaluate, since it would be pointless to write the essays if they were not meant to be graded. That implied license, the lawyers argue, thus grants Turnitin permission to copy, reproduce and preserve the works. The company's lawyers further claim that dissertations and theses also carry with them an implied permission to archive in a publicly accessible collection such as a university library. University of Minnesota Law School professor Dan Burk countered that the company's use of the papers may not meet the fair-use test for several reasons: The company copies the entire paper, not just a portion Students' work is often original, interpretive and creative rather than just a compilation of established facts Turnitin is a commercial enterprise When a group of students filed suit against Turnitin on that basis, in Vanderhye et al. v. iParadigms LLC, the district court found the practice fell within fair use; on appeal, the United States Court of Appeals for the Fourth Circuit affirmed. Presumption of guilt Some students argue that requiring them to submit papers to Turnitin creates a presumption of guilt, which may violate scholastic disciplinary codes and applicable local laws and judicial practice. Some teachers and professors support this argument when attempting to discourage schools from using Turnitin. WriteCheck iParadigms, the company that used to be behind Turnitin, ran another commercial website called WriteCheck, where students paid a fee to have a paper tested against the database used by Turnitin, to determine whether or not that paper would be detected as plagiarism when the student submitted that paper to the main Turnitin website through the account provided by the school. It was announced that the WriteCheck product was being withdrawn in 2020 with no new subscriptions being accepted from November 2019. The economist Alex Tabarrok has complained that Turnitin's systems "are warlords who are arming both sides in this plagiarism war". The website has subsequently been shut down. Litigation In one well-publicized dispute over mandatory Turnitin submissions, Jesse Rosenfeld, a student at McGill University declined, in 2004, to submit his academic work to Turnitin. The University Senate eventually ruled that Rosenfeld's assignments were to be graded without using the service. The following year, another McGill student, Denise Brunsdon, refused to submit her assignment to Turnitin.com and won a similar ruling from the Senate Committee on Student Grievances. In 2006, the Senate at Mount Saint Vincent University in Nova Scotia prohibited the submission of students' academic work to Turnitin.com and any software that requires students' work to become part of an external database where other parties might have access to it. This decision was granted after the students' union alerted the university community of their legal and privacy concerns associated with the use of Turnitin.com and other anti-plagiarism devices that profit from students' academic work. 
This was the first campus-wide ban of its kind in Canada, following decisions by Princeton, Harvard, Yale and Stanford not to use Turnitin. At Ryerson University in Toronto, students may decide whether to submit their work to Turnitin.com or make alternate arrangements with an instructor. Similar policies are in place at Brock University in Saint Catharines. On March 27, 2007, with the help of an intellectual property attorney, two students from McLean High School in Virginia (with assistance from the Committee For Students' Rights) and two students attending Desert Vista High School in Phoenix, Arizona, filed suit in United States Circuit Court (Eastern District, Alexandria Division) alleging copyright infringement by iParadigms, Turnitin's parent company. Nearly a year later, Judge Claude M. Hilton granted summary judgment on the students' complaint in favor of iParadigms/Turnitin, because they had accepted the click-wrap agreement on the Turnitin website. The students appealed the ruling, and on April 16, 2009, the United States Court of Appeals for the Fourth Circuit affirmed Judge Hilton's judgment in favor of iParadigms/Turnitin. Flaws Ad hoc encodings, fonts and text representation Several flaws and bugs in the Turnitin plagiarism detection software have been documented in scientific literature. In particular, Turnitin has been proven to be vulnerable to ad hoc text encodings, rearranged glyphs in a computer font, text replaced with Bézier curves representing its shape. Automated paraphrasing Another study showed that Turnitin failed to detect text produced by popular free Internet-based paraphrasing tools. Besides, more sophisticated machine learning techniques, such as automated paraphrasing, can produce natural and expressive text, which is virtually impossible for Turnitin to detect. Also, article spinning was not recognized by Turnitin. Asked about the situation, the then vice president of marketing at Turnitin Chris Harrick said that the company was "working on a solution", but it was "not a big concern" because in his opinion "the quality of these tools is pretty poor". Turnitin's response Several years later, Turnitin published an article titled "Can students trick Turnitin? Some students believe that they can 'beat' Turnitin by employing various tactics". The company denied any technical issues and said that "the authors of these 'tricks' are mostly essay mills." The article then listed a few possible "tricks" and how Turnitin intended to take care of them, without mentioning scientific literature, technical treatises or examples of computer code. Further criticism The Italian scholar Michele Cortelazzo, full professor of linguistics, who also studies copyright attribution and similarity between texts, noted that, paradoxically, it is impossible to tell if Turnitin's source code has been plagiarized from other sources, because it is not open source. For the same reason, it is unknown what scientific methodologies, if any, Turnitin uses to assess papers. In 2009, a group of researchers from Texas Tech University reported that many of the instances of "non-originality" that Turnitin finds aren't plagiarism, but are just the use of jargon, course terms or phrases that appeared for legitimate reasons. For example, the researchers found high percentages of flagged material in the topic terms of papers (e.g. "global warming") or "topic phrases", which they defined as the paper topic with a few words added (e.g. "the prevalence of childhood obesity continues to rise"). 
Turnitin was also criticized for paying panelists at conferences on education and writing. See also iThenticate References External links Software for teachers Plagiarism detectors Companies based in Oakland, California Computer-related introductions in 1997 Advance Publications 2019 mergers and acquisitions
40698335
https://en.wikipedia.org/wiki/Cornelia%20Boldyreff
Cornelia Boldyreff
Cornelia Boldyreff is a British computer scientist who is very active in encouraging girls into computing. She is a Council Member of BCS, The Chartered Institute for IT (previously the British Computer Society), a committee member of BCSWomen, and a visiting professor in the School of Computing and Mathematical Sciences at the University of Greenwich in London. Academic posts February 2013 to date Visiting Professor, School of Computing and Mathematical Sciences, University of Greenwich 2009 - 2013 Associate Dean (Research and Enterprise), School of Architecture, Computing and Engineering at the University of East London 2004 Professor of Software Engineering, University of Lincoln Reader, Computer Science Department, University of Durham Academic and professional qualifications Fellow of the British Computer Society Fellow of the Higher Education Academy PhD in Software Engineering, University of Durham Member of the Association for Computing Machinery (ACM) Member of the IEEE Computer Society Member of British Federation of Women Graduates (BFWG) Centres, specialist groups and committees Co-founder and Director, Centre for Research in Open Source Software. Founding member of the BCSWomen Specialist Group Committee member, BCS e-Learning Specialist Group Chair, BCS Open Source Specialist Group Grants committee of Funds for Women Graduates (FfWG) Reviewing and programme committee work EPSRC Peer Review College Programme Committee/Organising Committee for various conferences/workshops Recent journal papers Mariano Ceccato, Andrea Capiluppi, Paolo Falcarin, Cornelia Boldyreff, A Large Study on the Effect of Code Obfuscation on the Quality of Java Code, Journal of Empirical Software Engineering, (under review). Andrea Capiluppi, Paolo Falcarin and Cornelia Boldyreff, Decompile, Defactor, Decouple: Measuring the Obfuscation Tirade to Protect Software Systems, Journal of Software Evolution and Process, Wiley (invited paper for special issue - under review). Andrea Capiluppi, Klaas-Jan Stol, Cornelia Boldyreff: Software Reuse in Open Source: A Case Study. IJOSSP 3(3): 10-35 (2011) Andrea Capiluppi, Cornelia Boldyreff, Karl Beecher, Paul J. Adams: Quality Factors and Coding Standards - a Comparison Between Open Source Forges. Electr. Notes Theor. Comput. Sci. 233: 89-103 (2009) Karl Beecher, Andrea Capiluppi, Cornelia Boldyreff: Identifying exogenous drivers and evolutionary stages in FLOSS projects. Journal of Systems and Software 82(5): 739-750 (2009) Awards Cornelia Boldyreff was one of the 30 women identified in the BCS Women in IT Campaign in 2014, who were then featured in the e-book "Women in IT: Inspiring the next generation", produced by BCS, The Chartered Institute for IT, and made available as a free downloadable e-book from various sources. References British women computer scientists British computer scientists Fellows of the British Computer Society Year of birth missing (living people) Living people
67911
https://en.wikipedia.org/wiki/Busy%20beaver
Busy beaver
In theoretical computer science, the busy beaver game aims at finding a terminating program of a given size that produces the most output possible. Since an endlessly looping program producing infinite output is easily conceived, such programs are excluded from the game. More precisely, the busy beaver game consists of designing a halting Turing machine with alphabet {0,1} which writes the most 1s on the tape, using only a given set of states. The rules for the 2-state game are as follows: the machine must have two states in addition to the halting state, and the tape initially contains 0s only. A player should conceive a transition table aiming for the longest output of 1s on the tape while making sure the machine will halt eventually. An nth busy beaver, BB-n or simply "busy beaver", is a Turing machine that wins the n-state Busy Beaver Game. That is, it attains the largest number of 1s among all possible competing n-state Turing machines. The BB-2 Turing machine, for instance, achieves four 1s in six steps. Determining whether an arbitrary Turing machine is a busy beaver is undecidable. This has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions". The game The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications: The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1, 2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.) The machine uses a single two-way infinite (or unbounded) tape. The tape alphabet is {0, 1}, with 0 serving as the blank symbol. The machine's transition function takes two inputs: the current non-Halt state and the symbol in the current tape cell, and produces three outputs: a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten), a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and a state to transition into (which may be the Halt state). There are thus (4n + 4)^(2n) n-state Turing machines meeting this definition, because the general form of the count is (symbols × directions × (states + 1))^(symbols × states). The transition function may be seen as a finite table of 5-tuples, each of the form (current state, current symbol, symbol to write, direction of shift, next state). "Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is entered (if ever). If, and only if, the machine eventually halts, then the number of 1s finally remaining on the tape is called the machine's score. The n-state busy beaver (BB-n) game is a contest to find such an n-state Turing machine having the largest possible score — the largest number of 1s on its tape after halting. A machine that attains the largest possible score among all n-state Turing machines is called an n-state busy beaver, and a machine whose score is merely the highest so far attained (perhaps not the largest possible) is called a champion n-state machine. 
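The design rules above translate directly into a short simulation. The sketch below is a minimal illustration (not any particular published implementation): a machine is given as a transition table, is run from a blank two-way tape, and whether it halted, the step count, and the final number of 1s are reported. The table encoding and function name are chosen for this example; the sample machine is the 2-state busy beaver mentioned above, which scores four 1s in six steps.

```python
# Minimal busy-beaver-style Turing machine simulator (illustrative sketch only).
# A machine maps (state, symbol) -> (symbol_to_write, move, next_state), where
# move is -1 (left) or +1 (right) and 'H' denotes the Halt state.
from collections import defaultdict

def run(machine, start='A', max_steps=10**7):
    """Run a 2-symbol machine from a blank tape; return (halted, steps, ones)."""
    tape = defaultdict(int)          # unbounded two-way tape of 0s
    pos, state, steps = 0, start, 0
    while state != 'H' and steps < max_steps:
        write, move, state = machine[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return state == 'H', steps, sum(tape.values())

# The 2-state busy beaver: score 4, reached after 6 steps.
bb2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}
print(run(bb2))   # -> (True, 6, 4)
```

With a larger step limit, the same function reproduces the scores of the 3-, 4- and 5-state rule tables listed in the Examples section further below.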
Radó required that each machine entered in the contest be accompanied by a statement of the exact number of steps it takes to reach the Halt state, thus allowing the score of each entry to be verified (in principle) by running the machine for the stated number of steps. (If entries were to consist only of machine descriptions, then the problem of verifying every potential entry is undecidable, because it is equivalent to the well-known halting problem — there would be no effective way to decide whether an arbitrary machine eventually halts.) Related functions The busy beaver function Σ The busy beaver function quantifies the maximum score attainable by a busy beaver for a given number of states. This is a noncomputable function. Moreover, the busy beaver function can be shown to grow faster asymptotically than any computable function. The busy beaver function, Σ, is defined such that Σ(n) is the maximum attainable score (the maximum number of 1s finally on the tape) among all halting 2-symbol n-state Turing machines of the above-described type, when started on a blank tape. It is clear that Σ is a well-defined function: for every n, there are at most finitely many n-state Turing machines as above, up to isomorphism, hence at most finitely many possible running times. This infinite sequence Σ is the busy beaver function, and any n-state 2-symbol Turing machine M for which σ(M) = Σ(n) (i.e., which attains the maximum score) is called a busy beaver. Note that for each n, there exist at least four n-state busy beavers (because, given any n-state busy beaver, another is obtained by merely changing the shift direction in a halting transition, another by shifting all direction changes to their opposite, and the final by shifting the halt direction of the all-swapped busy beaver). Non-computability Radó's 1962 paper proved that if f is any computable function, then Σ(n) > f(n) for all sufficiently large n, and hence that Σ is not a computable function. Moreover, this implies that it is undecidable by a general algorithm whether an arbitrary Turing machine is a busy beaver. (Such an algorithm cannot exist, because its existence would allow Σ to be computed, which is a proven impossibility. In particular, such an algorithm could be used to construct another algorithm that would compute Σ as follows: for any given n, each of the finitely many n-state 2-symbol Turing machines would be tested until an n-state busy beaver is found; this busy beaver machine would then be simulated to determine its score, which is by definition Σ(n).) Even though Σ(n) is an uncomputable function, there are some small n for which it is possible to obtain its values and prove that they are correct. It is not hard to show that Σ(0) = 0, Σ(1) = 1, Σ(2) = 4, and with progressively more difficulty it can be shown that Σ(3) = 6 and Σ(4) = 13. Σ(n) has not yet been determined for any instance of n > 4, although lower bounds have been established (see the Known values section below). In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which Σ(n) is unprovable in ZFC. To do so they constructed a 7910-state Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property). This was later reduced to 1919 states, with the dependency on the stationary Ramsey property eliminated, and later to 748 states. 
Complexity and unprovability of Σ A variant of Kolmogorov complexity is defined as follows [cf. Boolos, Burgess & Jeffrey, 2007]: The complexity of a number n is the smallest number of states needed for a BB-class Turing machine that halts with a single block of n consecutive 1s on an initially blank tape. The corresponding variant of Chaitin's incompleteness theorem states that, in the context of a given axiomatic system for the natural numbers, there exists a number k such that no specific number can be proved to have complexity greater than k, and hence that no specific upper bound can be proven for Σ(k) (the latter is because "the complexity of n is greater than k" would be proved if "n > Σ(k)" were proved). As mentioned in the cited reference, for any axiomatic system of "ordinary mathematics" the least value k for which this is true is far less than 10↑↑10; consequently, in the context of ordinary mathematics, neither the value nor any upper bound of Σ(10 ↑↑ 10) can be proven. (Gödel's first incompleteness theorem is illustrated by this result: in an axiomatic system of ordinary mathematics, there is a true-but-unprovable sentence of the form "Σ(10 ↑↑ 10) = n", and there are infinitely many true-but-unprovable sentences of the form "Σ(10 ↑↑ 10) < n".) Maximum shifts function S In addition to the function Σ, Radó [1962] introduced another extreme function for the BB-class of Turing machines, the maximum shifts function, S, defined as follows: s(M) = the number of shifts M makes before halting, for any M ∈ En, and S(n) = max{s(M) | M ∈ En} = the largest number of shifts made by any halting n-state 2-symbol Turing machine. Because these Turing machines are required to have a shift in each and every transition or "step" (including any transition to a Halt state), the max-shifts function is at the same time a max-steps function. Radó showed that S is noncomputable for the same reason that Σ is noncomputable — it grows faster than any computable function. He proved this simply by noting that for each n, S(n) ≥ Σ(n). Each shift may write a 0 or a 1 on the tape, while Σ counts a subset of the shifts that wrote a 1, namely the ones that hadn't been overwritten by the time the Turing machine halted; consequently, S grows at least as fast as Σ, which had already been proved to grow faster than any computable function. The following connection between Σ and S was used by Lin & Radó [Computer Studies of Turing Machine Problems, 1965] to prove that Σ(3) = 6: For a given n, if S(n) is known then all n-state Turing machines can (in principle) be run for up to S(n) steps, at which point any machine that hasn't yet halted will never halt. At that point, by observing which machines have halted with the most 1s on the tape (i.e., the busy beavers), one obtains from their tapes the value of Σ(n). The approach used by Lin & Radó for the case of n = 3 was to conjecture that S(3) = 21, then to simulate all the essentially different 3-state machines for up to 21 steps. By analyzing the behavior of the machines that had not halted within 21 steps, they succeeded in showing that none of those machines would ever halt, thus proving the conjecture that S(3) = 21, and determining that Σ(3) = 6 by the procedure just described. 
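The Lin–Radó procedure can be phrased as a brute-force search. The sketch below is illustrative only and feasible only for very small n; it reuses the run() simulator sketched earlier, enumerates every n-state 2-symbol transition table, runs each machine for at most a given step bound, and keeps the best score among the machines that halt. With the bound set to S(n), the result is Σ(n) by definition.

```python
# Illustrative brute force in the spirit of Lin & Radó (feasible only for tiny n).
# Assumes the run() simulator sketched above in the discussion of the game's rules.
from itertools import product

def sigma_by_enumeration(n, step_bound):
    states = [chr(ord('A') + i) for i in range(n)]
    keys = [(q, sym) for q in states for sym in (0, 1)]
    # Each table entry may write 0 or 1, move left or right, and go to any state or Halt:
    actions = [(w, m, s) for w in (0, 1) for m in (-1, +1) for s in states + ['H']]
    best = 0
    for table in product(actions, repeat=len(keys)):   # (4n + 4)^(2n) machines
        halted, _, ones = run(dict(zip(keys, table)), max_steps=step_bound)
        if halted:
            best = max(best, ones)
    return best

print(sigma_by_enumeration(1, step_bound=1))   # -> 1, i.e. Σ(1) = 1
print(sigma_by_enumeration(2, step_bound=6))   # -> 4, i.e. Σ(2) = 4
```

Real searches prune this space heavily using normalization and symmetry arguments and, crucially, must prove non-halting for the machines that exceed the conjectured bound, which is where the human and computer-assisted analysis described above comes in.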
Inequalities relating Σ and S include the following (from [Ben-Amram, et al., 1996]), which are valid for all : and an asymptotically improved bound (from [Ben-Amram, Petersen, 2002]): there exists a constant c, such that for all , tends to be close to the square of , and in fact many machines give less than . Known values for Σ and S As of 2016 the function values for Σ(n) and S(n) are only known exactly for n < 5. The current (as of 2018) 5-state busy beaver champion produces 4,098 1s, using 47,176,870 steps (discovered by Heiner Marxen and Jürgen Buntrock in 1989), but there remain 18 or 19 (possibly under 10, see below) machines with non-regular behavior which are believed to never halt, but which have not been proven to run infinitely. Skelet lists 42 or 43 unproven machines, but 24 are already proven. The remaining machines have been simulated to 81.8 billion steps, but none halted. Daniel Briggs also proved that some machines never halt. Another source says 98 machines remain unproven. There is an analysis of holdouts. So, it is likely that Σ(5) = 4098 and S(5) = 47176870, but this remains unproven, and it is unknown if there are any holdouts left (as of 2018). At the moment the record 6-state champion produces over 3.5 × 10^18267 1s (exactly (25×4^30341+23)/9), using over 7.4 × 10^36534 steps (found by Pavel Kropitz in 2010). As noted above, these are 2-symbol Turing machines. A simple extension of the 6-state machine leads to a 7-state machine which will write more than 10^10^10^10 1s to the tape, but there are undoubtedly much busier 7-state machines. However, other busy beaver hunters have different sets of machines. Milton Green, in his 1964 paper "A Lower Bound on Rado's Sigma Function for Binary Turing Machines", constructed a set of Turing machines demonstrating a lower bound on Σ expressed in terms of Knuth up-arrow notation (↑) and Ackermann's function A. Thus Σ(10) > 3 ↑↑↑ 3 (a power tower with 3^3^3 = 7,625,597,484,987 terms), and a corresponding bound involves the number g1, the enormous starting value in the sequence that defines Graham's number. In 1964 Milton Green developed a lower bound for the busy beaver function that was published in the proceedings of the 1964 IEEE symposium on switching circuit theory and logical design. Heiner Marxen and Jürgen Buntrock described it as "a non-trivial (not primitive recursive) lower bound". This lower bound can be calculated but is too complex to state as a single expression in terms of n. When n = 8 the method gives Σ(8) ≥ 3 × (7 × 3^92 − 1) / 2 ≈ 8.248×10^44. It can be derived from current lower bounds that: In contrast, the best current (as of 2018) lower bound on Σ(6) is approximately 3.5 × 10^18267, which is greater than the lower bound given by Green's formula, 3^3 = 27 (which is tiny in comparison). In fact, it is much greater than the lower bound 3 ↑↑ 3 = 3^3^3 = 7,625,597,484,987, which is Green's first lower bound for Σ(8), and also much greater than the second lower bound 3×(7×3^92−1)/2. Σ(7) is, in the same way, much greater than the current common lower bound 3^31 (nearly 618 trillion), so the second lower bound is also very weak. Proof for uncomputability of S(n) and Σ(n) Suppose that S(n) is a computable function and let EvalS denote a TM evaluating S(n). Given a tape with n 1s it will produce S(n) 1s on the tape and then halt. Let Clean denote a Turing machine cleaning the sequence of 1s initially written on the tape. Let Double denote a Turing machine evaluating the function n + n. Given a tape with n 1s it will produce 2n 1s on the tape and then halt. Let us create the composition Double | EvalS | Clean and let n0 be the number of states of this machine. 
Let Create_n0 denote a Turing machine creating n0 1s on an initially blank tape. This machine may be constructed in a trivial manner to have n0 states (the state i writes 1, moves the head right and switches to state i + 1, except the state n0, which halts). Let N denote the sum n0 + n0. Let BadS denote the composition Create_n0 | Double | EvalS | Clean. Notice that this machine has N states. Starting with an initially blank tape it first creates a sequence of n0 1s and then doubles it, producing a sequence of N 1s. Then BadS will produce S(N) 1s on tape, and at last it will clear all 1s and then halt. But the phase of cleaning will continue at least S(N) steps, so the time of working of BadS is strictly greater than S(N), which contradicts to the definition of the function S(n). The uncomputability of Σ(n) may be proved in a similar way. In the above proof, one must exchange the machine EvalS with EvalΣ and Clean with Increment — a simple TM, searching for a first 0 on the tape and replacing it with 1. The uncomputability of S(n) can also be established by reference to the blank tape halting problem. The blank tape halting problem is the problem of deciding for any Turing machine whether or not it will halt when started on an empty tape. The blank tape halting problem is equivalent to the standard halting problem and so it is also uncomputable. If S(n) was computable, then we could solve the blank tape halting problem simply by running any given Turing machine with n states for S(n) steps; if it has still not halted, it never will. So, since the blank tape halting problem is not computable, it follows that S(n) must likewise be uncomputable. Generalizations For any model of computation there exist simple analogs of the busy beaver. For example, the generalization to Turing machines with n states and m symbols defines the following generalized busy beaver functions: Σ(n, m): the largest number of non-zeros printable by an n-state, m-symbol machine started on an initially blank tape before halting, and S(n, m): the largest number of steps taken by an n-state, m-symbol machine started on an initially blank tape before halting. For example, the longest-running 3-state 3-symbol machine found so far runs steps before halting. The longest running 6-state, 2-symbol machine which has the additional property of reversing the tape value at each step produces 1s after steps. So for the Reversal Turing Machine (RTM) class, SRTM(6) ≥ and ΣRTM(6) ≥ . It is possible to further generalize the busy beaver function by extending to more than one dimension. Likewise we could define an analog to the Σ function for register machines as the largest number which can be present in any register on halting, for a given number of instructions. Exact values and lower bounds The following table lists the exact values and some known lower bounds for S(n, m) and Σ(n, m) for the generalized busy beaver problems. Note: entries listed as "?" are bounded from below by the maximum of all entries to left and above. These machines either haven't been investigated or were subsequently surpassed by a smaller machine. The Turing machines that achieve these values are available on Pascal Michel's webpage. Each of these websites also contains some analysis of the Turing machines and references to the proofs of the exact values. {| class="wikitable" | colspan="7" | Values of S(n, m) |- ! ! width="120px" | 2-state ! width="120px" | 3-state ! width="120px" | 4-state ! width="120px" | 5-state ! width="120px" | 6-state ! 
width="120px" | 7-state |- ! 2-symbol | align="right" | 6 | align="right" | 21 | align="right" | 107 | align="right" | ? | align="right" | > | align="right" | > 10101010 |- ! 3-symbol | align="right" | 38 | align="right" | | align="right" | > | | | |- ! 4-symbol | align="right" | ≥ | align="right" | > | | | | |- ! 5-symbol | align="right" | > | | | | | |- ! 6-symbol | align="right" | > | | | | | |- | colspan="7" | Values of Σ(n, m) |- ! ! width="120px" | 2-state ! width="120px" | 3-state ! width="120px" | 4-state ! width="120px" | 5-state ! width="120px" | 6-state ! width="120px" | 7-state |- ! 2-symbol | align="right" | 4 | align="right" | 6 | align="right" | 13 | align="right" | ? | align="right" | > | align="right" | > 10101010 |- ! 3-symbol | align="right" | 9 | align="right" | ≥ | align="right" | > | | | |- ! 4-symbol | align="right" | ≥ | align="right" | > | | | | |- ! 5-symbol | align="right" | > | | | | | |- ! 6-symbol | align="right" | > | | | | | |} Nondeterministic Turing machines The problem can be extended to Nondeterministic Turing machines by looking for the system with the most number of states across all branches or the branch with the longest number of steps. The question of whether a given NDTM will halt is still computationally irreducible, and the computation required to find an NDTM busy beaver is significantly greater than the deterministic case, since there are multiple branches that need to be considered. For a 2-state, 2-color system with p cases or rules, the table to the right gives the maximum number of steps before halting and maximum number of unique states created by the NDTM. Applications In addition to posing a rather challenging mathematical game, the busy beaver functions offer an entirely new approach to solving pure mathematics problems. Many open problems in mathematics could in theory, but not in practice, be solved in a systematic way given the value of S(n) for a sufficiently large n. Consider any conjecture that could be disproven via a counterexample among a countable number of cases (e.g. Goldbach's conjecture). Write a computer program that sequentially tests this conjecture for increasing values. In the case of Goldbach's conjecture, we would consider every even number ≥ 4 sequentially and test whether or not it is the sum of two prime numbers. Suppose this program is simulated on an n-state Turing machine. If it finds a counterexample (an even number ≥ 4 that is not the sum of two primes in our example), it halts and indicates that. However, if the conjecture is true, then our program will never halt. (This program halts only if it finds a counterexample.) Now, this program is simulated by an n-state Turing machine, so if we know S(n) we can decide (in a finite amount of time) whether or not it will ever halt by simply running the machine that many steps. And if, after S(n) steps, the machine does not halt, we know that it never will and thus that there are no counterexamples to the given conjecture (i.e., no even numbers that are not the sum of two primes). This would prove the conjecture to be true. Thus specific values (or upper bounds) for S(n) could be used to systematically solve many open problems in mathematics (in theory). However, current results on the busy beaver problem suggest that this will not be practical for two reasons: It is extremely hard to prove values for the busy beaver function (and the max shift function). 
It has only been proven for extremely small machines with fewer than five states, while one would presumably need at least 20-50 states to make a useful machine. Furthermore, every known exact value of S(n) was proven by enumerating every n-state Turing machine and proving whether or not each halts. One would have to calculate S(n) by some less direct method for it to actually be useful. But even if one did find a better way to calculate S(n), the values of the busy beaver function (and max shift function) get very large, very fast. S(6) > 10 already requires special pattern-based acceleration to be able to simulate to completion. Likewise, we know that S(10) > Σ(10) > 3 ↑↑↑ 3 is a gigantic number and S(17) > Σ(17) > G, where G is Graham's number - an enormous number. Thus, even if we knew, say, S(30), it is completely unreasonable to run any machine that number of steps. There is not enough computational capacity in the known part of the universe to have performed even S(6) operations directly. Notable instances A 748-state binary Turing machine has been constructed that halts iff ZFC is inconsistent. A 744-state Turing machine has been constructed that halts iff the Riemann hypothesis is false. A 43-state Turing machine has been constructed that halts iff Goldbach's conjecture is false, and a 27-state machine for that conjecture has been proposed but not yet verified. A 15-state Turing machine has been constructed that halts iff the following conjecture formulated by Paul Erdős in 1979 is false: for all n > 8 there is at least one digit 2 in the base 3 representation of 2n. Examples These are tables of rules for the Turing machines that generate Σ(1) and S(1), Σ(2) and S(2), Σ(3) (but not S(3)), Σ(4) and S(4), and the best known lower bound for Σ(5) and S(5), and Σ(6) and S(6). For other visualizations, In the tables, columns represent the current state and rows represent the current symbol read from the tape. Each table entry is a string of three characters, indicating the symbol to write onto the tape, the direction to move, and the new state (in that order). The halt state is shown as H. Each machine begins in state A with an infinite tape that contains all 0s. Thus, the initial symbol read from the tape is a 0. Result key: (starts at the position , halts at the position ) {| class="wikitable" |+ 1-state, 2-symbol busy beaver ! width="20px" | ! A |- ! 0 | 1RH |- ! 1 | (not used) |} Result: 0 0 0 (1 step, one "1" total) {| class="wikitable" |+ 2-state, 2-symbol busy beaver ! width="20px" | ! A ! B |- ! 0 | 1RB | 1LA |- ! 1 | 1LB | 1RH |} Result: 0 0 1 1 1 0 0 (6 steps, four "1"s total) {| class="wikitable" |+ 3-state, 2-symbol busy beaver ! width="20px" | ! A ! B ! C |- ! 0 | 1RB | 0RC | 1LC |- ! 1 | 1RH | 1RB | 1LA |} Result: 0 0 1 1 1 1 0 0 (14 steps, six "1"s total). Unlike the previous machines, this one is a busy beaver only for Σ, but not for S. (S(3) = 21.) {| class="wikitable" |+ 4-state, 2-symbol busy beaver ! width="20px" | ! A ! B ! C ! D |- ! 0 | 1RB | 1LA | 1RH | 1RD |- ! 1 | 1LB | 0LC | 1LD | 0RA |} Result: 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 (107 steps, thirteen "1"s total) {| class="wikitable" |+ current 5-state, 2-symbol best contender (possible busy beaver) ! width="20px" | ! A ! B ! C ! D ! E |- ! 0 | 1RB | 1RC | 1RD | 1LA | 1RH |- ! 1 | 1LC | 1RB | 0LE | 1LD | 0LA |} Result: 4098 "1"s with 8191 "0"s interspersed in 47,176,870 steps. Note in the image to the right how this solution is similar qualitatively to the evolution of some cellular automata. 
{| class="wikitable" |+ current 6-state, 2-symbol best contender ! width="20px" | ! A ! B ! C ! D ! E ! F |- ! 0 | 1RB | 1RC | 1LD | 1RE | 1LA | 1LH |- ! 1 | 1LE | 1RF | 0RB | 0LC | 0RD | 1RC |} Result: ≈3.515 × 10^18267 "1"s in ≈7.412 × 10^36534 steps. Visualizations In the following table, the rules for each busy beaver (maximizing Σ) are represented visually, with orange squares corresponding to a "1" on the tape, and white corresponding to "0". The position of the head is indicated by the black ovoid, with the orientation of the head representing the state. Individual tapes are laid out horizontally, with time progressing vertically. The halt state is represented by a rule which maps one state to itself (head doesn't move). See also Rayo's number Turmite Notes References This is where Radó first defined the busy beaver problem and proved that it was uncomputable and grew faster than any computable function. The results of this paper had already appeared in part in Lin's 1963 doctoral dissertation, under Radó's guidance. Lin & Radó prove that Σ(3) = 6 and S(3) = 21 by proving that all 3-state 2-symbol Turing machines which don't halt within 21 steps will never halt. (Most are proven automatically by a computer program; however, 40 are proven by human inspection.) Brady proves that Σ(4) = 13 and S(4) = 107. Brady defines two new categories for non-halting 3-state 2-symbol Turing machines: Christmas Trees and Counters. He uses a computer program to prove that all but 27 machines which run over 107 steps are variants of Christmas Trees and Counters which can be proven to run infinitely. The last 27 machines (referred to as holdouts) are proven by personal inspection by Brady himself not to halt. Machlin and Stout describe the busy beaver problem and many techniques used for finding busy beavers (which they apply to Turing machines with 4 states and 2 symbols, thus verifying Brady's proof). They suggest how to estimate a variant of Chaitin's halting probability (Ω). Marxen and Buntrock demonstrate that Σ(5) ≥ 4098 and S(5) ≥ 47,176,870 and describe in detail the method they used to find these machines and prove many others will never halt. Green recursively constructs machines for any number of states and provides the recursive function that computes their score (computes σ), thus providing a lower bound for Σ. This function's growth is comparable to that of Ackermann's function. Busy beaver programs are described by Alexander Dewdney in Scientific American, August 1984, pages 19–23, also March 1985 p. 23 and April 1985 p. 30. Wherein Brady (of 4-state fame) describes some history of the beast and calls its pursuit "The Busy Beaver Game". He describes other games (e.g. cellular automata and Conway's Game of Life). Of particular interest is "The Busy Beaver Game in Two Dimensions" (p. 247). With 19 references. Cf. Chapter 9, Turing Machines. A difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-recursion with reference to Turing Machines, halting problem. A reference in Booth attributes busy beaver to Rado. Booth also defines Rado's busy beaver problem in "home problems" 3, 4, 5, 6 of Chapter 9, p. 396. Problem 3 is to "show that the busy beaver problem is unsolvable... for all values of n." Bounds between functions Σ and S. Improved bounds. This article contains a complete classification of the 2-state, 3-symbol Turing machines, and thus a proof for the (2, 3) busy beaver: Σ(2, 3) = 9 and S(2, 3) = 38. 
This is the description of ideas, of the algorithms and their implementation, with the description of the experiments examining 5-state and 6-state Turing machines by parallel run on 31 4-core computer and finally the best results for 6-state TM. External links The page of Heiner Marxen, who, with Jürgen Buntrock, found the above-mentioned records for a 5 and 6-state Turing machine. Pascal Michel's Historical survey of busy beaver results which also contains best results and some analysis. Definition of the class RTM - Reversal Turing Machines, simple and strong subclass of the TMs. The "Millennium Attack" at the Rensselaer RAIR Lab on the busy beaver Problem. This effort found several new records and established several values for the quadruple formalization. Daniel Briggs' website archive and forum for solving the 5-state, 2-symbol busy beaver problem, based on Skelet (Georgi Georgiev) nonregular machines list. Aaronson, Scott (1999), Who can name the biggest number? Busy Beaver Turing Machines - Computerphile Pascal Michel. The Busy Beaver Competition: a historical survey. 70 pages. 2017. <hal-00396880v5> Computability theory Theory of computation Large integers Metaphors referring to animals
59020316
https://en.wikipedia.org/wiki/Infysec
Infysec
infySEC is a company that provides cybersecurity services to medium-sized enterprises and governments across the world. The company is located in Chennai, India, and focuses on security technology services, security consulting, security training, and research and development. History The company was founded in 2010 by T. Vinod Senthil, an ethical hacker, along with Adhavan Rajadurai. They gathered a team of security professionals and started the company with the objective of providing cybersecurity services and training. Within two years, Karthick Vigneshwar joined them in support of the company. In November 2009, Senthil was the first person in Chennai to demonstrate wardriving, and he assisted the cyber crime department and NDTV Hindu news. In July 2013, the company conducted an event dubbed "E-HACK", a 24-hour hackathon. The company presented at ASSOCHAM on cybersecurity for two consecutive years. In 2016, the company was recognized as a notable startup by That Startup Story. The company has completed over 200 security projects in various countries, including Australia, the Maldives, and Dubai, as well as for various governments, and has conducted more than 1,000 workshops to raise awareness of cybersecurity. Products Its products include Capture The Flag, which lets cybersecurity enthusiasts test their skills in the field of cybersecurity; an ad-free Android app called CyberSec Tabloid for cybersecurity news updates; and a free Android app called AndroSentry that helps to monitor Android devices and includes a theft tracker, a call blocker, a virus scanner, and an app locker. References External links DoWebScan website security scanner Capture the Flag Cyber Sec Tabloid, Cyber Security News Hub Andro Sentry, Mobile Security and Antivirus Companies based in Chennai Indian companies established in 2010 Computer security companies 2010 establishments in Tamil Nadu Computer companies established in 2010
41297406
https://en.wikipedia.org/wiki/PX4%20autopilot
PX4 autopilot
PX4 autopilot is an open-source autopilot system oriented toward inexpensive autonomous aircraft. Low cost and availability enable hobbyist use in small remotely piloted aircraft. The project started in 2009 and is being further developed and used at the Computer Vision and Geometry Lab of ETH Zurich (Swiss Federal Institute of Technology), supported by the Autonomous Systems Lab and the Automatic Control Laboratory. Several vendors currently produce PX4 autopilots and accessories. Overview An autopilot allows a remotely piloted aircraft to be flown out of sight. All hardware and software are open-source and freely available to anyone under a BSD license. Free software autopilots provide more flexible hardware and software, and users can modify the autopilot based on their own requirements. The open-source software suite contains everything needed to let an airborne system fly, including the QGroundControl ground station and the MAVLink Micro Air Vehicle Communication Protocol (a minimal MAVLink telemetry sketch follows this entry), 2D/3D aerial maps (with Google Earth support), and drag-and-drop waypoints. Other open-source robotics projects similar to PX4 include the Paparazzi Project, ArduCopter, Slugs and OpenPilot. Supported hardware For an up-to-date and complete list of the hardware supported by the PX4 Autopilot, visit their "Compatible Hardware" website. See also Crowdsourcing Micro air vehicle References External links PX4 Homepage Dronecode Homepage Avionics Aircraft instruments Unmanned aerial vehicles Free software Open-source hardware Software using the BSD license
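As a concrete illustration of the MAVLink link mentioned in the overview, the sketch below uses the pymavlink Python library (a separate tool, not part of the PX4 firmware) to wait for a heartbeat from a PX4-based vehicle or simulator and print a single position message. The UDP endpoint shown is only the conventional ground-station port and is an assumption; adjust it for a given setup.

```python
# Hedged sketch: read basic telemetry from a PX4 vehicle or SITL simulator over
# MAVLink using pymavlink (pip install pymavlink). Port 14550 is the customary
# ground-station UDP port and may differ in your setup.
from pymavlink import mavutil

conn = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
conn.wait_heartbeat()   # block until the autopilot announces itself
print("Heartbeat from system %u, component %u" % (conn.target_system, conn.target_component))

msg = conn.recv_match(type='GLOBAL_POSITION_INT', blocking=True, timeout=10)
if msg is not None:
    # lat/lon are in 1e-7 degrees, relative_alt in millimetres
    print("lat=%.7f lon=%.7f alt=%.1f m" % (msg.lat / 1e7, msg.lon / 1e7, msg.relative_alt / 1000.0))
```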
5169750
https://en.wikipedia.org/wiki/Digital%20literacy
Digital literacy
Digital literacy refers to an individual's ability to find, evaluate, and clearly communicate information through typing and other media on various digital platforms. It is evaluated by an individual's grammar, composition, typing skills and ability to produce text, images, audio and designs using technology. The American Library Association (ALA) defines digital literacy as "the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills." While digital literacy initially focused on digital skills and stand-alone computers, the advent of the internet and use of social media, has resulted in the shift in some of its focus to mobile devices. Similar to other expanding definitions of literacy that recognize cultural and historical ways of making meaning, digital literacy does not replace traditional forms of literacy, but instead builds upon and expands the skills that form the foundation of traditional forms of literacy. Digital literacy should be considered to be a part of the path to knowledge. Digital literacy is built on the expanding role of social science research in the field of literacy as well as on concepts of visual literacy, computer literacy, and information literacy. Overall, digital literacy shares many defining principles with other fields that use modifiers in front of literacy to define ways of being and domain-specific knowledge or competence. The term has grown in popularity in education and higher education settings and is used in both international and national standards. History Digital literacy Digital literacy is often discussed in the context of its precursor media literacy. Media literacy education began in the United Kingdom and the United States as a result of war propaganda in the 1930s and the rise of advertising in the 1960s, respectively. Manipulative messaging and the increase in various forms of media further concerned educators. Educators began to promote media literacy education in order to teach individuals how to judge and assess the media messages they were receiving. The ability to critique digital and media content allows individuals to identify biases and evaluate messages independently. In order for individuals to evaluate digital and media messages independently, they must demonstrate digital and media literacy competence. Renee Hobbs developed a list of skills that demonstrate digital and media literacy competence. Digital and media literacy includes the ability to examine and comprehend the meaning of messages, judging credibility, and assess the quality of a digital work. A digitally literate individual becomes a socially responsible member of their community by spreading awareness and helping others find digital solutions at home, work, or on a national platform. Digital literacy doesn't just pertain to reading and writing on a digital device. It also involves knowledge of producing other forces of media, like recording and uploading video. Academic and pedagogical concepts In academia digital literacy is a part of the computing subject area alongside computer science and information technology. Given the many varied implications that digital literacy has on students and educators, pedagogy has responded by emphasizing four specific models of engaging with digital mediums. Those four models are text-participating, code-breaking, text-analyzing, and text-using. 
These methods give students (and other learners) the ability to engage fully with the media, and also enhance the way the individual is able to relate the digital text to their lived experiences. 21st-century skills Digital literacy requires certain skill sets that are interdisciplinary in nature. Warschauer and Matuchniak (2010) list three skill sets, or 21st-century skills, that individuals need to master in order to be digitally literate: information, media, and technology; learning and innovation skills; and life and career skills. Aviram et al. assert that in order to be competent in life and career skills, it is also necessary to be able to exercise flexibility and adaptability, initiative and self-direction, social and cross-cultural skills, productivity and accountability, and leadership and responsibility. Digital literacy is composed of several different literacies; because of this, there is no need to search for similarities and differences among them. Some of these literacies are media literacy and information literacy. Aviram & Eshet-Alkalai contend that there are five types of literacy encompassed by the umbrella term digital literacy. Photo-visual literacy: the ability to read and deduce information from visuals. Reproduction literacy: the ability to use digital technology to create a new piece of work or combine existing pieces of work together to make them your own. Branching literacy: the ability to successfully navigate the non-linear medium of digital space. Information literacy: the ability to search for, locate, assess and critically evaluate information found on the web and on the shelf in libraries. Socio-emotional literacy: the social and emotional aspects of being present online, whether through socializing, collaborating, or simply consuming content. In Society Digital literacy is necessary for the correct use of various digital platforms. Literacy in social network services and Web 2.0 sites helps people stay in contact with others, pass on timely information, and even buy and sell goods and services. Digital literacy can also prevent people from being taken advantage of online, as photo manipulation, e-mail fraud and phishing can often fool the digitally illiterate, costing victims money and making them vulnerable to identity theft. However, those who use technology and the internet to commit such manipulation and fraud possess the digital literacy to fool victims by understanding technical trends and conventions; it therefore becomes important to be digitally literate enough to think one step ahead when using the digital world. The emergence of social media has paved the way for people to communicate and connect with one another in new and different ways. Websites like Facebook and Twitter, as well as personal websites and blogs, have enabled a new type of journalism that is subjective, personal, and "represents a global conversation that is connected through its community of readers." These online communities foster group interactivity among the digitally literate. Social media also help users establish a digital identity, or a "symbolic digital representation of identity attributes." Without digital literacy or the assistance of someone who is digitally literate, one cannot possess a personal digital identity (this is closely allied to web literacy). Research has demonstrated that differences in the level of digital literacy depend mainly on age and education level, while the influence of gender is decreasing. 
Among young people, digital literacy is high in its operational dimension. Young people rapidly move through hypertext and have a familiarity with different kinds of online resources. However, their skills in critically evaluating content found online show a deficit. With the rise of digital connectivity amongst young people, concerns of digital safety are higher than ever. A study conducted in Poland, commissioned by the Ministry of National Education, measured the digital literacy of parents with regard to digital and online safety. It concluded that parents often overestimate their level of knowledge but clearly have an influence on their children's attitudes and behavior towards the digital world. It suggests that, with proper training programs, parents could gain the knowledge needed to teach their children the safety precautions necessary to navigate the digital space. Digital divide Digital divide refers to the disparities among people - such as those living in the developed and developing worlds - concerning access to and use of information and communication technologies (ICT), particularly computer hardware, software, and the Internet. Individuals within societies that lack economic resources to build ICT infrastructure do not have adequate digital literacy, which means that their digital skills are limited. The divide can be explained by Max Weber's social stratification theory, which focuses on access to production rather than ownership of capital. In this framing, access to production becomes access to ICT: with it, an individual can interact, produce information, or create a product; without it, he or she cannot participate in the learning, collaboration, and production processes. Digital literacy and digital access have become increasingly important competitive differentiators for individuals using the internet meaningfully. In an article called The Great Class Wedge and the Internet's Hidden Costs, Jen Schradie discusses how social class can affect digital literacy and create a digital divide. Research published in 2012 found that the digital divide, as defined by access to information technology, does not exist amongst youth in the United States. Young people report being connected to the internet at rates of 94-98%. There remains, however, a civic opportunity gap, where youth from poorer families and those attending lower socioeconomic status schools are less likely to have opportunities to apply their digital literacy. The digital divide has also been defined as emphasizing the distinction between the "haves" and "have-nots," with data presented separately for rural, urban, and central-city categories. Also, existing research on the digital divide reveals the existence of personal categorical inequalities between young and old people. An additional interpretation identified the gap between technology accessed by youth outside and inside the classroom. Participation gap Media theorist Henry Jenkins coined the term participation gap and distinguished the participation gap from the digital divide. According to Jenkins, in countries like the United States, where nearly everyone has access to the internet, the concept of digital divide does not provide enough insight. As such, Jenkins uses the term participation gap to develop a more nuanced view of access to the internet. 
Instead of referring to the "haves" vs the "have-nots" of digital technologies, Jenkins proposes that the participation gap refer to the divide between those who have sustained access to and competency with digital technologies and those who do not, in an age of media convergence. Jenkins states that students learn different sets of technology skills if they only have access to the internet in a library or school. In particular, Jenkins observes that students who have access to the internet at home have more opportunities to develop their skills and have fewer limitations, such as computer time limits and website filters commonly used in libraries. The participation gap is particularly relevant to millennials: as of 2008, when this research was conducted, they were the oldest generation to be born in the age of technology. Since 2008, more technology has been integrated into the classroom. The issue for digital literacy is that not all students have access to the internet at home that is equivalent to what they interact with in class. Some students have access only at school or in a library, and they are not getting the same quantity or quality of digital experience. This creates the participation gap, along with an inability to understand digital literacy. Digital rights Digital rights are an individual's rights that allow them freedom of expression and opinion in an online setting, with roots in theoretical and practical human rights. They encompass the individual's privacy rights when using the Internet and essentially govern how an individual uses different technologies and how content is distributed and mediated. Government officials and policymakers use digital rights as a springboard for enacting and developing policies and laws so that people obtain rights online the same way they obtain rights in real life. Private organizations that possess their own online infrastructures also develop rights specific to their property. In today's world, most, if not all, materials have shifted into an online setting, and public policy has had a major influence in supporting this movement. Going beyond traditional academics, ethical rights such as copyright, citizenship and conversation can be attributed to digital literacy because tools and materials nowadays can be easily copied, borrowed, stolen, and repurposed, as literacy is collaborative and interactive, especially in a networked world. Digital citizenship Digital citizenship refers to the "right to participate in society online". It is connected to the notion of state-based citizenship, which is determined by the country or region in which one was born, as well as the idea of being a 'dutiful citizen' who participates in the electoral process and online through mass media. A literate digital citizen possesses the skills to read, write and interact with online communities via screens and has an orientation for social justice. This is best described in the article Digital Citizenship during a Global Pandemic: Moving beyond Digital Literacy, "Critical digital civic literacy, as is the case of democratic citizenship more generally, requires moving from learning about citizenship to participating and engaging in democratic communities face‐to‐face, online, and in all the spaces in between." Through the various digital skills and literacy one gains, one is able to effectively solve social problems that might arise on social platforms. 
Additionally, digital citizenship has three online dimensions: higher wages, democratic participation, and better communication opportunities, all of which arise from the digital skills acquired. Digital citizenship also refers to online awareness and the ability to be safe and responsible online. This idea came from the rise of social media in the past decade, which has enhanced global connectivity and enabled faster interaction. However, with this phenomenon, fake news, hate speech, cyberbullying, hoaxes and the like have emerged as well. Hence, this has created an interdependent relationship between digital literacy and digital citizenship. Digital natives and digital immigrants Marc Prensky invented and popularized the terms digital natives and digital immigrants to describe, respectively, an individual born into the digital age and one who adopts the appropriate skills later in life. These two groups of people have had different interactions with technology since birth, creating a generational gap that directly shapes each group's relationship with digital literacy. Digital natives brought about the creation of ubiquitous information systems (UIS). These systems include mobile phones, laptop computers and personal digital assistants. They have also expanded to cars and buildings (smart cars and smart homes), creating a new, unique technological experience. Carr claims that digital immigrants, although they adapt to the same technology as natives, possess a sort of accent which restricts them from communicating the way natives do. In fact, research shows that, due to the brain's malleable nature, technology has changed the way today's students read, perceive, and process information. Marc Prensky believes this is a problem because today's students have a vocabulary and skill set educators (who at the time of his writing would be digital immigrants) may not fully understand. Statistics and popular representations of the elderly portray them as digital immigrants. For example, Canada found in 2010 that 29% of its citizens 75 years of age and older, and 60% of those between the ages of 65 and 74, had browsed the internet in the past month. Conversely, internet activity reached almost 100% among its 15- to 24-year-old citizens. Applications of digital literacy In education Schools are continuously updating their curricula to keep up with accelerating technological developments. This often includes computers in the classroom, the use of educational software to teach curricula, and course materials being made available to students online. Students are often taught literacy skills such as how to verify credible sources online, cite web sites, and prevent plagiarism. Google and Wikipedia are frequently used by students "for everyday life research," and are just two common tools that facilitate modern education. Digital technology has impacted the way material is taught in the classroom. With the use of technology rising over the past decade, educators are altering traditional forms of teaching to include course material on concepts related to digital literacy. Educators have also turned to social media platforms to communicate and share ideas with one another. Social media and social networks have become a crucial part of the information landscape. Many students are using social media to share their areas of interest, which has been shown to be helpful in boosting their level of engagement with educators. 
A study of 268 eighth graders from two Moscow schools showed that a combination of social media use and activities guided by teachers boosted the level of performance in students. The students were encouraged to search and develop their social network skills to solve educational issues and boost cognition. The speed of access and the enormous amounts of data found on these networks have made social media an invaluable cognitive tool. New standards have been put into place as digital technology has augmented classrooms, with many classrooms being designed to use smartboards and audience response systems in place of traditional chalkboards or whiteboards. "The development of Teacher's Digital Competence (TDC) should start in initial teacher training, and continue throughout the following years of practice. All this with the purpose of using Digital Technologies (DT) to improve teaching and professional development." New models of learning are being developed with digital literacy in mind. Several countries have based their models on the goal of finding new digital didactics to implement, identifying opportunities and trends through surveys conducted with educators and college instructors. It has been found that teachers at higher levels of educational institutions see digital literacy and digital competency as more important than ever when advancing the movement of society into a digitized one. Additionally, these new models of learning in the classroom have aided in promoting global connectivity and have enabled students to become globally minded citizens. According to the study Building Digital Literacy Bridges Connecting Cultures and Promoting Global Citizenship in Elementary Schools through School-Based Virtual Field Trips by Stacy Delacruz, Virtual Field Trips (VFTs), a new form of multimedia presentation, have gained popularity over the years in that they offer the "opportunity for students to visit other places, talk to experts and participate in interactive learning activities without leaving the classroom". They have also been used as a vehicle for supporting cross-cultural collaboration amongst schools, the benefits of which include "improved language skills, greater classroom engagement, deeper understandings of issues from multiple perspectives, and an increased sensitivity to multicultural differences". They also allow students to be the creators of their own digital content, a core standard from The International Society for Technology in Education (ISTE). The COVID-19 outbreak that started in late 2019 spread to multiple countries within months, forcing the World Health Organization to declare an international public health emergency and then a pandemic. The outbreak pushed education into a more digital and online experience in which teachers had to adapt to new levels of digital competency in software to keep the education system running, as academic institutions discontinued all in-person activity and different online meeting platforms were used for communication (e.g. Skype, Zoom, Cisco Webex, Google Hangouts, Microsoft Teams, BlueJeans and Slack). Two major formats of online learning emerged: asynchronous learning, which allows students more collaborative space and builds up involvement, and synchronous learning, which mostly takes the form of live video for real-time communication. An estimated 84% of the global student body was affected by this sudden closure due to the pandemic. 
Because of this sudden transition, there was a clear disparity in student and school preparedness for digital education, due in large part to a divide in digital skills and literacy that both students and educators experienced. The switch to online learning has also brought about some concerns regarding learning effectiveness, exposure to cyber-risks and lack of socialization, prompting the need to implement changes to how students are able to learn much-needed digital skills and develop digital literacy. As a response, the DQ (Digital Intelligence) Institute designed a common framework for enhancing digital literacy, digital skills and digital readiness. Attention was also focused on the development of digital literacy in higher education. One observation from the shift to digital learning is that members of Generation Z (born between the years 1996 and 2000) show the "natural skills of digital native learners". These young adults tend to have a higher acceptance of digital learning. A study in Spain measured the digital knowledge of 4,883 teachers of all education levels over recent school years and found that their digital skills required further training in order to advance new learning models for the digital age. Training programs have been proposed that favor the joint framework of INTEF (Spanish acronym for the National Institute of Educational Technologies and Teacher Training) as a reference. In surveys taken in Spain, Italy and Ecuador asking about local students' online learning experiences, 86.16% of students in Italy said they felt less accommodated, followed by 68.8% in Spain and 17.39% in Ecuador. In Europe, the Digital Competence of Educators framework (DigCompEdu) was developed to address and promote the development of digital literacy. It is divided into six branches (professional engagement, digital resources, teaching and learning, assessment, empowering learners, and facilitating learners' digital competence). Moreover, the European Commission also developed the Digital Education Action Plan (2021-2027), which focuses on using the experience of the COVID-19 pandemic, when technology was used at large scale for education, as a learning point, and on adapting the systems used for learning and training to the digital age. The plan is divided into two main strategic priorities: fostering the development of a high-performing digital education ecosystem and enhancing digital skills and competences for the digital transformation. Digital competences In 2013 the Open Universiteit Nederland released an article defining twelve digital competence areas. These areas are based on the knowledge and skills people have to acquire to be a digitally literate person. A. General knowledge and functional skills. Knowing the basics of digital devices and using them for elementary purposes. B. Use in everyday life. Being able to integrate digital technologies into the activities of everyday life. C. Specialized and advanced competence for work and creative expression. Being able to use ICT to express your creativity and improve your professional performance. D. Technology mediated communication and collaboration. Being able to connect, share, communicate, and collaborate with others effectively in a digital environment. E. Information processing and management. Using technology to improve your ability to gather, analyze and judge the relevance and purpose of digital information. F. Privacy and security. 
Being able to protect your privacy and take appropriate security measures. G. Legal and ethical aspects. Behaving appropriately and in a socially responsible way in the digital environment and being aware of the legal and ethical aspects of the use of ICT. H. Balanced attitude towards technology. Demonstrating an informed, open-minded, and balanced attitude towards the information society and the use of digital technologies. I. Understanding and awareness of the role of ICT in society. Understanding the broader context of use and development of ICT. J. Learning about and with digital technologies. Exploring emerging technologies and integrating them. K. Informed decisions on appropriate digital technologies. Being aware of the most relevant or common technologies. L. Seamless use demonstrating self-efficacy. Confidently and creatively applying digital technologies to increase personal and professional effectiveness and efficiency. The competences build on one another. Competences A, B, and C represent the basic knowledge and skills a person has to have to be fully digitally literate; once these three are acquired, the remaining competences can be built upon that knowledge and those skills. Digital writing University of Southern Mississippi professor Dr. Suzanne Mckee-Waddell conceptualized the idea of digital composition as the ability to integrate multiple forms of communication technologies and research to create a better understanding of a topic. Digital writing is a pedagogy being taught increasingly in universities. It is focused on the impact technology has had on various writing environments; it is not simply the process of using a computer to write. Educators in favor of digital writing argue that it is necessary because "technology fundamentally changes how writing is produced, delivered, and received." The goal of teaching digital writing is that students will increase their ability to produce a relevant, high-quality product, instead of just a standard academic paper. One aspect of digital writing is the use of hypertext or LaTeX. As opposed to printed text, hypertext invites readers to explore information in a non-linear fashion. Hypertext consists of traditional text and hyperlinks that send readers to other texts. These links may refer to related terms or concepts (as is the case on Wikipedia), or they may enable readers to choose the order in which they read. The process of digital writing requires the composer to make unique "decisions regarding linking and omission." These decisions "give rise to questions about the author's responsibilities to the [text] and to objectivity." In the workforce The 2014 Workforce Innovation and Opportunity Act (WIOA) defines digital literacy skills as a workforce preparation activity. In the modern world employees are expected to be digitally literate, having full digital competence. Those who are digitally literate are more likely to be economically secure, as many jobs require a working knowledge of computers and the Internet to perform basic tasks. Additionally, digital technologies such as mobile devices, production suites and collaboration platforms are ubiquitous in most office workplaces and are often crucial in daily tasks, as many white-collar jobs today are performed primarily using said devices and technology. Many of these jobs require proof of digital literacy to be hired or promoted. Sometimes companies will administer their own tests to employees, or official certification will be required. 
A study on the role of digital literacy in the EU labour market found that individuals are more likely to be employed the more digitally literate they are. As technology has become cheaper and more readily available, more blue-collar jobs have required digital literacy as well. Manufacturers and retailers, for example, are expected to collect and analyze data about productivity and market trends to stay competitive. Construction workers often use computers to increase employee safety. In entrepreneurship The acquisition of digital literacy is also important when it comes to starting and growing new ventures. The emergence of the World Wide Web and digital platforms has led to a plethora of new digital products or services that can be bought and sold. Entrepreneurs are at the forefront of this development, using digital tools or infrastructure to deliver physical products, digital artifacts, or Internet-enabled service innovations. Research has shown that digital literacy for entrepreneurs consists of four levels (basic usage, application, development, and transformation) and three dimensions (cognitive, social, and technical). At the lowest level, entrepreneurs need to be able to use access devices as well as basic communication technologies to balance safety and information needs. As they move to higher levels of digital literacy, entrepreneurs will be able to master and manipulate more complex digital technologies and tools, enhancing the absorptive capacity and innovative capability of their venture. In a similar vein, if small to medium enterprises (SMEs) possess the ability to adapt to dynamic shifts in technology, then they can take advantage of trends, marketing campaigns and communication with consumers in order to generate more demand for their goods and services. Moreover, if entrepreneurs are digitally literate, then online platforms like social media can further help businesses receive feedback and generate community engagement that could potentially boost their business's performance as well as their brand image. A research paper published in The Journal of Asian Finance, Economics and Business provides critical insight that suggests digital literacy has the greatest influence on the performance of SME entrepreneurs. The authors suggest their findings can help craft performance development strategies for such SME entrepreneurs and argue their research shows the essential contribution of digital literacy in developing business and marketing networks. Additionally, the study found digitally literate entrepreneurs are able to communicate and reach wider markets than non-digitally literate entrepreneurs because of the use of web-management and e-commerce platforms supported by data analysis and coding. That said, constraints do exist for SMEs using e-commerce, including a lack of technical understanding of information technologies and the high cost of internet access (especially for those in rural or underdeveloped areas). Global impact The United Nations included digital literacy in its 2030 Sustainable Development Goals, under thematic indicator 4.4.2, which encourages the development of digital literacy proficiency in teens and adults to facilitate educational and professional opportunities and growth. International initiatives like the Global Digital Literacy Council (GDLC) and the Coalition for Digital Intelligence (CDI) have also highlighted the need for, and strategies to address, digital literacy on a global scale. 
The CDI, under the umbrella of the DQ Institute, created a Common Framework for Digital Literacy, Skills, and Readiness in 2019 that conceptualizes eight areas of digital life (identity, use, safety, security, emotional intelligence, communication, literacy, and rights), three levels of maturity (citizenship, creativity, and competitiveness), and three components of competency (knowledge, attitudes and values, and skills; or, what, why, and how). The UNESCO Institute for Statistics (UIS) also works to create, gather, map, and assess common frameworks on digital literacy across multiple member states around the world. In an attempt to narrow the digital divide, on September 26, 2018, the United States' Senate Foreign Relations Committee passed legislation to help provide access to the internet in developing countries via the H.R.600 Digital Global Access Policy Act. The legislation itself was based on Senator Markey's Digital Age Act, which was first introduced to the Senate in 2016. In addition, Senator Markey provided a statement after the act was passed through the Senate: "American ingenuity created the internet and American leadership should help bring its power to the developing world," said Senator Markey. "Bridging the global digital divide can help promote prosperity, strengthen democracy, expand educational opportunity and lift some of the world's poorest and most vulnerable out of poverty. The Digital GAP Act is a passport to the 21st century digital economy, linking the people of the developing world to the most successful communications and commerce tool in history. I look forward to working with my colleagues to get this legislation signed into law and to harness the power of the internet to help the developing world." The Philippines' Education Secretary Jesli Lapus has emphasized the importance of digital literacy in Filipino education. He claims a resistance to change is the main obstacle to improving the nation's education in the globalized world. In 2008, Lapus was inducted into Certiport's "Champions of Digital Literacy" Hall of Fame for his work to emphasize digital literacy. A study done in 2011 by the Southern African Linguistics & Applied Language Studies program observed some South African university students regarding their digital literacy. It was found that while their courses did require some sort of digital literacy, very few students actually had access to a computer. Many had to pay others to type any work, as their digital literacy was almost nonexistent. Findings show that class, ignorance, and inexperience still affect the access to learning that South African university students may need. See also Computer literacy Cyber self-defense Data literacy Information literacies Web literacy Media literacy Digital intelligence Digital rhetoric Digital rights Digital citizen References Bibliography Vuorikari, R., Punie, Y., Gomez, S. C., & Van Den Brande, G. (2016). DigComp 2.0: The Digital Competence Framework for Citizens. Update Phase 1: The Conceptual Reference Model (No. JRC101254). Institute for Prospective Technological Studies, Joint Research Centre. 
https://ec.europa.eu/jrc/en/digcomp and https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/digcomp-20-digital-competence-framework-citizens-update-phase-1-conceptual-reference-model External links digitalliteracy.gov An initiative of the Obama Administration to serve as a valuable resource to practitioners who are delivering digital literacy training and services in their communities. digitalliteracy.org A Clearinghouse of Digital Literacy and Digital Inclusion best practices from around the world. DigitalLiteracy.us A reference guide for public educators on the topic of digital literacy. Digital divide Literacy
22799050
https://en.wikipedia.org/wiki/Motor%20Industry%20Software%20Reliability%20Association
Motor Industry Software Reliability Association
Motor Industry Software Reliability Association (MISRA) is an organization that produces guidelines for the software developed for electronic components used in the automotive industry. It is a collaboration between vehicle manufacturers, component suppliers and engineering consultancies. In 2021, the loose consortium restructured as The MISRA Consortium Limited, a Company Limited by Guarantee. Aim The aim of this organization is to provide important advice to the automotive industry for the creation and application of safe, reliable software within vehicles. The safety requirements of the software used in automobiles are different from those of other areas such as healthcare, industrial automation and aerospace. The mission statement of MISRA is "To provide assistance to the automotive industry in the application and creation within vehicle systems of safe and reliable software". Formation MISRA was formed by a consortium of organizations created in response to the UK Safety Critical Systems Research Programme. This program was supported by the Department of Trade and Industry and the Engineering and Physical Sciences Research Council. Following the completion of the original work, the MISRA Consortium continued on a self-funding basis. MISRA Consortium The following organizations constitute the MISRA steering committee: Former members included: AB Automotive Electronics Ford Motor Company Jaguar Cars Lotus Engineering MIRA Ricardo TRW Automotive Electronics The University of Leeds Visteon Current members (2021, according to the MISRA website): Bentley Motors Delphi Diesel Systems Ford Motor Company Ltd HORIBA MIRA Ltd Jaguar Land Rover Protean Electric Ltd Ricardo plc The University of Leeds Visteon Engineering Services Ltd ZF TRW The committee mainly includes vehicle manufacturers and component suppliers. Guidelines MISRA guidelines are the development guidelines for vehicle-based software. The guidelines are intended to achieve the following: Ensure safety Ensure security Bring robustness and reliability to the software Human safety must take precedence when in conflict with security of property Consider both random and systematic faults in system design Demonstrate robustness, not just rely on the absence of failures Application of safety considerations across the design, manufacture, operation, servicing and disposal of products As with many standards (for example, ISO, BSI, RTCA), the MISRA guideline documents are not free to users or implementers. Language guidelines Currently MISRA guidelines are produced for the C and C++ programming languages only. MISRA C++ was launched in March 2008. The third edition of MISRA C (known as MISRA C:2012) was published in 2013, and revised in 2019. An illustrative sketch of the defensive coding style such guidelines promote appears at the end of this entry. See also MISRA C High Integrity C++ Static program analysis Coding standards Software quality References Automobile associations in the United Kingdom Computer science institutes in the United Kingdom Computer standards Hinckley and Bosworth Organisations based in Leicestershire Science and technology in Leicestershire Software design Standards organisations in the United Kingdom Technical specifications
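The following C fragment is a minimal, hypothetical sketch of the defensive coding style that guidelines of this kind promote. The function and constant names are invented for illustration, and the practices shown (fixed-width integer types, no dynamic memory allocation, input validation, a checked return value and a single point of exit) paraphrase the general flavour of such rules rather than quoting any MISRA document.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical example: the names below are invented for illustration
       and do not come from any MISRA document. */
    #define MAX_WHEEL_SPEED_KPH 400U

    static bool clamp_wheel_speed(uint16_t raw_speed_kph, uint16_t *clamped_kph)
    {
        bool ok = false;

        if (clamped_kph != NULL) {                      /* reject a null output pointer */
            if (raw_speed_kph > MAX_WHEEL_SPEED_KPH) {
                *clamped_kph = (uint16_t)MAX_WHEEL_SPEED_KPH; /* saturate out-of-range input */
            } else {
                *clamped_kph = raw_speed_kph;
            }
            ok = true;
        }

        return ok;                                      /* single point of exit */
    }

    int main(void)
    {
        uint16_t speed = 0U;

        if (clamp_wheel_speed(450U, &speed)) {          /* return value is checked */
            printf("clamped speed: %u kph\n", (unsigned int)speed);
        }
        return 0;
    }

In practice, conformance to guidelines of this kind is usually checked automatically with static analysis tools rather than by code review alone.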
10268
https://en.wikipedia.org/wiki/Editor%20war
Editor war
The editor war is the rivalry between users of the Emacs and vi (now usually Vim, or more recently Neovim) text editors. The rivalry has become a lasting part of hacker culture and the free software community. The Emacs versus vi debate was one of the original "holy wars" conducted on Usenet groups, with many flame wars fought since at least 1985 between those insisting that their editor of choice is the paragon of editing perfection and insulting the other. Related battles have been fought over operating systems, programming languages, version control systems, and even source code indent style. Comparison The most important historical differences between vi and Emacs are summarized below: Benefits of Emacs Emacs has a non-modal interface The non-modal nature of Emacs keybindings makes it practical to support them as OS-wide keybindings. One of the most ported computer programs. It runs in text mode and under graphical user interfaces on a wide variety of operating systems, including most Unix-like systems (Linux, the various BSDs, Solaris, AIX, IRIX, macOS etc.), MS-DOS, Microsoft Windows, AmigaOS, and OpenVMS. Unix systems, both free and proprietary, frequently provide Emacs bundled with the operating system. The Emacs server architecture allows multiple clients to attach to the same Emacs instance and share the buffer list, kill ring, undo history and other state. Pervasive online help system with keybindings, functions and commands documented on the fly. Extensible and customizable Lisp programming language variant (Emacs Lisp), with features that include: Ability to emulate vi and vim (using Evil, Viper or Vimpulse). A powerful and extensible file manager (dired), integrated debugger, and a large set of development and other tools. Having every command be an Emacs Lisp function enables commands to DWIM (Do What I Mean) by programmatically responding to past actions and document state. For example, a switch-or-split-window command could switch to another window if one exists, or create one if needed. This cuts down on the number of keystrokes and commands a user must remember. "An OS inside an OS". Emacs Lisp enables Emacs to be programmed far beyond editing features. Even a base install contains several dozen applications, including two web browsers, news readers, several mail agents, four IRC clients, a version of ELIZA, and a variety of games. All of these applications are available anywhere Emacs runs, with the same user interface and functionality. Starting with version 24, Emacs includes a package manager, making it easy to install additional applications including alternate web browsers, EMMS (Emacs Multimedia System), and more. Also available are numerous packages for programming, including some targeted at specific language/library combinations or coding styles. Benefits of vi Edit commands are composable Vi has a modal interface Vi loads faster than Emacs. Being deeply associated with UNIX tradition, it runs on all systems that can implement the standard C library, including UNIX, Linux, AmigaOS, DOS, Windows, Mac, BeOS, OpenVMS, IRIX, AIX, HP-UX, BSD and POSIX-compliant systems. Extensible and customizable through Vim script or APIs for interpreted languages such as Python, Ruby, Perl, and Lua Ubiquitous. Essentially all Unix and Unix-like systems come with vi (or a variant) built-in. Vi (and ex, but not vim) is specified in the POSIX standard. 
System rescue environments, embedded systems (notably those with BusyBox) and other constrained environments often include vi, but not emacs. Evolution In the past, many small editors modeled after or derived from vi flourished. This was due to the importance of conserving memory with the comparatively minuscule amount available at the time. As computers have become more powerful, many vi clones, Vim in particular, have grown in size and code complexity. These vi variants of today, as with the old lightweight Emacs variants, tend to have many of the perceived benefits and drawbacks of the opposing side. For example, Vim without any extensions requires about ten times the disk space required by vi, and recent versions of Vim can have more extensions and run slower than Emacs. In The Art of Unix Programming, Eric S. Raymond called Vim's supposed light weight when compared with Emacs "a shared myth". Moreover, with the large amounts of RAM in modern computers, both Emacs and vi are lightweight compared to large integrated development environments such as Eclipse, which tend to draw derision from Emacs and vi users alike. Tim O'Reilly said, in 1999, that O'Reilly Media's tutorial on vi sold twice as many copies as that on Emacs (but noted that Emacs came with a free manual). Many programmers use either Emacs or vi, or their various offshoots, including Linus Torvalds, who uses MicroEMACS. Also in 1999, vi creator Bill Joy said that vi was "written for a world that doesn't exist anymore" and stated that Emacs was written on much more capable machines with faster displays so they could have "funny commands with the screen shimmering and all that, and meanwhile, I'm sitting at home in sort of World War II surplus housing at Berkeley with a modem and a terminal that can just barely get the cursor off the bottom line". In addition to Emacs and vi workalikes, pico and its free and open-source clone nano and other text editors such as ne often have their own third-party advocates in the editor wars, though not to the extent of Emacs and vi. Both Emacs and vi can lay claim to being among the longest-lived application programs of all time, as well as being the two most commonly used text editors on Linux and Unix. Many operating systems, especially Linux and BSD derivatives, bundle multiple text editors with the operating system to cater to user demand. For example, a default installation of macOS contains ed, nano, TextEdit, and Vim. Frequently, at some point in the discussion, someone will point out that ed is the standard text editor. Humor The Church of Emacs, formed by Emacs and the GNU Project's creator Richard Stallman, is a parody religion. While it refers to vi as the "editor of the beast" (vi-vi-vi being 6-6-6 in Roman numerals), it does not oppose the use of vi; rather, it calls proprietary software anathema. ("Using a free version of vi is not a sin but a penance.") The Church of Emacs has its own newsgroup, alt.religion.emacs, that has posts purporting to support this belief system. Stallman has referred to himself as St IGNU−cius, a saint in the Church of Emacs. Supporters of vi have created an opposing Cult of vi, argued by the more hard-line Emacs users to be an attempt to "ape their betters". Regarding vi's modal nature (a common point of frustration for new users), some Emacs users joke that vi has two modes – "beep repeatedly" and "break everything". 
vi users enjoy joking that Emacs's key-sequences induce carpal tunnel syndrome, or mentioning one of many satirical expansions of the acronym EMACS, such as "Escape Meta Alt Control Shift" (a jab at Emacs's reliance on modifier keys) or "Eight Megabytes And Constantly Swapping" (in a time when that was a great amount of memory) or "EMACS Makes Any Computer Slow" (a recursive acronym like those Stallman uses) or "Eventually Munches All Computer Storage", in reference to Emacs's high system resource requirements. GNU EMACS has been expanded to "Generally Not Used, Except by Middle-Aged Computer Scientists", referencing its most ardent fans and its declining usage among younger programmers compared to more graphically oriented editors such as Atom, BBEdit, Sublime Text, TextMate, and Visual Studio Code. As a poke at Emacs' creeping featurism, vi advocates have been known to describe Emacs as "a great operating system, lacking only a decent editor". Emacs advocates have been known to respond that the editor is actually very good, but the operating system could use improvement (referring to Emacs' famous lack of concurrency, which has now been added). A game among UNIX users, either to test the depth of an Emacs user's understanding of the editor or to poke fun at the complexity of Emacs, involved predicting what would happen if a user held down a modifier key and typed their own name. This humor originated with users of the older TECO editor, which was the implementation basis, via macros, of the original Emacs. Due to how one exits vi (":q", among others), hackers joke about a proposed method of creating a pseudorandom character sequence by having a user unfamiliar with vi seated in front of an open editor and asking them to exit the program. The Google search engine also joined in on the joke by having searches for vi return the suggestion "Did you mean: emacs" at the top of the page, and searches for emacs return "Did you mean: vi". See also Browser wars Comparison of text editors Notes References External links Results of an experiment comparing Vi and Emacs Comparing keystrokes per task Humor around Vi, Emacs and their comparisons Results of the Sucks-Rules-O-Meter for Vi and Emacs from comments made on the Web In the Church of Emacs "using a free version of vi is not a sin, it's a penance." Emacs offers Vi functionality, from the Emacs wiki Emacs Vs Vi, from WikiWikiWeb The Right Size for an Editor discussing vi and Emacs in relatively modern terms Emacs Free software culture and documents Hacker culture Internet culture Software wars Vi
20618843
https://en.wikipedia.org/wiki/Remix%20%28book%29
Remix (book)
Remix: Making Art and Commerce Thrive in the Hybrid Economy is Lawrence Lessig's fifth book. The book was made available for free download and remixing under the CC BY-NC Creative Commons license via Bloomsbury Academic. It is still available via the Internet Archive. It details a hypothesis about the societal effect of the Internet, and how this will shift the production and consumption of popular culture toward a "remix culture". Summary In Remix, Lawrence Lessig, a Harvard law professor and a respected voice in what he deems the "copyright wars", describes the disjuncture between the availability and relative simplicity of remix technologies and copyright law. Lessig insists that copyright law as it stands now is antiquated for digital media since every "time you use a creative work in a digital context, the technology is making a copy". Thus, amateur use and appropriation of digital technology is under unprecedented control that previously extended only to professional use. Lessig insists that knowledge and manipulation of multi-media technologies is the current generation's form of "literacy" - what reading and writing were to the previous generation. It is the vernacular of today. The children growing up in a world where these technologies permeate their daily life are unable to comprehend why "remixing" is illegal. Lessig insists that amateur appropriation in the digital age cannot be stopped but only 'criminalized'. Thus the most corrosive outcome of this tension is that generations of children are growing up doing what they know is "illegal" and that notion has societal implications that extend far beyond the copyright wars. The book is now available as a free download under one of the Creative Commons' licenses (CC BY-NC 3.0). Read-only culture vs. read/write culture Lessig outlines two cultures - the read-only culture (RO) and the read/write culture (RW). The RO culture is the culture we consume more or less passively. The information or product is provided to us by a 'professional' source, the content industry, that possesses an authority on that particular product/information. Analog technologies inherently supported RO culture's business model of production and distribution and limited the role of the consumer to just that, 'consuming'. Digital technology, however, does not have the 'natural' constraints of the analog that preceded it. "What before was both impossible and illegal is now just illegal" (38). Steve Jobs was the first to see potential in this new market made possible by digital technology. RO culture had to be recoded in order to compete with the "free" distribution made possible by the Internet. The iTunes Music Store was proof of this: while it provided digital music, that music was protected from re-distribution by Digital Rights Management (DRM) code. Lessig uses this key example to show that it is possible to achieve a business model which balances access and control and is equally attractive to both the consumers and the creators. In addition, digital technologies have changed the way we think about 'access'. Today, most of us would never structure our day around a particular program because we know that it is most likely available online - even if it is not necessarily free of charge. Lessig insists, using Amazon as his premier example, that the future of entertainment and advertising lies in accumulating information about a consumer and tailoring the product to their preferences. As opposed to RO culture, Read/Write culture has a reciprocal relationship between the producer and the consumer. 
Taking works, such as songs, and appropriating them in private circles is exemplary of RW culture, which was considered to be the 'popular' culture before the advent of reproduction technologies. The technologies and copyright laws that soon followed, however, changed the dynamics of popular culture. As it became professionalized, people were taught to defer production to the professionals. Lessig posits that digital technologies provide the tools for reviving RW culture and democratizing production. He uses blogs to explain the three layers of this democratization. Blogs have redefined our relationship to the content industry as they allowed access to non-professional content. The 'comments' feature that soon followed provided a space for readers to have a dialogue with the amateur contributors. 'Tagging' of the blogs by users based on the content provided the necessary layer for users to filter the sea of content according to their interest. The third layer added bots that analyzed the relationship between various websites by counting the clicks between them and, thus, organizing a database of preferences. The three layers working together established an "ecosystem of reputation" (61) that served to guide users through the blogosphere. Lessig uses the blog model to demonstrate a wider conclusion - while there is no doubt many amateur online publications cannot compete with the validity of professional sources, the democratization of digital RW culture and the 'ecosystem of reputation' provides a space for many talented voices to be heard that was not available in the pre-digital RO model. Hybrid Economies Lessig introduces three economies in his book. The first is the commercial economy. Commercial economies value money above all and build value around monetary exchange. Second is the sharing economy, which ignores money as an item of value and instead focuses on valuing things that are not monetary. But settled in between the two is a third, the hybrid economy. He asserts that the hybrid economy will be the dominant force with the rise of the web, and that in order for it to thrive, the two economies from which it borrows must be preserved. Conceptually, the monetizing nature of the commercial economy and the 'lending' quality of the sharing economy are both necessary to ensure that the hybrid loses neither its sight of economic gain nor its willingness to obtain economic resources. The Internet and Commons The internet is essentially the hub for this type of economy. With more people utilizing it as a platform for sharing and monetizing, the internet's primary function is split in two. In order for people to 'remix' they need the internet for its open and free design. Remix, according to Lessig, is not solely digital, but also relates to the act of reading and applying texts to one's personal life. Culturally, critically taking in what is going on (the original content) and developing an opinion that can be shared and given transformed meaning is also considered remixing. Most of the debate in Remix is in regard to ownership. Because remixing is limitless, it becomes difficult to stop. Every mix becomes a resource for another new mix and expands to others even if they are never seen. When it comes to the internet, ownership has become a murky subject. Companies that originated a piece of work are owners of that product, but only if it is copyrighted and protected legally. 
That being said, people without access to these legal protections are unprotected and liable to have their ideas and content stolen. This is where the commons becomes prevalent. He defines the commons as resources that are available equally to everyone in a certain group. The internet was invented for flexible accessibility and thus facilitates innovation. This is Lessig's philosophy; however, the issue comes with a price tag. The fight over who owns a creative work when it contains other works not owned by its creator is, Lessig says, what is "killing creativity". Although people have become used to this, he argues that it amounts to an attempt at "counterrevolution". Free Software Notably, Richard Stallman is vocal about his stance on the positive repercussions of utilizing free software, namely Linux. Essentially both Stallman and Lessig are on the same page. When it comes to 'hybrid' economies, Linux fits the description, with its selling point being "benefits" instead of "features". This on its own is not a matter of 'justice' but rather of the profitability of such software. Remixing is this software's very nature. The appeal is to "sell" the benefits of its use. People no longer have to wait for a company to fix bugs or other issues with the software; instead they can collaborate and ultimately do it themselves. This can be done with other software, but the downside is that with paid proprietary software there are legal repercussions that prevent the software from being "remixed" and sold as an alternative "original". The Prevalence of YouTube With the internet comes what Lessig described as community spaces, with the site YouTube up for major debate for its ability to both provide original content and exist as an open bank for content to remix. The website provides users a domain not only to consume, but to make creative content. Creativity in this sense relates to the combining of elements or materials with an individual's original ideas to create a unique product. Lessig had his own fight with the platform when a lecture of his was taken down in 2013 on the grounds that it violated copyright law, because a song by the band Phoenix was used in part of the presentation. However, due to the non-commercialized and transformed nature of his usage, the video should have fallen under fair use. This issue is an example of exactly what he is fighting for. In addition to the Digital Millennium Copyright Act of 1998, YouTube also allows the claimant to place advertisements on the video. This is done as compensation for the use of copyrighted media in the video, and allows the user to keep the video up without having to deal with legalities. The website is taking the emphasis off of the creation, and placing it on the monetary value that it holds. Lessig argues that these issues should be separated when it comes to amateur non-commercialized content. With growing frequency, YouTube has begun issuing copyright strikes and taking down videos that appear to contain claimed content in any way. The claimed content need not be the main feature of the video; merely a song playing in the background can take a user's work off the web. While original content featuring solely a user's own ideas and material does exist, this is not the focus of remixing, or Lessig's point. It is not solely creating new and unique ideas with novel resources, but instead pulling from multiple sources to give way to new products. 
To that, Lessig's rebuttal is that work made on such platforms should be free of legal ownership claims aside from those of its originator. These new products leverage the references in their original work in order to build a new and different meaning, which carries no implication of being 'better' or 'worse' than its origin. The remix Lessig argues that today digital culture permeates our lifestyle to such an extent - an average teenager will spend an hour per weekend day using the computer for leisure and only 7 minutes reading - that "it is no surprise that these other forms of 'creating' are becoming an increasingly dominant form of 'writing'" (69). Previous generations used textual quotes to build on writings before them. Today, this process of quoting or collage is manifest through digital media. The remix utilizes the (multi-media) language through which the current generations communicate. They quote content from various sources to create something "new". Thus, the remix provides a commentary on the sounds and images it utilizes in the same way a critical essay provides commentary on the texts it quotes. One of Lessig's favorite remix examples is the "Bush and Blair Love Song", which remixes images of President Bush and Tony Blair to make it appear as if they are lip-synching Lionel Richie's "Endless Love". "The message couldn't be more powerful: an emasculated Britain, as captured in the puppy love of its leader for Bush" (74). This remix, in Lessig's eyes, is exemplary of the power this type of expression holds - not to tell but to show. Using preexisting images is vital to the art form because the production of meaning draws heavily on the cultural reference an image or sound brings with it. Their meaning comes not from the content of what they say; it comes from the reference, which is expressible only if it is the original that gets used. Lessig describes the remix phenomenon as instrumental in creating cultural literacy and a critical view of the media and advertising that permeate our daily lives. But, as it stands today, copyright law will inhibit education from employing these digital forms of literacy, for institutions will shy away from use that might be deemed 'illegal'. Yet, Lessig reiterates, the remix form of expression cannot be killed, only criminalized. Commercial economies vs. sharing economies In addition to describing two cultures, Lessig also proposes two economies: the commercial and the sharing. The commercial economy is governed by the simple logic of the market, where products and services have a tangible economic value, be it money or labor. The Internet has been extremely successful as a portal for commercial economies to flourish - improving existing businesses and serving as a platform for thousands of new ones. It has been exceptionally fruitful for businesses that cater to a niche market - exemplified by such companies as Amazon and Netflix, which provide a range of items that could not be accommodated by one physical space. This dynamic has been outlined by Wired's editor in chief, Chris Anderson, in his book The Long Tail. Another obvious success story of a digital commercial economy is Google, which has managed to create value from value others have already created. The sharing economy functions outside monetary exchange. We all belong to sharing economies - the most obvious examples are our friendships and relationships. This economy is regulated not by a metric of price but by a set of social relations. Like the commercial economy, the sharing economy extends into the digital realm. 
Lessig's favorite example is Wikipedia itself. One of the ten most visited websites, it relies on user contribution - from creation to editing - for its content and gives no monetary incentive for this contribution. While given the option of anonymity, the users of Wikipedia have been remarkably consistent in following the site's suggestions - be it regarding a consistent aesthetic or a neutral point of view. A vital characteristic of a successful sharing economy is that people are in it because they want to be. Hybrids Lessig does a number of case studies of three types of successful hybrids. Community spaces Lessig cites sites such as Dogster, Craigslist, Flickr, and YouTube as successful internet community spaces that answer the demands of their users who, in turn, reciprocate through sharing content and self-regulating by flagging inappropriate content. At the same time the sites make revenue through advertisements but are extremely careful not to overwhelm the users and disrupt the sense of community. Collaboration spaces Collaboration hybrids center on the belief of the users that they are working towards a common goal or building something together. Lessig's notable examples are volunteers on Usenet who help those technologically in need solve computer problems – from minor to complex. They are not paid or recognized by Microsoft yet they are instrumental in building value for the company. Similarly, Yahoo! Answers, launched in December 2005, has gathered an enormous following of people answering other people's questions for free. They do not participate for any incentive other than to share their expertise and help others. In this category Lessig also cites the now infamous 2000 case of Heather Lawver, a teenager who started a fan site for J.K. Rowling's Harry Potter series, only to be constantly 'threatened' by Warner for illegal use of copyrighted content. Eight years later many large corporations have, at least in part, learned from Warner's mistake and from Lawver's persuasive argument in the Potter Wars: fans are "a part of your marketing budget that you don't have to pay for". Thus lighter control of content use allows fans to share their appropriation of content while promoting it for free. Everyone wins. Communities Lessig's third category lacks the 'spaces' qualification of the previous two because its examples create a community on a much grander, more comprehensive scale. One such community is Second Life, through which users can immerse themselves in a virtual environment and build a multi-faceted life not unlike real life but without the same limitations, while creating value by producing and sharing new code for the program. Lessig concludes that a feeling of ownership and contribution is vital to making hybrid communities function. These communities are not built on sacrifice but on mutual satisfaction in which both the consumer and producer benefit. Parallel economies can coexist, the author insists, and are not mutually exclusive. In fact, crossover is not uncommon, particularly in the world of the Creative Commons, which Lessig helped found. Many artists who initially licensed their work under a CC license that allowed others to share and remix it as long as they were credited have used the momentum from this visibility to cross over to the commercial economy. 
Lessig warns that hybrid economies will do well to avoid what he calls sharecropping, that is, corporations forcing the remixer to give up the rights to his or her creation (provided they don't own the rights to all or some of its components), even if they plan to use the work for commercial purposes. The hybrid that respects the rights of the creator - both the original creator and the remixer - is more likely to survive than the one that doesn't. Reforming copyright law Lessig outlines five steps that will put us on the path towards more efficient and sound copyright law. Deregulating Amateur Activity. Primarily this means exempting noncommercial and, particularly, amateur use from the rights granted by copyright. In addition, this loosening of control will, in turn, remove some of the burden of monitoring for misuse of their content from the corporations. Clear Title. As of now, there is no comprehensive and accessible registry that lists who owns rights to what. In addition to making the above clear, Lessig insists that the author or owner should have to register their work in order to extend the copyright after a shorter initial term, and that the work should otherwise enter the public domain. He insists that this change would be instrumental to digital archiving and access for educational purposes. Simplify. Building on his previous suggestions, Lessig insists that the system should be simplified. If a child is expected to comply with copyright law, they should be able to understand it. Decriminalizing the Copy. As mentioned before, the production of the 'copy' is commonplace in daily transactions within the digital realm. If our daily activity triggers federal regulation on copyright law, it means that this regulation reaches too far. Thus the law must be rearticulated so as not to include uses that are irrelevant to the copyright owner's control. Decriminalizing File Sharing. Lessig suggests this should be done either by "authorizing at least noncommercial file sharing with taxes to cover a reasonable royalty to the artists whose work is shared, or by authorizing a simple blanket licensing procedure, whereby users could, for a low fee, buy the right to freely file-share". Conclusion In his final chapter "Reforming Us", Lessig insists that in order to move towards ending the senseless copyright wars, which are mostly harming our children, we must understand that governmental control has its limits. The children growing up in a digital age see these laws as senseless and corrupt and, more importantly, trivial, as they continue to remix and download despite them. Lessig warns that this phenomenon can have a larger trickle-down effect on a child's view of law in general. When put in this light, copyright reform carries much larger implications for the morality of the digital-age generations. Aside from the morality of the generation, Lessig asserts that legislation that is either too passive or too stern reflects a lack of understanding on the part of policy makers. This assertion leads to the true meaning of fair use. In popular culture On an episode of The Colbert Report with Lessig as a guest, Stephen Colbert made fun of the book's status under Creative Commons by taking a copy, signing it, and then proclaiming it the 'Colbert' edition for sale. Lessig laughed. 
See also Remix Culture An Army of Davids Free Culture References External links Official site Creative Commons Book sources: Remix downloads on Archive.org Lessig's article "Copyright and Politics Don't Mix", October 21, 2008 in The New York Times Book discussion with Lessig on Remix, November 18, 2008 on c-span.org 2008 books Books by Lawrence Lessig Free content Creative Commons-licensed books
17530988
https://en.wikipedia.org/wiki/History%20of%20the%20web%20browser
History of the web browser
A web browser is a software application for retrieving, presenting and traversing information resources on the World Wide Web. It further provides for the capture or input of information which may be returned to the presenting system, then stored or processed as necessary. The method of accessing a particular page or content is achieved by entering its address, known as a Uniform Resource Identifier or URI. This may be a web page, image, video, or other piece of content. Hyperlinks present in resources enable users easily to navigate their browsers to related resources. A web browser can also be defined as an application software or program designed to enable users to access, retrieve and view documents and other resources on the Internet. Precursors to the web browser emerged in the form of hyperlinked applications during the mid and late 1980s, and following these, Tim Berners-Lee is credited with developing, in 1990, both the first web server, and the first web browser, called WorldWideWeb (no spaces) and later renamed Nexus. Many others were soon developed, with Marc Andreessen's 1993 Mosaic (later Netscape), being particularly easy to use and install, and often credited with sparking the internet boom of the 1990s. Today, the major web browsers are Chrome, Safari, Internet Explorer, Firefox, Opera, and Edge. The explosion in popularity of the Web was triggered in September 1993 by NCSA Mosaic, a graphical browser which eventually ran on several popular office and home computers. This was the first web browser aiming to bring multimedia content to non-technical users, and therefore included images and text on the same page, unlike previous browser designs; its founder, Marc Andreessen, also established the company that in 1994, released Netscape Navigator, which resulted in one of the early browser wars, when it ended up in a competition for dominance (which it lost) with Microsoft's Internet Explorer (for Windows). Precursors In 1984, expanding on ideas from futurist Ted Nelson, Neil Larson's commercial DOS Maxthink outline program added angle bracket hypertext jumps (adopted by later web browsers) to and from ASCII, batch, and other Maxthink files up to 32 levels deep. In 1986, he released his DOS Houdini knowledge network program that supported 2500 topics cross-connected with 7500 links in each file along with hypertext links among unlimited numbers of external ASCII, batch, and other Houdini files, these capabilities were included in his then popular shareware DOS file browser programs HyperRez (memory resident) and PC Hypertext (which also added jumps to programs, editors, graphic files containing hot spots jumps, and cross-linked thesaurus/glossary files). These programs introduced many to the browser concept and 20 years later, Google still lists 3,000,000 references to PC Hypertext. In 1989, Larson created both HyperBBS and HyperLan which both allow multiple users to create/edit both topics and jumps for information and knowledge annealing which, in concept, the columnist John C. Dvorak says pre-dated Wiki by many years. From 1987 on, Neil Larson also created TransText (hypertext word processor) and many utilities for rapidly building large scale knowledge systems. In 1989, his software helped produce, for one of the big eight accounting firms, a comprehensive knowledge system (integrated litigation knowledge system) of integrating all accounting laws/regulations into a CDROM containing 50,000 files with 200,000 hypertext jumps. 
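The opening paragraph above describes a browser's basic cycle: retrieve a resource named by a URI, then follow the hyperlinks it contains to reach related resources. As a purely illustrative aside (not code from any historical browser), the following sketch shows that cycle in Python using only the standard library; the LinkCollector helper and the example URI are invented for the illustration.

```python
# Minimal sketch of the "retrieve and traverse" cycle a browser performs:
# fetch a resource by its URI, then collect the hyperlinks it contains.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects the href targets of <a> tags, resolved against the base URI."""

    def __init__(self, base_uri):
        super().__init__()
        self.base_uri = base_uri
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_uri, value))


def fetch_and_list_links(uri):
    # Retrieve: download the resource named by the URI (the address a user types).
    with urlopen(uri) as response:
        html = response.read().decode("utf-8", errors="replace")
    # Traverse: every hyperlink found here is a candidate "next" resource.
    collector = LinkCollector(uri)
    collector.feed(html)
    return collector.links


if __name__ == "__main__":
    for link in fetch_and_list_links("https://example.com/"):  # placeholder URI
        print(link)
```

Following any of the printed URIs and repeating the process is, in miniature, the navigation loop that both modern browsers and the early hypertext-jump systems described above are built around.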
Additionally, the Lynx (a very early web-based browser) development history notes their project origin was based on the browser concepts from Neil Larson and Maxthink. In 1989, he declined joining the Mosaic browser team with his preference for knowledge/wisdom creation over distributing information ... a problem he says is still not solved by today's internet. Another early browser, Silversmith, was created by John Bottoms in 1986. The browser, based on SGML tags, used a tag set from the Electronic Document Project of the AAP with minor modifications and was sold to a number of early adopters. At the time SGML was used exclusively for the formatting of printed documents. The use of SGML for electronically displayed documents signaled a shift in electronic publishing and was met with considerable resistance. Silversmith included an integrated indexer, full text searches, hypertext links between images text and sound using SGML tags and a return stack for use with hypertext links. It included features that are still not available in today's browsers. These include capabilities such as the ability to restrict searches within document structures, searches on indexed documents using wild cards and the ability to search on tag attribute values and attribute names. Peter Scott and Earle Fogel expanded the earlier HyperRez (1988) concept in creating HyTelnet in 1990 which added jumps to telnet sites ... and which offered users instant logon and access to the online catalogs of over 5000 libraries around the world. The strength of Hytelnet was speed and simplicity in link creation/execution at the expense of a centralized worldwide source for adding, indexing, and modifying telnet links. This problem was solved by the invention of the web server. In April 1990, a draft patent application for a mass market consumer device for browsing pages via links "PageLink" was proposed by Craig Cockburn at Digital Equipment Corporation (DEC) whilst working in their Networking and Communications division in Reading, England. This application for a keyboardless touch screen browser for consumers also makes reference to "navigating and searching text" and "bookmarks" was aimed at (quotes paraphrased) "replacing books", "storing a shopping list" "have an updated personalised newspaper updated round the clock", "dynamically updated maps for use in a car" and suggests such a device could have a "profound effect on the advertising industry". The patent was canned by Digital as too futuristic and, being largely hardware based, had obstacles to market that purely software driven approaches lacked. Early 1990s: world wide web The first web browser, WorldWideWeb, was developed in 1990 by Tim Berners-Lee for the NeXT Computer (at the same time as the first web server for the same machine) and introduced to his colleagues at CERN in March 1991. Berners-Lee recruited Nicola Pellow, a math student intern working at CERN, to write the Line Mode Browser, a cross-platform web browser that displayed web-pages on old terminals and was released in May 1991. In 1992, Tony Johnson released the MidasWWW browser. Based on Motif/X, MidasWWW allowed viewing of PostScript files on the Web from Unix and VMS, and even handled compressed PostScript. Another early popular Web browser was ViolaWWW, which was modeled after HyperCard. In the same year the Lynx browser was announced – the only one of these early projects still being maintained and supported today. 
Erwise was the first browser with a graphical user interface, developed as a student project at Helsinki University of Technology and released in April 1992, but discontinued in 1994. Thomas R. Bruce of the Legal Information Institute at Cornell Law School started 1992, to develop Cello. When released on 8 June 1993 it was one of the first graphical web browsers, and the first to run on Windows: Windows 3.1, NT 3.5, and OS/2. However, the explosion in popularity of the Web was triggered by NCSA Mosaic which was a graphical browser running originally on Unix and soon ported to the Amiga and VMS platforms, and later the Apple Macintosh and Microsoft Windows platforms. Version 1.0 was released in September 1993, and was dubbed the killer application of the Internet. It was the first web browser to display images inline with the document's text. Prior browsers would display an icon that, when clicked, would download and open the graphic file in a helper application. This was an intentional design decision on both parts, as the graphics support in early browsers was intended for displaying charts and graphs associated with technical papers while the user scrolled to read the text, while Mosaic was trying to bring multimedia content to non-technical users. Mosaic and browsers derived from it had a user option to automatically display images inline or to show an icon for opening in external programs. Marc Andreessen, who was the leader of the Mosaic team at NCSA, quit to form a company that would later be known as Netscape Communications Corporation. Netscape released its flagship Navigator product in October 1994, and it took off the next year. IBM presented its own WebExplorer with OS/2 Warp in 1994. UdiWWW was the first web browser that was able to handle all HTML 3 features with the math tags released 1995. Following the release of version 1.2 in April 1996, Bernd Richter ceased development, stating "let Microsoft with the ActiveX Development Kit do the rest." Microsoft, which had thus far not marketed a browser, finally entered the fray with its Internet Explorer product (version 1.0 was released 16 August 1995), purchased from Spyglass, Inc. This began what is known as the "browser wars" in which Microsoft and Netscape competed for the Web browser market. Early web users were free to choose among the handful of web browsers available, just as they would choose any other application—web standards would ensure their experience remained largely the same. The browser wars put the Web in the hands of millions of ordinary PC users, but showed how commercialization of the Web could stymie standards efforts. Both Microsoft and Netscape liberally incorporated proprietary extensions to HTML in their products, and tried to gain an edge by product differentiation, leading to a web by the late 1990s where only Microsoft or Netscape browsers were viable contenders. In a victory for a standardized web, Cascading Style Sheets, proposed by Håkon Wium Lie, were accepted over Netscape's JavaScript Style Sheets (JSSS) by W3C. Late 1990s: Microsoft vs Netscape In 1996, Netscape's share of the browser market reached 86% (with Internet Explorer edging up 10%); but then Microsoft began integrating its browser with its operating system and bundling deals with OEMs. Within 4 years of its release IE had 75% of the browser market and by 1999 it had 99% of the market. 
Although Microsoft has since faced antitrust litigation on these charges, the browser wars effectively ended once it was clear that Netscape's declining market share trend was irreversible. Prior to the release of Mac OS X, Internet Explorer for Mac and Netscape were also the primary browsers in use on the Macintosh platform. Unable to continue commercially funding their product's development, Netscape responded by open sourcing its product, creating Mozilla. This helped the browser maintain its technical edge over Internet Explorer, but did not slow Netscape's declining market share. Netscape was purchased by America Online in late 1998. 2000s At first, the Mozilla project struggled to attract developers, but by 2002, it had evolved into a relatively stable and powerful internet suite. Mozilla 1.0 was released to mark this milestone. Also in 2002, a spinoff project that would eventually become the popular Firefox was released. Firefox was always downloadable for free from the start, as was its predecessor, the Mozilla browser. Firefox's business model, unlike the business model of 1990s Netscape, primarily consists of doing deals with search engines such as Google to direct users towards them – see Web browser#Business models. In 2003, Microsoft announced that Internet Explorer would no longer be made available as a separate product but would be part of the evolution of its Windows platform, and that no more releases for the Macintosh would be made. AOL announced that it would retire support and development of the Netscape web browser in February 2008. In the second half of 2004, Internet Explorer reached a peak market share of more than 92%. Since then, its market share has been slowly but steadily declining and is around 11.8% as of July 2013. In early 2005, Microsoft reversed its decision to release Internet Explorer as part of Windows, announcing that a standalone version of Internet Explorer was under development. Internet Explorer 7 was released for Windows XP, Windows Server 2003, and Windows Vista in October 2006. Internet Explorer 8 was released on 19 March 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, and Windows 7. Internet Explorer 9, 10 and 11 were later released, and version 11 is included in Windows 10, but Microsoft Edge became the default browser there. Apple's Safari, the default browser on Mac OS X from version 10.3 onwards, has grown to dominate browsing on Mac OS X. Browsers such as Firefox, Camino, Google Chrome, and OmniWeb are alternative browsers for Mac systems. OmniWeb and Google Chrome, like Safari, use the WebKit rendering engine (forked from KHTML), which is packaged by Apple as a framework for use by third-party applications. In August 2007, Apple also ported Safari for use on the Windows XP and Vista operating systems. Opera was first released in 1996. It was a popular choice in handheld devices, particularly mobile phones, but remains a niche player in the PC Web browser market. It was also available on Nintendo's DS, DS Lite and Wii consoles. The Opera Mini browser uses the Presto layout engine like all versions of Opera, but runs on most phones supporting Java MIDlets. The Lynx browser remains popular for Unix shell users and with vision impaired users due to its entirely text-based nature. There are also several text-mode browsers with advanced features, such as w3m, Links (which can operate both in text and graphical mode), and the Links forks such as ELinks. 
Relationships of browsers A number of web browsers have been derived and branched from the source code of earlier versions and products; a simplified sketch of a few such relationships, using examples mentioned in this article, appears at the end of this entry. Web browsers by year Historical web browsers This table focuses on operating systems (OS) and browsers from 1990 to 2000. The year listed for a version is usually the year of the first official release, with an end year marking the end of development, a project change, or other relevant termination. OS and browser releases from the early 1990s up to the 2001–02 time frame are the current focus. Many early browsers can be made to run on later OS (and later browsers on early OS in some cases); however, most of these situations are avoided in the table. Terms are defined below. See also History of the Internet Timeline of web browsers List of web browsers Usage share of web browsers References External links evolt.org – Browser Archive Web browser History of computing
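As a simplified illustration of the derivation relationships referred to above, the sketch below records a few of the lineages mentioned in this article (Netscape open-sourced as Mozilla, Firefox spun off from Mozilla, WebKit forked from KHTML and used by Safari, Chrome and OmniWeb, ELinks forked from Links) as a plain mapping and walks a browser back to its earliest listed ancestor. It is deliberately small and is not a complete genealogy; real browser lineages are considerably more tangled.

```python
# A toy ancestry table built only from relationships mentioned in this article.
DERIVED_FROM = {
    "Mozilla": "Netscape",      # Netscape open-sourced its browser, creating Mozilla
    "Firefox": "Mozilla",       # Firefox began as a spin-off of the Mozilla project
    "WebKit": "KHTML",          # the WebKit engine was forked from KHTML
    "Safari": "WebKit",         # Safari, Chrome and OmniWeb use the WebKit engine
    "Google Chrome": "WebKit",
    "OmniWeb": "WebKit",
    "ELinks": "Links",          # ELinks is a fork of the Links text-mode browser
}


def ancestry(browser: str) -> str:
    """Walk the derivation chain from a browser back to its earliest listed ancestor."""
    chain = [browser]
    while chain[-1] in DERIVED_FROM:
        chain.append(DERIVED_FROM[chain[-1]])
    return " <- ".join(chain)


if __name__ == "__main__":
    for name in ("Firefox", "Google Chrome", "ELinks"):
        print(ancestry(name))
```

Printed chains such as "Firefox <- Mozilla <- Netscape" are the kind of relationship a full browser family tree tries to capture for dozens of browsers at once.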
2950023
https://en.wikipedia.org/wiki/BlackDog
BlackDog
The BlackDog is a pocket-sized, self-contained computer with a built-in biometric fingerprint reader, developed in 2005 by Realm Systems. It is plugged into and powered by the USB port of a host computer, using the host's peripheral devices for input and output, and serves as a mobile personal server which allows a user to use Linux, one's own applications, and data on any computer with a USB port. The host machine's monitor, keyboard, mouse, and Internet connection are used by the BlackDog for the duration of the session. As the system is self-contained and isolated from the host, requiring no additional installation, it is possible to make use of untrusted computers while still using a secure system. Various hardware iterations exist; the original developer, Realm Systems, closed down in 2007, and the product was picked up by its successor, Inaura, Inc. Hardware history Original Black Dog & Project BlackDog Skills Contest Identified as the BlackDog, the Project BlackDog, or Original BlackDog, the first hardware version was touted as "unlike any other mobile computing device, BlackDog contains its own processor, memory and storage, and is completely powered by the USB port of a host computer with no external power adapter required." It was created in conjunction with Realm Systems' Project BlackDog Skills Contest (announced on Oct 27, 2005), which was supposed to raise interest and create a developer community surrounding the product. The BlackDog was publicly available for purchase from the Project BlackDog website in September 2005 for those who wished to enter the contest or to experiment with the platform. Production ended in mid January 2006 when the contest closed. On 7 February 2006, the winners of the contest were announced for the categories: Security (Michael Chenetz), Entertainment (Michael King), Productivity (Terry Bayne) and "Dogpile" (Paul Chandler). On Feb 15, 2006, during the Open Source Business Conference in San Francisco, Terry Bayne was announced the grand prize winner of the contest and received US$50,000 for his creation "Kibble," a tool for building integration solutions between the host PC and the BlackDog device using a SOAP-based RPC mechanism to send arbitrary Lua code to be executed on the host PC from the BlackDog. At this conference, the second iteration of the BlackDog, the K9, was publicly announced. K9 Identified as the K9 Ultra-Mobile Server, or K9, this version was announced at the Open Source Business Conference in February 2006 with expected availability in the third quarter of 2006. However, company turbulence (see the corporate history below) prevented the K9 from being sold until early 2009 by Inaura, Inc. Promotional literature shows the form factor to be the same as the intermediate iD3 prototype: a very thin chrome model resembling an iPod Nano, but all black with a rubberized exterior. Before Realm Systems shut down, there were working prototypes of the K9, the hardware design seemed to be finished, and the software was functional. 
In terms of hardware, it differed from the Original BlackDog in these aspects: 128 MB RAM 1 GB Flash NAND memory 60-pin Hirose connector replaces the MMC slot (intended for a USB connection cable, as well as custom cables to support additional peripherals) OLED display replaces the indicator LED of the first version (1.1 inch display, 96x64 resolution, 4-bit grayscale) Dimensions, H×W×L: iD3 The iD3 was a variant of the K9, using the same hardware specifications, intended for corporate use with a matching management router/server identified as the iD1200. It was announced as being part of the iDentity product series and was, for instance, showcased at the Embedded Systems Conference in San Jose, CA (April 3–7, 2006). The final Realm Systems iD3 form factor resembled a small Nokia cellphone. Software The software was originally based on Debian until 2008, when the project switched to Olmec Linux. Debian Linux (pre-2008) When plugged into a USB port of a Windows XP machine, the BlackDog initially presented itself to the host as a virtual CD-ROM drive. Via an autorun application, the BlackDog then automatically launched Xming, an X Window System server for Windows, and a software NAT router. Once those applications were running, the virtual USB CD-ROM drive disconnected, and the device presented itself as a virtual Ethernet adapter, enabling network access. Without requiring any installation or user interaction, the user could access the contained applications and data from any Windows computer. With further configuration steps, it was possible to also run the BlackDog on Linux and Mac computers. A short Engadget review stated that "it runs Firefox fine, and should be great for taking your own browser, e-mail, and chat clients for use wherever [you are], though that will probably be about all this little 400MHz guy can handle." The first software version was based on Debian Sid running a 2.6.10 kernel. It contained some sample default applications such as xterm, XBlast, and XGalaga and allowed installation of the Firefox web browser, an email client and other additional software available through official and community APT repositories hosted by the project. Attempts were made to stimulate the creation of further applications and use cases for the BlackDog by building up a community. The project and discussion infrastructure, termed DogPound, used an installation of the SourceForge project hosting software. An SDK with a QEMU emulator environment for Windows XP, Linux and Mac OS X was released to facilitate the creation and porting of applications to the BlackDog system. Although most of the BlackDog software was free software, the device contained some proprietary technology and intellectual property developed by Realm Systems Inc., which was later transferred to Echo Identity Systems and finally ended up belonging to Inaura Corporation. The official repository for the project disappeared in mid-February 2007 due to Realm Systems Inc. closing and was reactivated by its successor Inaura Inc. as of late June 2007. (There did not appear to be a repository as of November 2011.) Until sometime in 2009, Michael King (winner of the Entertainment category of the contest) maintained an independent backup of the official repositories and discussion groups, as well as repositories for other developers, at the now-defunct Saint Louis, Missouri-based ArchLUG website. 
The official website for the project, www.projectblackdog.org, still appears to be up as of December 2013, but has been defaced by several "quick cash" money lenders that have compromised the site via the WordPress content management system it uses. There does not appear to be any other original content remaining other than the homepage and the advertisements for the money lending sites. Olmec Linux (2008 onwards) Starting in late 2007, Olmec Linux, a Debian-derived Linux distribution geared towards small embedded platforms such as the gumstix, was ported to the BlackDog and K9 devices. When sold as part of the Inaura Inc. product offering, the BlackDog/K9 used the Olmec-based version. Realm Systems Corporate history Realm Systems Inc. was founded in 2002 and based in Salt Lake City, Utah, raising $8.5 million in its Round A, led by GMG Capital, with CEO Rick White. It described itself as "provid[ing] a next generation Mobile Enterprise Platform that simplifies the delivery of applications and services to end-users across the distributed enterprise." During 2006, Realm Systems focused on its iD3 line of products, and the K9 product launch was put off indefinitely. In January 2007, two then-unidentified groups containing former Realm Systems employees and investors attempted, independently, to license or move the K9 hardware and software to a separate company to continue development and production, due to the dissolution of Realm Systems and continued developer community interest in the concept, as well as rumored successful pilot programs. One of Realm Systems' backers then posted a public foreclosure notice, and in a court-supervised foreclosure hearing a number of investors bid on the company's assets in a closed bid. As a result, all of Realm's assets, including the iD3 and K9-series hardware, their operating systems, and the enterprise management router code, were bought out by a new firm, Echo Identity Systems, which was registered as a Salt Lake City company on February 1, 2007. This company claimed to be continuing the enterprise product line, and re-used nearly all of the old Realm Systems website layout and graphics. No mention of the K9 product line was made anywhere on the Echo Identity Systems website. The former Realm Systems website redirected iD3 and BlackDog customers to a transitional support website informing them about the asset change and Realm Systems' closing, and stating that product support would be provided by Echo Identity Systems (though no explanation as to the extent of the support was given). It appears that the assets were soon bought back from Echo Identity Systems by a group of investors backed by former Realm Systems employees and investors. Based on unconfirmed community reports (September 2007), it appears new developer prototypes of the K9 were seeded to the Project BlackDog contest winners. In November 2007, the new owners emerged as Inaura Inc, with CEO and president Peter Bookman, who is one of the original co-founders of Realm Systems. The CFO of the new company is Rodney Rasmussen, who had registered both Echo Identity Systems and Inaura as Utah companies. They set up a sparse web site lacking specific product descriptions. Inaura Inc. describes itself as "formerly known as" Realm Systems Inc., and Echo Identity Systems expired as a company in June 2008. In 2008, the Inaura Inc. website was updated to provide details on the company and product. 
The K9 device is now being branded as the K9 Ultra Mobile Authentication Key (UMAK) and marketed as "solving the problem of trust within all computing environments". It refers to "the two iterations of UMAKs" as "the K9 and the BlackDog". The K9 product seems to have been publicly sold since early 2009. In February 2010, the company's registration expired for "failing to file for renewal" and was only re-registered in September 2011, expiring again in January 2013. References External links Project BlackDog Homepage/SDK site Inaura Company Homepage PC Plus (UK) Review of the BlackDog incl. picture Geek.com review Linux Mobile computers Linux-based devices Computer storage devices
4449835
https://en.wikipedia.org/wiki/WinRoll
WinRoll
WinRoll is an open source, free software utility for Windows 2000, Windows XP and Windows 7 which allows the user to "roll up" windows into their title bars, in addition to providing other window-management features. It is written in assembly language. History WinRoll 1.0 was first released on April 10, 2003. It is unclear whether it is still maintained by Wil Palma. The most recent version, 2.0, was released on April 7, 2004. Being an open source program, its source code was freely available from the website. The website is now down. Features The purpose of WinRoll is to allow users to have many windows on the screen while keeping them organized and manageable. The main feature of the program is enabling the user to "roll" windows up into their title bars. It also allows users to minimize programs to the tray and to adjust the opacity of windows; a sketch of how such operations map onto the Windows API appears below. See also Free software Open source software Assembly language Free software primarily written in assembly language Free system software Windows-only free software
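The window operations described above are exposed by the ordinary Win32 API. The sketch below is an illustration in Python (via ctypes) rather than WinRoll's actual assembly source: it "rolls up" the foreground window to roughly its title-bar height and makes it translucent. The constants and functions used (GetWindowRect, SetWindowPos, GetSystemMetrics, SetWindowLongW, SetLayeredWindowAttributes) are standard Win32, but the overall structure is only an assumption about how such a utility could work, not a description of WinRoll's implementation.

```python
# Illustrative sketch only (Windows-specific): roll up a window and change its opacity.
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

SM_CYCAPTION = 4            # height of a title bar, in pixels
SWP_NOMOVE = 0x0002         # keep the window's current position
SWP_NOZORDER = 0x0004       # keep the window's current stacking order
GWL_EXSTYLE = -20
WS_EX_LAYERED = 0x00080000  # required before per-window alpha can be set
LWA_ALPHA = 0x00000002


def roll_up(hwnd):
    """Shrink a window so that little more than its title bar remains visible."""
    rect = wintypes.RECT()
    user32.GetWindowRect(hwnd, ctypes.byref(rect))
    width = rect.right - rect.left
    caption_height = user32.GetSystemMetrics(SM_CYCAPTION)
    user32.SetWindowPos(hwnd, 0, 0, 0, width, caption_height,
                        SWP_NOMOVE | SWP_NOZORDER)


def set_opacity(hwnd, alpha):
    """Make a window translucent (alpha: 0 = invisible, 255 = fully opaque)."""
    style = user32.GetWindowLongW(hwnd, GWL_EXSTYLE)
    user32.SetWindowLongW(hwnd, GWL_EXSTYLE, style | WS_EX_LAYERED)
    user32.SetLayeredWindowAttributes(hwnd, 0, alpha, LWA_ALPHA)


if __name__ == "__main__":
    hwnd = user32.GetForegroundWindow()
    roll_up(hwnd)
    set_opacity(hwnd, 200)
```

Restoring a rolled-up window would require remembering its original height, which a real utility like WinRoll has to track per window.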
56289559
https://en.wikipedia.org/wiki/2017%E2%80%9318%20Little%20Rock%20Trojans%20women%27s%20basketball%20team
2017–18 Little Rock Trojans women's basketball team
The 2017–18 Little Rock Trojans women's basketball team represented the University of Arkansas at Little Rock during the 2017–18 NCAA Division I women's basketball season. The Trojans, led by fifteenth-year head coach Joe Foley, played their home games at the Jack Stephens Center as members of the Sun Belt Conference. They finished the season 23–10, 17–1 in Sun Belt play, winning the Sun Belt regular season and tournament titles to earn an automatic trip to the NCAA Women's Tournament, where they lost in the first round to Florida State. Previous season They finished the 2016–17 season 25–9, 17–1 in Sun Belt play to win the Sun Belt regular season title. They advanced to the semifinals of the Sun Belt Women's Tournament, where they lost to Louisiana–Lafayette. They received an automatic bid to the WNIT, where they defeated Southern Miss in the first round before losing to Alabama in the second round. Roster Schedule The schedule comprised the non-conference regular season, the Sun Belt Conference regular season, the Sun Belt Women's Tournament, and the NCAA Women's Tournament. Rankings 2017–18 NCAA Division I women's basketball rankings See also 2017–18 Little Rock Trojans men's basketball team References Little Rock Trojans women's basketball seasons Little Rock
810758
https://en.wikipedia.org/wiki/Collins-class%20submarine
Collins-class submarine
The Collins-class submarines are Australian-built diesel-electric submarines operated by the Royal Australian Navy (RAN). The Collins class takes its name from Australian Vice Admiral John Augustine Collins; each of the six submarines is named after significant RAN personnel who distinguished themselves in action during World War II. The six vessels were the first submarines built in Australia, prompting widespread improvements in Australian industry and delivering a sovereign (Australian controlled) sustainment/maintenance capability. Planning for a new design to replace the RAN's Oberon-class submarines began in the late 1970s and early 1980s. Proposals were received from seven companies; two were selected for a funded study to determine the winning design, which was announced in mid-1987. The submarines, enlarged versions of Swedish shipbuilder Kockums' Västergötland class and originally referred to as the Type 471, were constructed between 1990 and 2003 in South Australia by the Australian Submarine Corporation (ASC). The submarines have been the subject of many incidents and technical problems since the design phase, including accusations of foul play and bias during the design selection, improper handling of design changes during construction, major capability deficiencies in the first submarines, and ongoing technical problems throughout the early life of the class. These problems have been compounded by the inability of the RAN to retain sufficient personnel to operate the submarines—by 2008, only three could be manned, and between 2009 and 2012, on average two or fewer were fully operational. The resulting negative press has led to a poor public perception of the Collins class. After 20 years of service issues, the boats have finally provided high availability to the RAN since 2016. The Collins class was expected to be retired about 2026; however, the 2016 Defence White Paper extended this into the 2030s. The Collins class's life will now be extended, and the boats will receive a previously unplanned capability upgrade, including sonar and communications. The initial replacement for the Collins class was to be a conventionally-powered version of the Barracuda class SSN proposed by Naval Group of France, dubbed the Attack class. On 15 September 2021, in the face of growing delays and cost increases, the Australian government announced the cancellation of the contract with Naval Group, and that the replacement will be a nuclear-powered submarine fleet made in partnership with the United Kingdom and the United States. Development and design The proposal for a new type of submarine to replace the Oberon class of diesel-electric submarines began in July 1978, when the RAN director of submarine policy prepared a paper detailing the need to start considering a replacement for the ageing Oberons. The paper also raised the suggestion that the majority of the submarines be constructed in Australia and that the number of submarines be increased beyond the six Oberons. Building the submarines in Australia was initially met with reactions predicting an impossible task because of the poor state of the Australian shipbuilding industry, and Australian industry in general. However, campaigning by several figures in Australian industry who thought it could be done came to the attention of those spearheading the project to design the Oberon-class replacement, and led to the view that it was both possible and feasible. The campaign to build submarines in Australia was also met with support from the Australian Labor Party and several trade unions. 
The proposal was accepted by the defence operational requirements committee in August 1978, and the project was given the procurement designation of SEA 1114. Approval for the development phase of the project was given in the 1981–82 federal budget. The RAN had four main requirements: that the submarines were tailored to operating conditions in the Australasian region, that they be equipped with a combat system advanced enough to promote a long service life, that appropriate and sustainable infrastructure be established in Australia to construct the boats, then provide maintenance and technical support for their operational lifespan, and that the submarines were capable of peacetime and emergency operations in addition to their hunter-killer role. Ten submarines were envisioned, a number which was revised to between four and eight boats by the start of 1983; the RAN later settled on the acquisition of six submarines, with the option to order two more. Requests for tenders The development of the submarine commenced in May 1983, when the government released a request for tender and approached seven of the world's nine diesel-electric submarine manufacturers for submissions. The submissions would be narrowed down to two based on the provided information, with these undergoing a funded study to determine the winning design. Tendering companies had to demonstrate how Australian industries would be incorporated into the project, and that they were willing to establish an Australia-based consortium to construct the submarines. All seven companies responded by the end of the year, with the combined submissions totalling four tonnes (9,000 lb) of paper. Direction Technique des Constructions Navales of France originally supplied a design modified from the Agosta class, but the submission review board did not view this favourably, as the submarine was of the same vintage as the Oberons. Their submission was altered to a conventionally powered version of the Rubis-class nuclear submarine. The German companies Ingenieur Kontor Lübeck (IKL) and Howaldtswerke-Deutsche Werft (HDW) collaborated to offer an enlarged version of the Type 209 submarine, designated the Type 2000. Submarines based on the Type 209 design had been exported to several nations, but were not operated by the German Navy. Thyssen Nordseewerke, another German company, offered their TR-1700. Like the Type 209, the TR-1700 was an export-only submarine design. Cantieri Navali Riuniti of Italy proposed a design based on their Sauro class, scaled up by 25%. The age of the early 1970s design was a concern, and the proposal was withdrawn early in the process. The Dutch partnership of United Shipbuilder Bureaux and Rotterdamsche Droogdok Maatschappij submitted the Walrus class. Their offer was identical to that constructed for the Royal Netherlands Navy, minus the Dutch combat system. Swedish shipbuilder Kockums submitted the Type 471 design, an enlarged version of the Västergötland class operated by the Swedish Navy. The United Kingdom company Vickers Shipbuilding & Engineering offered a design referred to as the Type 2400, which later became the Upholder class. The review board concluded that the IKL/HDW Type 2000 was the best design offered; the Walrus class was rated as 'fair', while Kockums' and Vickers' proposals were considered 'marginal' contenders. However, none of the tenders completely matched the desired RAN specifications, and the two proposals selected would have to be redesigned during the funded study. 
The combat data system was procured separately to the submarine design; 14 companies were identified as capable of providing what the RAN wanted, from which eight were approached in January 1983 with a separate request for tender. Five responded: a consortium led by Rockwell International of the United States, Plessey of the United Kingdom, Signaal of the Netherlands, Sintra Alcatel of France, and a collaboration between the German Krupp Atlas Elektronik and the British Ferranti. Each tender was required to offer a system with a distributed architecture, despite the absence of an accepted definition for 'distributed computing' at that time, and had to show the cost of programming the software in Ada, although they could offer additional cost breakdowns for other programming languages. Funded studies By May 1985, three months behind schedule, the review board narrowed the tenders down to two contenders in each group: IKL/HDW and Kockums for the submarine, Rockwell and Signaal for the combat system. The Walrus and Type 2400 submarine designs were considered to be too expensive to manufacture because of inefficient building practices, while the combat data system tenders had been narrowed down by unjustified development risk in the Plessey and Krupp/Ferranti proposals, and the dual problems in the Sintra Alcatel tender of excessive power usage and incompatibility with the proposed American weapons system. On 9 May, the Australian cabinet approved the selections for the funded studies and decided that six submarines would be built, with the option for two more, all in Australia. The companies were granted funding for project definition studies, from which the final selections would be made. Liaison teams were sent to each of the four companies to observe the development of the concepts presented in the initial proposals. As part of this process, the two submarine designers were required to establish a consortium with at least 50% Australian ownership: IKL/HDW joined with Eglo Engineering to form Australian Marine Systems, while Kockums (which had originally planned to work with Eglo) became part of a joint venture with the Australian branch of Chicago Bridge & Iron, Wormald International, and the Australian Industry Development Corporation to create the Australian Submarine Corporation. During the study, various accusations of foul play by or unsuitability of both submarine designers were made by Australian politicians and the media. These included claims that the centre-left Australian Labor Party (ALP) and the Swedish Social Democratic Party, both in power at the time, would lead to a pro-Kockums bias, investigations into perceived coaching of IDL/HDW representatives in the questions to be asked at an ALP Caucus briefing session on the project, and public emphasis on security incidents in both Sweden and West Germany. These incidents either lacked supporting evidence or were proven false, and were the result of the Liberal Party attempting to discredit the Labor government, or pro-British politicians and organisations who believed both submarines were inferior to the Vickers Type 2400 offering. The Dibb Report on the state of the Australian Defence Force was released in March 1986; it included advice that if the submarine project cost increased too much, the boats' capabilities should be scaled back to save money. Around the same time, Federal Treasurer Paul Keating began efforts to tighten fiscal policy and cut government spending across all portfolios. 
Consequently, despite his enthusiastic support for the project as a means to improve Australia's defence and industrial capabilities, Minister for Defence Kim Beazley advised the project heads that he would not be able to secure Cabinet approval for construction of the submarines if the predicted cost "started with a 4 [A$4 billion]". Evaluation and final selection The four tenders resulting from the study were submitted during October and November 1986. Although the IKL/HDW design was rated highest during the initial inspection, the evaluation team found that the German proposal was less attractive than previously thought. Although IKL/HDW claimed that their boat could meet the RAN's performance requirements, the evaluators concluded from the information provided that doing so would require the deactivation of all non-essential and some essential systems. Conversely, Kockums' proposal conceded that they did not meet the requirements, although evaluators found that the figures failed by only narrow margins, and believed that these were conservative. The evaluation team recalculated the capability statistics for both submarines to a common baseline, portraying the predicted Australian operating conditions, which generally saw Kockums' figures revised upwards, and those from IKL/HDW downwards. This resulted in growing support for the Type 471 bid, and outcries from the IKL and HDW groups, who questioned the validity of the recalculations and if the Australian evaluators had the experience to do this correctly. Analysis of the two combat system proposals saw Signaal fall out of favour with the tender reviewers. This was primarily attributed to a cost-reducing re-design late in the process: the changes were not fully documented because of time constraints. Supporting documentation was further criticised by the reviewers for being vaguely worded and not using milspec terminology and standards. In addition, the system proposed by Rockwell appeared to have greater performance capabilities, and would be cheaper to implement. On 18 May 1987, the Australian Cabinet approved the final design: Kockums' Type 471 submarine, fitted with the Rockwell combat system and Diesel-Electric propulsion units provided by the French engineering firm Jeumont-Schneider. The contract for construction of six submarines was signed on 3 June and valued at A$3.9 billion in 1986 prices, with allowances for inflation and the changing value of the Australian dollar. The submarine acquisition project was at the time the most expensive project ever undertaken by the Australian Defence Force, but was unseated from this title by the Anzac-class frigate project a few years later. Construction The Australian Submarine Corporation construction facility was established on previously undeveloped land on the bank of the Port River, at Osborne, South Australia. Work on the site began on 29 June 1987, and it was opened in November 1989. South Australia was selected as the site of the construction facility based on the proposed location of the facility and promises by the State Government to help minimise any problems caused by workers' unions. 
The state's bid was aided by careful promotion to both Kockums and IKL/HDW during early in the project, and problems with the other states' proposals: Tasmania and Western Australia lacked the necessary industrial base, New South Wales could not decide on the location of the construction facility, Victoria's proposed site was poorly sited, and building in Liberal-led Queensland would have been politically unwise for the project when Labor was in power both federally and in all other states. Each submarine was constructed in six sections, each consisting of several sub-sections. One of the main criteria of the project was that Australian industries contribute to at least 60% of the work; by the conclusion of the project 70% of the construction and 45% of the software preparation had been completed by Australian-owned companies. Work was sub-contracted out to 426 companies across twelve countries, plus numerous sub-sub-contractors. In many cases, components for the first submarine were constructed by companies outside Australia, while those for the following five boats were replicated by an Australian-owned partner or subsidiary. The project prompted major increases in quality control standards across Australian industries: in 1980, only 35 Australian companies possessed the appropriate quality control certifications for Defence projects, but by 1998 this had increased to over 1,500. Although the acquisition project organisers originally planned for the first submarine to be constructed overseas, the Cabinet decided as part of the project's approval that all six submarines would be built in Australia; the increases in construction time and cost from not building the lead ship in the winning designer's home shipyard was considered to be offset by the additional experience provided to Australian industries. Even so, two sections of the first submarine were constructed by Kockums' shipyard in Malmo, Sweden. By the end of 1990, Chicago Bridge & Iron and Wormald International had both sold their shares in ASC. The shares were bought up by Kockums and the Australian Industry Development Corporation, with some of Kockums' shares then sold to James Hardie Industries to maintain an Australian majority ownership of the company. On 5 April 2000, the shares in ASC held by Kockums were bought out and the company was nationalised, despite a trend at the time to privatise government-owned companies. At the end of 2003, a contract to maintain the Collins class worth $3.5 billion over 25 years was awarded to ASC. As of April 1996, the option to order the seventh and eighth submarines was still under consideration, but was looked on unfavourably by the Department of Defence at the time, as the additional cost would require the diversion of funding from the Australian Army and Royal Australian Air Force, resulting in an imbalance in the capabilities of the Australian Defence Force. The option was cancelled outright by late 2001. Entry into service The first submarine, , was laid down in February 1990. Collins launch was originally planned for 1994, but was later set for 28 August 1993. Although launched on schedule, she was not complete: the design of the submarine had not been finalised, important internal pipes and fittings were not installed, the components of the combat system had yet to be delivered, and some hull sections were actually sheets of timber painted black so the submarine would appear complete in photographs of the launching ceremony. 
Within weeks of the launch, Collins was removed from the water, and it was not until June 1994 that the submarine was completed. Progress on the other five submarines was delayed by the extra effort required to meet Collins launching date and the subsequent work to complete her. Collins was not commissioned into the RAN until 27 July 1996; eighteen months behind schedule, because of several delays and problems, most relating to the provision and installation of the combat data system software. Collins was not approved for operational deployments until 2000. The other five submarines were scheduled for completion at 12-month intervals. However, the series of defects and problems encountered during sea trials of the submarines (particularly Collins) resulted in the repeated diversion of resources from those still under construction, adding to delays. Consequently, delivery of the submarines ran significantly behind schedule; submarines were presented to the RAN between 21 and 41 months late, and the entire class was not cleared for full operational service until March 2004, a year after the last boat was commissioned. These delays forced the RAN to keep several Oberon-class submarines and the submarine base HMAS Platypus in service beyond their planned decommissioning dates. McIntosh-Prescott Report and Fast Track program Following his appointment as Minister for Defence following the 1998 federal election, John Moore decided that the only way to solve the various problems of the Collins class was for an independent report to be prepared on them. He appointed Malcolm McIntosh, chief executive of the CSIRO and an unofficial advisor to Moore, and John Prescott, a former BHP director, to investigate the project, uncover the problems with the submarines, and suggest ways of solving them. The Report to the Minister for Defence on the Collins class submarine and related matters (commonly referred to as the McIntosh-Prescott Report) was compiled in ten weeks, and released on 1 June 1999. This report concluded that the Collins class was incapable of performing at the required level for military operations. Although the report highlighted several elements of the submarine design that performed to or beyond expectations, and acknowledged that many of the publicised problems had been or were in the process of being fixed, it presented the propulsion system, combat system, and excessive noise as ongoing problems across the class. After identifying the combat system as the central problem, McIntosh and Prescott recommended that it be scrapped entirely and replaced with a system based on commercially available equipment and software. They also claimed that these problems were caused by poor design and manufacture; inappropriate design requirements; deficiencies in the structure of the contract, particularly with regards to modifying the contract to meet changing requirements; and problems between the various parties involved in the construction of the submarines, with a lack of overall direction and conflicts of interest causing avoidable hostility and uncooperativeness. Despite the report being promoted by the government as 'ground-breaking', many people involved with the Collins-class project later claimed that large sections of the report could have been copied from reports previously submitted by the RAN or ASC. 
The report, along with the planned December 2000 decommissioning of the final Oberon-class submarine, HMAS Otama, prompted the establishment of an A$1 billion program to bring the fourth and fifth submarines (Dechaineux and Sheean) up to operational standards, then retrofit the modifications to the other boats. Referred to as the "fast track" or "get well" program, it also included solving the problems preventing various parties from cooperating fully, and improving the negative media coverage and public perception of the class by responding to criticism and providing more information to reporters. Submarines in class Problems during construction and trials The Collins-class submarines experienced a wide range of problems during their construction and early service life. Many of these were attributed to the submarines being a new, untested design, and were successfully addressed as they were discovered. Most systems and features worked with few or no problems, while the boats' maximum speed, manoeuvrability, and low-speed submerged endurance were found to exceed specifications. The ship control system, which during development had been marked as a major potential problem, functioned beyond positive expectation: for example, the autopilot (which aboard Collins was nicknamed 'Sven') was found to be better at maintaining depth during snorting than most helmsmen. However, problems with the combat system, excessive noise, and engine breakdowns were recurring and appeared across the entire class. These and other shortcomings were often made harder to solve by disagreements between Kockums, ASC, Rockwell, the RAN, and the Australian Government over the nature of problems, their causes, and who was responsible for solving them. Media reporting of the problems during the mid-1990s was often negative and exaggerated, creating poor public perception. This was aided by politicians, who used the shortcomings to politically attack the Labor Party and Kim Beazley, particularly after Labor was defeated by the Liberal-National Coalition in the 1996 federal election, and Beazley became Leader of the Opposition. During the mid-1990s, it was recommended on several occasions that the submarine project be abandoned, and the completed submarines and incomplete hulls be broken up for scrap. Following the McIntosh-Prescott Report, which indicated the long-term faults with the class that still required solving, successful efforts were made to bring the submarines to operational standard. As part of this, a public relations plan was implemented to provide up-to-date information on the submarines to the media, to improve the public perception of the class by providing factual information on the status of the project and responding to queries and incidents. This same period saw the dispelling of the idea, widely held within the RAN, that the Collins-class boats would be like any other vessel previously ordered by the RAN: in service with another navy, well tested, and with all the problems solved before they entered Australian hands. The RAN began to realise that as the parent navy for the class, they had a greater responsibility than normal in ensuring that the boats were at an operational standard. Welding of Collins During assembly of Collins' bow and escape tower sections in Sweden, multiple defects in the hull welding were discovered. 
Different reasons were given by different parties for the problems: To speed production, Kockums employed welders who were not qualified to work on high strength steels; the Qualified Welding Procedures developed by Kockums for these steels were not followed in production; the steel alloy used for the hull required different welding techniques to those normally used by Kockums; the Swedish navy always requested partial penetration welds for their submarines, while the RAN wanted full penetration welding, but had not made this clear; delays in delivering the steel plates to Kockums resulted in rushed work and a resulting drop in quality. Kockums engineers proposed that the section be kept in Sweden for repairs, but to minimise delays it was accepted as-is, with repairs attempted at ASC during full assembly of the first boat. Kockums sent welders and inspection technicians to ASC in order to assist in undertaking these repairs. However, when Collins returned to the ASC facility in April 2001 for a year-long maintenance docking, multiple welding defects were found in the bow and escape tower sections of the submarine (the two sections constructed by Kockums), while almost no problems were found in the welding of the four Australian-built sections. Repairing these welds quadrupled the time Collins spent in dock. Noise signature The noise made by the submarines, which compromised their ability to stay hidden, was another major problem with the design. In the original requisition, the RAN guidelines for the noise signature of the new submarines were vague; for example, asking that they be "twice as quiet" as the Oberons. Expectations and operational requirements also changed between the 1987 contract signing and when the submarines began operating in the late 1990s. The major element of the noise signature for the Oberon class was machinery noise transmitted through the hull; this was successfully avoided during construction of the Collins class by mounting machinery on platforms isolated from the hull. Noise testing during 1996 and 1997 found that the hydrodynamic noise signature—the noise made by a submarine passing through the water—was excessive, particularly at high speed. The shape of the hull was the main cause: although a scale model of the design had been tested during the funded study and was found to have a minimal signature, the hull shape was changed after the contract was signed, primarily by a lengthening of the submarine and a redesign of the bow dome to accommodate the larger-than-expected main sonar and reduce its blind spot (the baffles). The design had not been retested, as who would pay for this could not be agreed on. Propeller cavitation, caused by water flow over control surfaces onto the propeller at certain speeds, was the other main noisemaker. Cavitation had not been a problem with earlier Swedish submarine designs or during early testing of the Type 471 design, but the propeller had to be redesigned late in the process to provide more power, and like the redesigned hull, was not retested. During the year 2000, an unusual meeting took place with a next door neighbor (Francis 'Frank' Smith) of the then HMAS Stirling Naval Base commander. He was an Aircraft Maintenance Engineer (originally trained at Government Aircraft Factories Fisherman's bend) who had been aware of the fluid dynamics issues of the Collins class for some time, purely by interest and observation on television. 
After a lengthy discussion, he was invited to present and, where possible, demonstrate his observations at the Stirling Naval Base to Navy and Defence Science and Technology Organisation (DSTO) staff who were there at the time as part of an investigative group. On a whiteboard, he outlined the aerofoil problem with the dorsal sail (conning tower) structure, showing that its aspect ratio (span, or height, to chord, or width) was too low and that severe turbulence and cavitation would be generated by such a design. He demonstrated this again on the whiteboard using aircraft aerofoil wing shapes as a basis for the discussion, and explained that the turbulence and cavitation generated would, by natural rearward flow, move down the rear upper deck surface of the hull and be drawn into the propeller. He was also able to demonstrate that the design of the bow section would not pass a flow test for generated turbulence and cavitation, the change in shape from the circular bow section to the long hull being ill-conceived. He made several recommendations during the lecture that he considered cost-effective and achievable: to lengthen and taper the dorsal fin and create a more streamlined integration of the fin with the flat upper deck section of the hull, and to 'fill in' the hollow section of hull aft of the bow curvature. Both could be achieved with carbon fibre or fibreglass covers, as no load-bearing strength would be required. Subsequent studies by the DSTO showed that the submarine's hull shape, particularly the redesigned sonar dome, the fin, and the rear of the submarine, focused the displaced water into two turbulent streams; when the seven propeller blades hit these streams, the propeller's vibration was increased, causing cavitation. These problems were fixed by modifying the casing of the submarine with fibreglass fairings. Propulsion system During trials of the first submarines, the propulsion system was found to be prone to failure for a variety of reasons. Most failures were attributed to the fifteen-tank diesel fuel system: the tanks were designed to fill with salt water as they were emptied to maintain neutral buoyancy, but water would regularly enter the engines due to a combination of poor design, insufficient gravity separation of the fuel and water, and operator error resulting from poor training. Problems were also caused by bacterial contamination of the diesel fuel, which, along with the salt water, would cause the fuel pumps to rust and other components to seize. The fuel-related issues were solved by installing coalescers, improving training and operational procedures, and adding biocides to the fuel. Propeller shaft seals were a significant problem on Collins and Farncomb. Although designed to allow for a leak of per hour, during trials it was found that the seals would regularly misalign and allow hundreds of litres per hour into the boat—during one deep diving test the flow rate was measured at approximately a minute. ASC claimed that solving these problems could be done by manually adjusting the seals as the submarine dived and rose, but this would have required a sailor dedicated solely to that task, affecting efforts to minimise the required number of personnel. It was found that the problem could be temporarily alleviated by running the propeller in reverse for 100 revolutions, pulling the seal back into alignment, although a permanent solution could not initially be found, as ASC refused to accept responsibility for the problem, and the original manufacturer of the seals had closed down.
New suppliers were found, with modified seals fitted to the first two submarines in late 1996, before completely re-designed seals were fitted to the boats in late 1997, solving the problem. The propellers themselves were also found to be poorly manufactured, having been shaped by hand, with at least one cast at the wrong pitch. This was rectified by using a five-axis milling machine for future shaping work and replacing the miscast propeller. The material used for the propellers was also found to be weaker than expected, developing fatigue cracks after only a few years of use. Instead of going to Kockums, which had started to go into decline after the end of the Cold War, the submarine project office sent the propeller to the United States Navy for redesigning. Despite the Americans fixing the problems with the propeller design, resulting in significant performance improvements, the Swedish company was dissatisfied with the Australian actions; the dispatch of the propellers was one of the points of contention in the company's legal action in the mid-2000s against the Australian government over ownership of the intellectual property rights to the submarine's design. Other propulsion problems included excessive motor vibrations at certain speeds which damaged various components (which was attributed to the removal of a flywheel and to corrosion caused by the fuel problems), and excessive fuel consumption in Collins at high speed (found to be caused by manufacturing problems with the turbines and turbochargers). The propulsion system was also found to be a secondary source of noise: poor design of the exhaust mufflers, weight-saving measures in the generator mountings, and an incorrect voltage supply to the battery compartment exhaust fans were noise-creating factors found and eliminated during studies by the DSTO. In March 2010, the Department of Defence revealed that the generators in five of the submarines were flawed and had to be replaced. The three generators aboard each of the five submarines are to be replaced in the submarines as they come in for their next maintenance docking. Periscopes and masts The periscopes had two problems, the first of which was shared with the other masts. They were not streamlined; raising a periscope while moving would create enough drag and turbulence to shake the entire submarine. As with many elements of the submarine, there were disagreements as to who was responsible for the problem. It was solved by modifying the masts to redirect the water flow around them (for example, a spiral wrap was fixed around the head of each periscope). The periscopes also had problems with their optics: periscope users reported difficulty in refocusing after changing magnification, duplication of images, and bands across the field of vision. These problems were attributed to RAN demands that the optical view be the first exposed when a periscope was raised above the water, instead of placing the infrared sensor and single-pulse radar at the head as on other submarines, requiring the optical path to be routed around these components. The periscopes were gradually improved, and were no longer a problem by the time the fast track submarines entered service. Combat system Despite the public focus on the various physical issues with the boats, the major problem with the submarines was the development of the Rockwell combat system. 
The problems had started during the funded study, when Singer Librascope and Thomson CSF, who were partnering with Rockwell to develop the combat system, refused to release their intellectual property or their software code for Rockwell to sell. It was proposed that Computer Sciences of Australia, a division of Computer Sciences Corporation and a minor partner in the consortium, take over the role of writing the software for the combat system, although this meant that Singer Librascope, which had prior experience in creating submarine combat systems, was reduced to a minor role in the project. Other major problems with the system, to which most of the later difficulties were attributed, were that the original concept was beyond the technology of the day, and that the system architecture required by the RAN was both overly ambitious and flawed. This was compounded by the rate of advancement in computer technology: equipment had to be designed from scratch and custom manufactured at the start of the project, but by the time it was installed, it was obsolete compared to commercially available hardware and software. Australian Submarine Corporation was made responsible for the delivery of the Rockwell combat system, but had little ability to enforce this. Rockwell was contracted to deliver the combat system by 9 September 1993, but appeared unlikely to do so. ASC's management board voted to issue a default notice to Rockwell after the American company defaulted on the contract, but was ordered by the Department of Defence to retract the default notice and accept gradual delivery of partially completed versions of the combat system—referred to as 'releases' and 'drops'—until the complete system had been delivered. Sea trials of Collins were unable to commence until Release 1.5 of the combat system software was delivered; because of ongoing delays in the provision of the software, the early phases of the trials were completed using stand-alone equipment. By March 1994, the combat system had become the major area of concern for the submarine project: assembly of the system was almost nine months behind schedule, and at least 20% of the software had not been compiled. The combat system continued to be a problem during the next few years, with progressive drops offering little improvement in performance over the previous version, and the completion date of Release 2—the designation for the full contractual realisation of the combat system software—was continually postponed. In 1996, Rockwell sold its military and aerospace division, including responsibility for the Collins combat system, to Boeing. Boeing attempted to produce a workable combat system, but believed that this could only be done if the changes in technology were accounted for in a contract alteration, which the RAN and the Australian Government initially refused to do. Boeing then requested assistance from Raytheon and, after further negotiations with the Government resulted in a reduction of the system's capabilities, the companies were able to stabilise the system and deliver Release 2.0 at the end of 1999. Boeing sold its naval systems division to Raytheon in May 2000, making the latter company solely responsible for completion of the combat system. After this, the submarine project began investigating ideas for a new combat system.
Because there was not enough time to evaluate a replacement system for inclusion in the "fast track" program, Dechaineux and Sheean were fitted with the old Rockwell combat system, which was enhanced by the addition of sub-systems developed during the early 1980s for the Oberon-class mid-life upgrade and commercial off-the-shelf components. Even with the enhanced system, it was believed that the capabilities of the fast track Collins boats were at best equivalent to those of the Oberons. Lockheed Martin, Thales, STN Atlas, and Raytheon were approached to provide tenders to design and assemble a new combat system for the submarines, with all four submitting proposals during early 2000. In May 2000, after the DSTO tested operational versions of the proposed combat software packages, the Lockheed and Thales tenders were eliminated, despite the Thales proposal being rated better than Raytheon's. After in-depth testing of the remaining systems and observations of the systems in action (the German STN Atlas ISUS 90-55 aboard an Israeli Dolphin-class submarine and the American Raytheon CCS Mk2 aboard a USN Los Angeles-class submarine), it was decided that the STN Atlas system was the best for the class. However, political pressure from both the United States and Australia, questions about the security problems and possible leaks involved with a European combat system linked to American weapons, and desires to increase the political and military ties between Australia and the United States resulted in the cancellation of the tender program in July 2001 and the decision to enter a joint development program with the United States, with a formal agreement signed on 10 September 2001 at the Pentagon. The second combat system development program proceeded with far fewer problems, and took the tactical and fire control components from the CCS Mk2 system, and the sonar interface component from the fast track program. The system is the AN/BYG-1 that was developed for the new USN Virginia-class submarine and has since been retrofitted to the whole USN fleet. The new combat system was installed in Waller in 2008, Farncomb in 2009, Dechaineux in 2010, Sheean in 2012, and Rankin in 2014, with Collins scheduled for 2018. The system can receive new software releases and hardware upgrades; new versions of the system are regularly released, with the version operated by a boat dependent on its full cycle docking schedule. Budget Several newspaper articles and commentators have incorrectly claimed that the project ran significantly over the contract cost. As of the launch of the first submarine, the project cost had increased from A$3.892 billion in 1986 dollars to A$4.989 billion in 1993 dollars, which corresponded to the rate of inflation during that period. By 2006, A$5.071 billion had been spent to build the submarines (excluding the fast track program); after taking inflation into account, the project had run less than A$40 million over contract. Of the A$1.17 billion allocated to the fast track program, only A$143 million was required to fix problems where the submarines did not correspond with the original contract: the rest was used to update components that were technologically obsolete and to make changes to the submarines beyond the contract specifications. When the fast track program is factored in, the Collins class cost just under 20% more than the inflation-adjusted contract value, a smaller increase than that of other contemporary defence projects.
Characteristics The Collins class is an enlarged version of the Kockums Västergötland-class submarine. The design was referred to as the Type 471 Submarine until it was decided to name the lead boat, HMAS Collins, after RAN Vice Admiral Sir John Augustine Collins. The names of the six submarines were first announced during Collins' laying down ceremony: Collins, Farncomb, Waller, Dechaineux, Sheean, and Rankin, all named after Australian naval personnel who distinguished themselves during World War II. The Collins-class submarines are classified by the RAN as SSGs, or guided missile submarines, although some defence industry websites refer to the boats as hunter-killer submarines, or SSKs. At in length, with a beam of and a waterline depth of , the six boats were the largest conventionally powered submarines in the world at the time of their commissioning. The submarines are single-hulled, and have two continuous decks. Each boat displaces when surfaced, and when submerged. The depth that the submarines can dive to is classified: most sources claim that the diving depth is in excess of , although some give the maximum depth as over . Following the near-loss of Dechaineux in 2003 when a seawater hose burst during a deep dive, the diving depth was reduced. The hull is constructed from a high-tensile micro-alloy steel, developed by Swedish steel manufacturer SSAB, and improved by BHP of Australia, which was lighter and easier to weld than the HY-80 or HY-100 nickel-alloy steel used in contemporary submarine construction projects, while providing better results in explosion bulge testing. The submarines are covered in a skin of anechoic tiles to minimise detection by sonar: Collins was retrofitted with the tiles after the standard sonar signature of the submarine had been established, while the other five boats were covered during construction. These tiles were developed by the Australian Defence Science and Technology Organisation (DSTO): as the United States and United Kingdom would not share their information on the tiles used on their nuclear submarines, Australian researchers had to develop them from scratch. The tiles were moulded in the shape of the hull, and are secured by a commercial adhesive normally used to fix cat's eyes to road surfaces: although British and American submarines are often seen with missing tiles, as of March 2007, none have been lost from a Collins-class boat. Armament The Collins-class submarines are armed with six torpedo tubes, and carry a standard payload of 22 torpedoes. Originally, the payload was a mixture of Gould Mark 48 Mod 4 torpedoes and UGM-84C Sub-Harpoon anti-ship missiles, both previously carried by the Oberon-class boats. In 2006, the Mark 48 torpedoes were upgraded to the Mod 7 Common Broadband Advanced Sonar System (CBASS) version, which was jointly developed with the United States Navy. Waller was the first vessel of either navy to fire an armed Mod 7, sinking the decommissioned destroyer on 16 July 2008, during RIMPAC 08. Some or all of the torpedo payload can be replaced with up to 44 Stonefish Mark III mines. During the construction phase, consideration was given to acquiring submarine-launchable Tomahawk cruise missiles, which would have given the boats the capability to attack land targets after minor modifications.
Plans to acquire Tomahawk or similar land-attack missiles remained under consideration until 2009, when the Defending Australia in the Asia Pacific Century: Force 2030 white paper was released, stating that land-attack missiles would instead be incorporated into the armament of the Collins-class replacement. The Collins class was not designed to support special forces operations, providing only a limited capability similar to that of the Oberon class. In 2005, Collins received a special forces upgrade to provide three capabilities: multi-swimmer release, float on/float off, and exit and reentry. However, there were issues with exit and reentry during sea trials. Originally only one submarine was planned to receive the upgrade. In 2014, Dechaineux was upgraded and the issue with exit and reentry was rectified. Collins is scheduled to receive the safety upgrade for exit and reentry at its next maintenance docking. However, the full special forces upgrade is yet to be achieved, with outboard stowage of equipment such as inflatable boats still in the design phase. Propulsion Each submarine is equipped with three Garden Island-Hedemora HV V18b/15Ub (VB210) 18-cylinder diesel engines, which are each connected to a 1,400 kW, 440-volt DC Jeumont-Schneider generator. The combined electrical generation capability of each submarine is 4.2 megawatts. The Hedemora diesels were chosen because of their modular construction, which made servicing easier; because they could be installed three across in the available space, while other contenders required at least two banks of two; and because they had turbochargers driven by the exhaust gas. Fifteen fuel tanks are located throughout the submarine: they must be used in specific sequences to preserve the submarine's buoyancy and trim. Electricity is stored in four lead-acid battery packs, totalling 400 tonnes, assembled by Pacific Marine Batteries, a joint venture between VARTA of Germany and Pacific Dunlop of Australia. These supply a single Jeumont Schneider DC motor, which provides 7,200 shaft horsepower to a seven-bladed, diameter skewback propeller. The propeller design is classified Top Secret, and must be covered before a Collins-class submarine can be removed from the water for maintenance. Emergency propulsion is provided by a MacTaggart Scott DM 43006 retractable hydraulic motor. The aft control surfaces are mounted on an X-shaped structure, giving the boats the ability to outmanoeuvre most warship and submarine classes. The Collins class has a speed of when surfaced and at snorkel depth, and can reach underwater. When travelling at , the submarines have a range of along the surface, or at snorkel depth. When fully submerged, a Collins-class submarine can travel at . Each boat has an endurance of 70 days. Nuclear propulsion was ruled out at an early stage of the project, because supporting nuclear submarines would have been extremely difficult without a nuclear power industry in Australia, and because of public opposition to such infrastructure. Air-independent propulsion (AIP) was also considered for the class, and the submarines were designed to be retrofitted with an AIP system. The AIP plan was cancelled in July 1996, after it was demonstrated during sea trials that, during constant operations, the boat's snorkel was exposed for only a few minutes in a 24-hour period; officials from ASC claimed that any Collins-class submarine spotted while snorting would be because the boat was "dead unlucky".
Installation of AIP was not believed to provide enough of an improvement on this to justify the predicted A$100 million cost. Sensors and systems The main sonar array is a Thomson Sintra Scylla active/passive bow sonar, linked to a passive intercept and ranging array distributed along the flanks of the submarine, with three panels on each side. Collins and Farncomb were originally fitted with Thales Karriwarra passive towed sonar arrays, while the other four boats could be fitted with the Karriwarra or Thales' Namara array. These were later replaced across the class with the Thales SHOR-TAS towed passive array, deployed through the horizontal 'pipe' at the stern. When surfaced or at periscope depth, the Collins-class boats can use a Kelvin Hughes Type 1007 surface search radar, which is situated in a retractable mast on the fin. Each submarine is fitted with a CK043 search periscope and a CH093 attack periscope. The periscopes were manufactured by Pilkington Optronics (now Thales Optronics), and experienced several problems early in the submarines' service lives. The hardware for the original combat system was based around the Motorola 68000 family of processors. The replacement combat system consists of the tactical and fire control components from the Raytheon CCS Mk2 system, combined with the sonar interfaces developed for the improved combat system used aboard Sheean and Dechaineux. Countermeasures include a Condor CS-5600 ESM intercept and warning unit, and two SSE decoys. The boats are fitted with a Marconi SDG-1802 degaussing system, and a receive-only Link 11 combat information exchange datalink. In October 2006, Sagem Défense Sécurité was selected to fit the Collins class with SIGMA 40XP gyrolaser inertial navigation systems. Ship's company Originally, the standard complement of each submarine was six officers and thirty-six sailors, with facilities to carry an additional twelve personnel (usually trainees). This number was minimised during design by the RAN, which insisted that functions be automated where possible; the RAN also required that each sailor have his own rack and not need to 'hot bunk'. It was originally intended that multiple ship's companies be established per submarine, and that these be rotated to maximise the submarines' time at sea without adversely affecting personnel, but difficulties in maintaining submariner numbers made this plan unworkable. Enlisted submariners are accommodated in six-bunk cabins. In May 1997, two groups of six female sailors were posted to Collins and Farncomb to test the feasibility of mixed-sex submarine companies. Following the trial's success, eleven female sailors and one female officer commenced submarine training in 1998. Officers and senior enlisted submariners slept in mixed accommodation, but junior enlisted female sailors could only be deployed in groups of six: one of the enlisted cabins was set aside, and all six bunks in the cabin had to be filled. Mixed accommodation for all female submariners was approved in June 2011, in order to increase posting opportunities and help make up shortfalls in submarine complements. During the late 1990s, a combination of low recruitment and retention rates across the RAN resulted in the number of trained submariners falling below 40% of that required. In an attempt to retain submariners, the RAN offered a one-off A$35,000 bonus in 1999.
Other measures introduced around the same time included priority transfer of volunteers for submarine training and rotating submariners between sea and shore assignments to relieve them from continual sea service and prevent burnout. A year later, these measures had increased submariner numbers to 55% of requirements. However, the problem with submarine crewing continued; by 2008 the RAN could provide complete companies for only three of the six submarines. A review by Rear Admiral Rowan Moffitt during 2008 (the Submarine Workforce Sustainability Review or Moffitt Report) found that poor leadership and a culture of "mission achievement at almost any cost" resulted in submariners who were regularly stressed and fatigued from working for up to 22 hours at a stretch, under conditions worse than those experienced by the Special Air Service during the Afghanistan conflict. Submariners were also found to have lower morale and job satisfaction levels than those in any other position in the RAN, with these factors combining to cause a high rate of personnel burnout, while resignations meant that the average experience level of those remaining decreased. The report, publicly released in April 2009, made 29 recommendations to improve conditions and stabilise or increase submariner numbers, all of which the RAN agreed to adopt. These measures included increasing each boat's complement to 58 to spread workload (a practice successfully employed aboard Farncomb since December 2008), reducing the length of patrols and increasing shore leave, paying bonuses for submariners who remain in the submarine service for at least eighteen months, and providing internet access aboard the submarines. A dedicated recruiting program was also suggested, promoting the submarine service as an elite unit, and targeting RAN personnel aboard surface ships, former submariners whose civilian jobs may have been affected by the global financial crisis, and submariners in foreign navies. The program was successful; by June 2010, three expanded ship's companies were active, while a fourth was undergoing training. By December 2012, the fourth company was active, and was preparing to bring a submarine out of deep maintenance in 2013. Sustainment, maintenance and upgrade The sustainment, maintenance and upgrade of the submarines is undertaken by the platform system integrator, ASC Pty Ltd, in conjunction with the Australian Submarine Enterprise, made up of the Department of Defence, Raytheon Australia (the combat system integrator) and the Royal Australian Navy. ASC is also responsible for supply chain management, carries out in-service rectification tasks, and is the design authority for the submarines, with the ability to assess and action changes to the platform design. Under the RAN's revised usage-upkeep cycle, each submarine spends ten years on operations and two years in deep maintenance at ASC's facility in Osborne, South Australia. During a submarine's ten-year operational period it undergoes regular planned maintenance activities at ASC's Western Australian operations at Henderson, adjacent to Fleet Base West. These include a 12-month-long mid-cycle docking and several shorter-duration maintenance activities. ASC and the Submarine Enterprise manage the upgrades to the Collins capability under the Collins Continuous Improvement Program (part of Defence procurement project SEA 1439).
The sustainment, maintenance and upgrade of the Collins-class fleet underwent a Federal Government-commissioned root-and-branch review from 2011 by Dr John Coles, and major reforms were instituted in the following years, including an innovation program across deep maintenance operations at ASC in Osborne. ASC was later recognised by Engineers Australia with an award for the innovation and effectiveness of its improvements to Collins sustainment. The result of the system-wide reform by the Submarine Enterprise has been a "dramatic turnaround" in submarine availability for the RAN, with the Collins-class program performing as an "exemplar". The latest review by Dr Coles found that ASC and the Submarine Enterprise were achieving submarine sustainment and availability at or above international benchmarks. Operations and deployments The entire class is based at HMAS Stirling, also known as Fleet Base West, which is located on Garden Island, off the coast of Western Australia. The decision to locate all six submarines at Stirling was prompted by the lack of suitable long-term facilities on the east coast of Australia (although individual submarines can use Fleet Base East in Sydney Harbour as a forward staging facility), and the proximity to Australian offshore interests, including most of the nation's external territories, the oil and natural gas resources of the North West Shelf, and the Indian Ocean sea lines of communication, through which the majority of Australia's seaborne trade passes. The submarines' primary missions are patrolling the waters of Australia and nearby nations, and gathering intelligence through the interception of electronic communications by foreign nations and the deployment and retrieval of special forces operatives. Operational history Two boats, including Waller, reportedly operated in support of the International Force for East Timor (INTERFET) in 1999, providing escorts for transport ships and monitoring Indonesian communications. Navy clearance divers who infiltrated the Oecussi Enclave to conduct a covert beach reconnaissance ahead of an amphibious landing were reportedly inserted from Waller. During several multinational exercises and wargames, the Collins class has demonstrated its effectiveness in the hunter-killer role by successfully attacking both surface warships and other submarines. In late May 2000, Waller became the first Australian submarine to operate as a fully integrated component of a USN carrier battle group during wargames. Waller's role was to search for and engage opposing submarines hunting the aircraft carrier Abraham Lincoln, a role in which she performed better than expected. A few days later, as part of the multinational exercise RIMPAC 2000, Waller was assigned to act as an 'enemy' submarine, and was reported to have successfully engaged two USN nuclear submarines before almost coming into attacking range of Abraham Lincoln. Waller performed similarly during the Operation Tandem Thrust wargames in 2001, when she 'sank' two USN amphibious assault ships in waters just over deep, although the submarine was 'destroyed' herself later in the exercise. Waller's second feat was repeated by Sheean during RIMPAC 02, when the boat was able to penetrate the air and surface anti-submarine screens of an eight-ship amphibious task force, then successfully carry out simulated attacks on both the amphibious assault ship and the dock landing ship .
Later that year, during two weeks of combat trials in August, Sheean demonstrated that the class was comparable in the underwater warfare role to the Los Angeles-class nuclear-powered attack submarine Olympia. The two submarines traded roles during the exercise and were equally successful in the attacking role, despite Olympia being larger, more powerful, and armed with more advanced torpedoes. In 2003, a Collins-class boat carried out successful attacks on two USN nuclear submarines and an aircraft carrier during a multinational exercise. The repeated successes of the class in wargames and multinational exercises earned the Collins class praise from foreign military officers for being "a very capable and quiet submarine", and recognition of the boats as a clear example of the threat posed to navies by modern diesel submarines. On 12 February 2003, Dechaineux was operating near her maximum safe diving depth off the coast of Western Australia when a seawater hose burst. The high-pressure seawater flooded the lower engine room before the hose was sealed off: it was estimated that if the inflow had continued for another twenty seconds, the weight of the water would have prevented Dechaineux from returning to the surface. The RAN recalled the Collins-class submarines to base after the incident; after engineers were unable to determine any flaws in the pipes that could have caused the incident, the maximum safe diving depth of the class was reduced. On 10 June 2005, Rankin became the first submarine since 1987 to receive the Gloucester Cup, an award presented to the RAN vessel with the greatest overall efficiency during the previous year. The award was subsequently presented to Sheean in 2006, and again to Rankin in 2008. In March 2007, Farncomb had an emergency when crew were washed overboard while attempting to remove fishing line from the propeller. The boat was reportedly conducting surveillance on Chinese Navy submarines in the South China Sea. In 2008 and 2009, personnel shortages reduced the number of submarines able to be deployed to three; the maintenance cycles of Sheean, Rankin, and Dechaineux, and problems with Collins and Waller, further reduced this to one, Farncomb, in mid-2009. Farncomb was docked for repair after a generator malfunction in February 2010, by which point Collins and Waller were active (the former on limited duties because of defects), and Dechaineux was slated to re-enter service by May 2010. Workforce shortages and malfunctions on other submarines during the preceding two years impacted heavily on the maintenance of Sheean and Rankin, with RAN and ASC officials predicting that they would not be active until 2012 and 2013, respectively. In June 2011, The Australian newspaper claimed that, although two submarines (Waller and Dechaineux) were designated as operational, neither was in a sailable condition. The initial findings from the Coles Review revealed significant, systemic problems with the submarines and noted the need for their management to be reformed. A 2014 statement by Vice Admiral Ray Griggs indicated that up to four submarines had been operational on most occasions since 2012. Replacement The submarines originally had a predicted operational life of around 30 years, with Collins to decommission around 2025. The Submarine Institute of Australia released a report in July 2007 arguing that planning for the next generation of Australian submarines had to begin soon.
In December 2007, shortly after the 2007 federal election, the Australian government announced that planning for a Collins-class replacement (procurement project SEA 1000) had commenced. The 2009 Defending Australia in the Asia Pacific Century: Force 2030 white paper confirmed the replacement project, and announced that the submarine fleet would be increased to twelve vessels to sustain submarine operations in any conflict, and counter the growing potency of Asian-Pacific naval forces. The 2009 white paper outlined the replacement submarine as a 4,000-ton vessel fitted with land-attack cruise missiles in addition to torpedoes and anti-ship missiles, capable of launching and recovering covert operatives while submerged, and carrying surveillance and intelligence-gathering equipment. The project initially had four options: a Military-Off-The-Shelf (MOTS) design without modification, a MOTS design modified for Australian conditions, an evolution of an existing submarine, or a newly designed submarine. Nuclear propulsion was ruled out because of the lack of nuclear infrastructure and public opposition to nuclear technology. Designs initially considered for purchase or modification included the Spanish , the French-designed , the German-designed Type 214, and Japan's , along with an evolution of the Collins. There were long delays in organising the replacement project. Originally, preliminary designs were to be established for selection by 2013, with detailed design work completed by 2016. However, meetings to clarify concepts and intended capabilities did not occur until March 2012, and initial design phase funding was not approved until May 2012, pushing construction start out to 2017. By November 2014, initial capabilities still had not been decided on, with recommendations to be made across 2015. The best case prediction for seeing the first new submarine enter service, made in 2012, was "after 2030", with the lack of decision making partly attributed to politicians fearing being held responsible for a repeat of the issues surrounding the Collins class. Throughout 2014, there was increasing speculation that the Sōryū class (or a derivative) was the most likely candidate for the replacement. Defence technology sharing deals between Japan and Australia, along with the loosening of Japanese defence export restrictions, were seen as preliminary steps towards such a deal. The close personal relationship between the then-Australian Prime Minister Tony Abbott and Japanese Prime Minister Shinzō Abe was also cited as a factor in the likeliness of such a deal. In response to the rumours of the Japanese deal, unsolicited proposals were made by ThyssenKrupp Marine Systems (its Type 216 submarine concept), Saab (an enlarged version of the A26 submarine), and Thales and DCNS (a diesel-electric variant of the Barracuda-class submarine). In January 2015, a three-way "competitive evaluation process" between the Japanese proposal, ThyssenKrupp's plan, and the Thales-DCNS offer was announced. A 2012 study of the Collins class concluded that the submarines' lifespan could be extended by one maintenance cycle (seven years) to cover any capability gap, with lead submarine Collins to be retired in the early 2030s. On 26 April 2016, Prime Minister Malcolm Turnbull announced the Shortfin Barracuda by French firm DCNS as the winner. 
On 15 September 2021, it was announced that, following the signing of a new trilateral security partnership named AUKUS between Australia, the United States and the United Kingdom, which would include alignment of technologies, the troubled Attack-class programme would be cancelled, with Australia instead investing in the procurement of new nuclear-powered submarines, which would incorporate existing American and British technology. See also List of submarine classes in service External links Virtual Fleet – Virtual tour of RAN warships, including the Collins class submarine. Submarine Names – RAN webpage providing histories of the six personnel the submarines are named after. Reviews and reports Report to the Minister for Defence on the Collins class submarine and related matters – the 1999 report by McIntosh and Prescott on the state of the Collins class project. A brief on the issues arising from consideration of the requirements for a future submarine capability for Australia – the 2007 report by the Submarine Institute of Australia which prompted the commencement of the Collins class replacement project. Submarine Workforce Sustainability Review – declassified text of the 2008 review by Moffitt on providing ship's complements for the Collins class submarines.
22641
https://en.wikipedia.org/wiki/Oxford%20English%20Dictionary
Oxford English Dictionary
The Oxford English Dictionary (OED) is the principal historical dictionary of the English language, published by Oxford University Press (OUP). It traces the historical development of the English language, providing a comprehensive resource to scholars and academic researchers, as well as describing usage in its many variations throughout the world. Work began on the dictionary in 1857, but it was only in 1884 that it began to be published in unbound fascicles as work continued on the project, under the name of A New English Dictionary on Historical Principles; Founded Mainly on the Materials Collected by The Philological Society. In 1895, the title The Oxford English Dictionary was first used unofficially on the covers of the series, and in 1928 the full dictionary was republished in ten bound volumes. In 1933, the title The Oxford English Dictionary fully replaced the former name in all occurrences in its reprinting as twelve volumes with a one-volume supplement. More supplements came over the years until 1989, when the second edition was published, comprising 21,728 pages in 20 volumes. Since 2000, compilation of a third edition of the dictionary has been underway, approximately half of which was complete by 2018. The first electronic version of the dictionary was made available in 1988. The online version has been available since 2000, and by April 2014 was receiving over two million visits per month. The third edition of the dictionary most likely will appear only in electronic form; the Chief Executive of Oxford University Press has stated that it is unlikely that it will ever be printed. Historical nature As a historical dictionary, the Oxford English Dictionary features entries in which the earliest ascertainable recorded sense of a word, whether current or obsolete, is presented first, and each additional sense is presented in historical order according to the date of its earliest ascertainable recorded use. Following each definition are several brief illustrating quotations presented in chronological order from the earliest ascertainable use of the word in that sense to the last ascertainable use for an obsolete sense, to indicate both its life span and the time since its desuetude, or to a relatively recent use for current ones. The format of the OEDs entries has influenced numerous other historical lexicography projects. The forerunners to the OED, such as the early volumes of the Deutsches Wörterbuch, had initially provided few quotations from a limited number of sources, whereas the OED editors preferred larger groups of quite short quotations from a wide selection of authors and publications. This influenced later volumes of this and other lexicographical works. Entries and relative size According to the publishers, it would take a single person 120 years to "key in" the 59 million words of the OED second edition, 60 years to proofread them, and 540 megabytes to store them electronically. As of 30 November 2005, the Oxford English Dictionary contained approximately 301,100 main entries. Supplementing the entry headwords, there are 157,000 bold-type combinations and derivatives; 169,000 italicized-bold phrases and combinations; 616,500 word-forms in total, including 137,000 pronunciations; 249,300 etymologies; 577,000 cross-references; and 2,412,400 usage quotations. The dictionary's latest, complete print edition (second edition, 1989) was printed in 20 volumes, comprising 291,500 entries in 21,730 pages. 
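As a rough check on the publishers' keying estimate quoted above (the word count and timescale are theirs; the division below is only an illustrative calculation, not a figure published by the OED), the implied workload for a single typist would be

$$\frac{59{,}000{,}000\ \text{words}}{120\ \text{years} \times 365\ \text{days}} \approx 1{,}350\ \text{words keyed per day}$$

that is, roughly 1,350 words typed every single day, without a break, for 120 years.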
The longest entry in the OED2 was for the verb set, which required 60,000 words to describe some 430 senses. As entries began to be revised for the OED3 in sequence starting from M, the longest entry became make in 2000, then put in 2007, then run in 2011. Despite its considerable size, the OED is neither the world's largest nor the earliest exhaustive dictionary of a language. Another earlier large dictionary is the Grimm brothers' dictionary of the German language, begun in 1838 and completed in 1961. The first edition of the Vocabolario degli Accademici della Crusca is the first great dictionary devoted to a modern European language (Italian) and was published in 1612; the first edition of Dictionnaire de l'Académie française dates from 1694. The official dictionary of Spanish is the Diccionario de la lengua española (produced, edited, and published by the Real Academia Española), and its first edition was published in 1780. The Kangxi Dictionary of Chinese was published in 1716. The largest dictionary by number of pages is believed to be the Dutch Woordenboek der Nederlandsche Taal. History Origins The dictionary began as a Philological Society project of a small group of intellectuals in London (and unconnected to Oxford University): Richard Chenevix Trench, Herbert Coleridge, and Frederick Furnivall, who were dissatisfied with the existing English dictionaries. The society expressed interest in compiling a new dictionary as early as 1844, but it was not until June 1857 that they began by forming an "Unregistered Words Committee" to search for words that were unlisted or poorly defined in current dictionaries. In November, Trench's report was not a list of unregistered words; instead, it was the study On Some Deficiencies in our English Dictionaries, which identified seven distinct shortcomings in contemporary dictionaries:
Incomplete coverage of obsolete words
Inconsistent coverage of families of related words
Incorrect dates for earliest use of words
History of obsolete senses of words often omitted
Inadequate distinction among synonyms
Insufficient use of good illustrative quotations
Space wasted on inappropriate or redundant content.
The society ultimately realized that the number of unlisted words would be far more than the number of words in the English dictionaries of the 19th century, and shifted their idea from covering only words that were not already in English dictionaries to a larger project. Trench suggested that a new, truly comprehensive dictionary was needed. On 7 January 1858, the society formally adopted the idea of a comprehensive new dictionary. Volunteer readers would be assigned particular books, copying passages illustrating word usage onto quotation slips. Later the same year, the society agreed to the project in principle, with the title A New English Dictionary on Historical Principles (NED). Early editors Richard Chenevix Trench (1807–1886) played the key role in the project's first months, but his appointment as Dean of Westminster meant that he could not give the dictionary project the time that it required. He withdrew and Herbert Coleridge became the first editor. On 12 May 1860, Coleridge's dictionary plan was published and research was started. His house was the first editorial office. He arrayed 100,000 quotation slips in a 54 pigeon-hole grid. In April 1861, the group published the first sample pages; later that month, Coleridge died of tuberculosis, aged 30.
Thereupon Furnivall became editor; he was enthusiastic and knowledgeable, but temperamentally ill-suited for the work. Many volunteer readers eventually lost interest in the project, as Furnivall failed to keep them motivated. Furthermore, many of the slips were misplaced. Furnivall believed that, since many printed texts from earlier centuries were not readily available, it would be impossible for volunteers to efficiently locate the quotations that the dictionary needed. As a result, he founded the Early English Text Society in 1864 and the Chaucer Society in 1868 to publish old manuscripts. Furnivall's preparatory efforts lasted 21 years and provided numerous texts for the use and enjoyment of the general public, as well as crucial sources for lexicographers, but they did not actually involve compiling a dictionary. Furnivall recruited more than 800 volunteers to read these texts and record quotations. While enthusiastic, the volunteers were not well trained and often made inconsistent and arbitrary selections. Ultimately, Furnivall handed over nearly two tons of quotation slips and other materials to his successor. In the 1870s, Furnivall unsuccessfully attempted to recruit both Henry Sweet and Henry Nicol to succeed him. He then approached James Murray, who accepted the post of editor. In the late 1870s, Furnivall and Murray met with several publishers about publishing the dictionary. In 1878, Oxford University Press agreed with Murray to proceed with the massive project; the agreement was formalized the following year. 20 years after its conception, the dictionary project finally had a publisher. It would take another 50 years to complete. Late in his editorship, Murray learned that one especially prolific reader named W. C. Minor was confined to a mental hospital for (in modern terminology) schizophrenia. Minor was a Yale University-trained surgeon and a military officer in the American Civil War who had been confined to Broadmoor Asylum for the Criminally Insane after killing a man in London. Minor invented his own quotation-tracking system, allowing him to submit slips on specific words in response to editors' requests. The story of how Murray and Minor worked together to advance the OED has recently been retold in a book, The Surgeon of Crowthorne (US title: The Professor and the Madman), later the basis for a 2019 film The Professor and the Madman, starring Mel Gibson and Sean Penn. Oxford editors During the 1870s, the Philological Society was concerned with the process of publishing a dictionary with such an immense scope. They had pages printed by publishers, but no publication agreement was reached; both the Cambridge University Press and the Oxford University Press were approached. The OUP finally agreed in 1879 (after two years of negotiating by Sweet, Furnivall, and Murray) to publish the dictionary and to pay Murray, who was both the editor and the Philological Society president. The dictionary was to be published as interval fascicles, with the final form in four volumes, totalling 6,400 pages. They hoped to finish the project in ten years. Murray started the project, working in a corrugated iron outbuilding called the "Scriptorium" which was lined with wooden planks, bookshelves, and 1,029 pigeon-holes for the quotation slips. He tracked and regathered Furnivall's collection of quotation slips, which were found to concentrate on rare, interesting words rather than common usages. For instance, there were ten times as many quotations for abusion as for abuse. 
He appealed, through newspapers distributed to bookshops and libraries, for readers who would report "as many quotations as you can for ordinary words" and for words that were "rare, obsolete, old-fashioned, new, peculiar or used in a peculiar way". Murray had American philologist and liberal arts college professor Francis March manage the collection in North America; 1,000 quotation slips arrived daily to the Scriptorium and, by 1880, there were 2,500,000. The first dictionary fascicle was published on 1 February 1884—twenty-three years after Coleridge's sample pages. The full title was A New English Dictionary on Historical Principles; Founded Mainly on the Materials Collected by The Philological Society; the 352-page volume, words from A to Ant, cost 12s 6d (). The total sales were only 4,000 copies. The OUP saw that it would take too long to complete the work with unrevised editorial arrangements. Accordingly, new assistants were hired and two new demands were made on Murray. The first was that he move from Mill Hill to Oxford, which he did in 1885. Murray had his Scriptorium re-erected on his new property. Murray resisted the second demand: that if he could not meet schedule, he must hire a second, senior editor to work in parallel to him, outside his supervision, on words from elsewhere in the alphabet. Murray did not want to share the work, feeling that he would accelerate his work pace with experience. That turned out not to be so, and Philip Gell of the OUP forced the promotion of Murray's assistant Henry Bradley (hired by Murray in 1884), who worked independently in the British Museum in London beginning in 1888. In 1896, Bradley moved to Oxford University. Gell continued harassing Murray and Bradley with his business concerns—containing costs and speeding production—to the point where the project's collapse seemed likely. Newspapers reported the harassment, particularly the Saturday Review, and public opinion backed the editors. Gell was fired, and the university reversed his cost policies. If the editors felt that the dictionary would have to grow larger, it would; it was an important work, and worth the time and money to properly finish. Neither Murray nor Bradley lived to see it. Murray died in 1915, having been responsible for words starting with A–D, H–K, O–P, and T, nearly half the finished dictionary; Bradley died in 1923, having completed E–G, L–M, S–Sh, St, and W–We. By then, two additional editors had been promoted from assistant work to independent work, continuing without much trouble. William Craigie started in 1901 and was responsible for N, Q–R, Si–Sq, U–V, and Wo–Wy. The OUP had previously thought London too far from Oxford but, after 1925, Craigie worked on the dictionary in Chicago, where he was a professor. The fourth editor was Charles Talbut Onions, who compiled the remaining ranges starting in 1914: Su–Sz, Wh–Wo, and X–Z. In 1919–1920, J. R. R. Tolkien was employed by the OED, researching etymologies of the Waggle to Warlock range; later he parodied the principal editors as "The Four Wise Clerks of Oxenford" in the story Farmer Giles of Ham. By early 1894, a total of 11 fascicles had been published, or about one per year: four for A–B, five for C, and two for E. Of these, eight were 352 pages long, while the last one in each group was shorter to end at the letter break (which eventually became a volume break). 
At this point, it was decided to publish the work in smaller and more frequent instalments; once every three months beginning in 1895 there would be a fascicle of 64 pages, priced at 2s 6d. If enough material was ready, 128 or even 192 pages would be published together. This pace was maintained until World War I forced reductions in staff. Each time enough consecutive pages were available, the same material was also published in the original larger fascicles. Also in 1895, the title Oxford English Dictionary was first used. It then appeared only on the outer covers of the fascicles; the original title was still the official one and was used everywhere else. Completion of first edition and first supplement The 125th and last fascicle covered words from Wise to the end of W and was published on 19 April 1928, and the full dictionary in bound volumes followed immediately. William Shakespeare is the most-quoted writer in the completed dictionary, with Hamlet his most-quoted work. George Eliot (Mary Ann Evans) is the most-quoted female writer. Collectively, the Bible is the most-quoted work (in many translations); the most-quoted single work is Cursor Mundi. Additional material for a given letter range continued to be gathered after the corresponding fascicle was printed, with a view towards inclusion in a supplement or revised edition. A one-volume supplement of such material was published in 1933, with entries weighted towards the start of the alphabet where the fascicles were decades old. The supplement included at least one word (bondmaid) accidentally omitted when its slips were misplaced; many words and senses newly coined (famously appendicitis, coined in 1886 and missing from the 1885 fascicle, which came to prominence when Edward VII's 1902 appendicitis postponed his coronation); and some previously excluded as too obscure (notoriously radium, omitted in 1903, months before its discoverers Pierre and Marie Curie won the Nobel Prize in Physics.). Also in 1933 the original fascicles of the entire dictionary were re-issued, bound into 12 volumes, under the title "The Oxford English Dictionary". This edition of 13 volumes including the supplement was subsequently reprinted in 1961 and 1970. Second supplement In 1933, Oxford had finally put the dictionary to rest; all work ended, and the quotation slips went into storage. However, the English language continued to change and, by the time 20 years had passed, the dictionary was outdated. There were three possible ways to update it. The cheapest would have been to leave the existing work alone and simply compile a new supplement of perhaps one or two volumes; but then anyone looking for a word or sense and unsure of its age would have to look in three different places. The most convenient choice for the user would have been for the entire dictionary to be re-edited and retypeset, with each change included in its proper alphabetical place; but this would have been the most expensive option, with perhaps 15 volumes required to be produced. The OUP chose a middle approach: combining the new material with the existing supplement to form a larger replacement supplement. Robert Burchfield was hired in 1957 to edit the second supplement; Charles Talbut Onions turned 84 that year but was still able to make some contributions as well. The work on the supplement was expected to take about seven years. It actually took 29 years, by which time the new supplement (OEDS) had grown to four volumes, starting with A, H, O, and Sea. 
They were published in 1972, 1976, 1982, and 1986 respectively, bringing the complete dictionary to 16 volumes, or 17 counting the first supplement. Burchfield emphasized the inclusion of modern-day language and, through the supplement, the dictionary was expanded to include a wealth of new words from the burgeoning fields of science and technology, as well as popular culture and colloquial speech. Burchfield said that he broadened the scope to include developments of the language in English-speaking regions beyond the United Kingdom, including North America, Australia, New Zealand, South Africa, India, Pakistan, and the Caribbean. Burchfield also removed, for unknown reasons, many entries that had been added to the 1933 supplement. In 2012, an analysis by lexicographer Sarah Ogilvie revealed that many of these entries were in fact foreign loanwords, despite Burchfield's claim that he included more such words. The proportion was estimated from a sample calculation to amount to 17% of the foreign loan words and words from regional forms of English. Some of these had only a single recorded usage, but many had multiple recorded citations, and it ran against what was thought to be the established OED editorial practice and a perception that he had opened up the dictionary to "World English". Revised American edition This was published in 1968 at $300. There were changes in the arrangement of the volumes – for example volume 7 covered only N–Poy, the remaining "P" entries being transferred to volume 8. Second edition By the time the new supplement was completed, it was clear that the full text of the dictionary would need to be computerized. Achieving this would require retyping it once, but thereafter it would always be accessible for computer searching—as well as for whatever new editions of the dictionary might be desired, starting with an integration of the supplementary volumes and the main text. Preparation for this process began in 1983, and editorial work started the following year under the administrative direction of Timothy J. Benbow, with John A. Simpson and Edmund S. C. Weiner as co-editors. In 2016, Simpson published his memoir chronicling his years at the OED: The Word Detective: Searching for the Meaning of It All at the Oxford English Dictionary – A Memoir (New York: Basic Books). Thus began the New Oxford English Dictionary (NOED) project. In the United States, more than 120 typists of the International Computaprint Corporation (now Reed Tech) started keying in over 350,000,000 characters, their work checked by 55 proof-readers in England. Retyping the text alone was not sufficient; all the information represented by the complex typography of the original dictionary had to be retained, which was done by marking up the content in SGML. A specialized search engine and display software were also needed to access it. Under a 1985 agreement, some of this software work was done at the University of Waterloo, Canada, at the Centre for the New Oxford English Dictionary, led by Frank Tompa and Gaston Gonnet; this search technology went on to become the basis for the Open Text Corporation. Computer hardware, database and other software, development managers, and programmers for the project were donated by the British subsidiary of IBM; the colour syntax-directed editor for the project, LEXX, was written by Mike Cowlishaw of IBM. The University of Waterloo, in Canada, volunteered to design the database. A. 
Walton Litz, an English professor at Princeton University who served on the Oxford University Press advisory council, was quoted in Time as saying "I've never been associated with a project, I've never even heard of a project, that was so incredibly complicated and that met every deadline." By 1989, the NOED project had achieved its primary goals, and the editors, working online, had successfully combined the original text, Burchfield's supplement, and a small amount of newer material, into a single unified dictionary. The word "new" was again dropped from the name, and the second edition of the OED, or the OED2, was published. The first edition retronymically became the OED1. The Oxford English Dictionary 2 was printed in 20 volumes. Up to a very late stage, all the volumes of the first edition were started on letter boundaries. For the second edition, there was no attempt to start them on letter boundaries, and they were made roughly equal in size. The 20 volumes started with A, B.B.C., Cham, Creel, Dvandva, Follow, Hat, Interval, Look, Moul, Ow, Poise, Quemadero, Rob, Ser, Soot, Su, Thru, Unemancipated, and Wave. The content of the OED2 is mostly just a reorganization of the earlier corpus, but the retypesetting provided an opportunity for two long-needed format changes. The headword of each entry was no longer capitalized, allowing the user to readily see those words that actually require a capital letter. Murray had devised his own notation for pronunciation, there being no standard available at the time, whereas the OED2 adopted the modern International Phonetic Alphabet. Unlike the earlier edition, all foreign alphabets except Greek were transliterated. The British quiz show Countdown has awarded the leather-bound complete version to the champions of each series since its inception in 1982. When the print version of the second edition was published in 1989, the response was enthusiastic. Author Anthony Burgess declared it "the greatest publishing event of the century", as quoted by the Los Angeles Times. Time dubbed the book "a scholarly Everest", and Richard Boston, writing for The Guardian, called it "one of the wonders of the world". Additions series The supplements and their integration into the second edition were a great improvement to the OED as a whole, but it was recognized that most of the entries were still fundamentally unaltered from the first edition. Much of the information in the dictionary published in 1989 was already decades out of date, though the supplements had made good progress towards incorporating new vocabulary. Yet many definitions contained disproven scientific theories, outdated historical information, and moral values that were no longer widely accepted. Furthermore, the supplements had failed to recognize many words in the existing volumes as obsolete by the time of the second edition's publication, meaning that thousands of words were marked as current despite no recent evidence of their use. Accordingly, it was recognized that work on a third edition would have to begin to rectify these problems. The first attempt to produce a new edition came with the Oxford English Dictionary Additions Series, a new set of supplements to complement the OED2 with the intention of producing a third edition from them. 
The previous supplements appeared in alphabetical installments, whereas the new series had a full A–Z range of entries within each individual volume, with a complete alphabetical index at the end of all words revised so far, each listed with the volume number which contained the revised entry. However, in the end only three Additions volumes were published this way, two in 1993 and one in 1997, each containing about 3,000 new definitions. The possibilities of the World Wide Web and new computer technology in general meant that the processes of researching the dictionary and of publishing new and revised entries could be vastly improved. New text search databases offered vastly more material for the editors of the dictionary to work with, and with publication on the Web as a possibility, the editors could publish revised entries much more quickly and easily than ever before. A new approach was called for, and for this reason it was decided to embark on a new, complete revision of the dictionary. Oxford English Dictionary Additions Series Volume 1 (): Includes over 20,000 illustrative quotations showing the evolution of each word or meaning. ?th impression (1994-02-10) Oxford English Dictionary Additions Series Volume 2 () ?th impression (1994-02-10) Oxford English Dictionary Additions Series Volume 3 (): Contains 3,000 new words and meanings from around the English-speaking world. Published by Clarendon Press. ?th impression (1997-10-09) Third edition Beginning with the launch of the first OED Online site in 2000, the editors of the dictionary began a major revision project to create a completely revised third edition of the dictionary (OED3), expected to be completed in 2037 at a projected cost of about £34 million. Revisions were started at the letter M, with new material appearing every three months on the OED Online website. The editors chose to start the revision project from the middle of the dictionary in order that the overall quality of entries be made more even, since the later entries in the OED1 generally tended to be better than the earlier ones. However, in March 2008, the editors announced that they would alternate each quarter between moving forward in the alphabet as before and updating "key English words from across the alphabet, along with the other words which make up the alphabetical cluster surrounding them". With the relaunch of the OED Online website in December 2010, alphabetical revision was abandoned altogether. The revision is expected roughly to double the dictionary in size. Apart from general updates to include information on new words and other changes in the language, the third edition brings many other improvements, including changes in formatting and stylistic conventions for easier reading and computerized searching, more etymological information, and a general change of focus away from individual words towards more general coverage of the language as a whole. While the original text drew its quotations mainly from literary sources such as novels, plays, and poetry, with additional material from newspapers and academic journals, the new edition will reference more kinds of material that were unavailable to the editors of previous editions, such as wills, inventories, account books, diaries, journals, and letters. John Simpson was the first chief editor of the OED3. He retired in 2013 and was replaced by Michael Proffitt, who is the eighth chief editor of the dictionary. 
The production of the new edition exploits computer technology, particularly since the inauguration in June 2005 of the "Perfect All-Singing All-Dancing Editorial and Notation Application", or "Pasadena". With this XML-based system, lexicographers can spend less effort on presentation issues such as the numbering of definitions. This system has also simplified the use of the quotations database, and enabled staff in New York to work directly on the dictionary in the same way as their Oxford-based counterparts. Other important computer uses include internet searches for evidence of current usage and email submissions of quotations by readers and the general public. New entries and words Wordhunt was a 2005 appeal to the general public for help in providing citations for 50 selected recent words, and produced antedatings for many. The results were reported in a BBC TV series, Balderdash and Piffle. The OED's readers contribute quotations: the department currently receives about 200,000 a year. The OED currently contains over 600,000 entries. The dictionary is updated quarterly as part of the Third Edition revision, with existing entries revised and new words and senses added. Formats Compact editions In 1971, the 13-volume OED1 (1933) was reprinted as a two-volume Compact Edition, by photographically reducing each page to one-half its linear dimensions; each compact edition page held four OED1 pages in a four-up ("4-up") format. The two volumes began with the letters A and P respectively; the first supplement was at the second volume's end. The Compact Edition included, in a small slip-case drawer, a Bausch & Lomb magnifying glass to help in reading reduced type. Many copies were inexpensively distributed through book clubs. In 1987, the second supplement was published as a third volume to the Compact Edition. In 1991, for the 20-volume OED2 (1989), the compact edition format was re-sized to one-third of the original linear dimensions, a nine-up ("9-up") format requiring greater magnification, but allowing publication of a single-volume dictionary. It was accompanied by a magnifying glass as before and A User's Guide to the "Oxford English Dictionary", by Donna Lee Berg. After these volumes were published, though, book club offers commonly continued to sell the two-volume 1971 Compact Edition. The Compact Oxford English Dictionary (second edition, 1991): Includes definitions of 500,000 words, 290,000 main entries, 137,000 pronunciations, 249,300 etymologies, 577,000 cross-references, over 2,412,000 illustrative quotations, and is again accompanied by a magnifying glass. ?th impression (1991-12-05) Electronic versions Once the dictionary was digitized and online, it was also available to be published on CD-ROM. The text of the first edition was made available in 1987. Afterward, three versions of the second edition were issued. Version 1 (1992) was identical in content to the printed second edition, and the CD itself was not copy-protected. Version 2 (1999) included the Oxford English Dictionary Additions of 1993 and 1997. Version 3.0 was released in 2002 with additional words from the OED3 and software improvements. Version 3.1.1 (2007) added support for hard disk installation, so that the user does not have to insert the CD to use the dictionary. It has been reported that this version will work on operating systems other than Microsoft Windows, using emulation programs. Version 4.0 of the CD has been available since June 2009 and works with Windows 7 and Mac OS X (10.4 or later). 
This version uses the CD drive for installation, running only from the hard drive. On 14 March 2000, the Oxford English Dictionary Online (OED Online) became available to subscribers. The online database containing the OED2 is updated quarterly with revisions that will be included in the OED3 (see above). The online edition is the most up-to-date version of the dictionary available. The OED website is not optimized for mobile devices, but the developers have stated that there are plans to provide an API to facilitate the development of interfaces for querying the OED. The price for an individual to use this edition is £195 or US$295 a year, even after a reduction in 2004; consequently, most subscribers are large organizations such as universities. Some public libraries and companies have also subscribed, including public libraries in the United Kingdom, where access is funded by the Arts Council, and public libraries in New Zealand. Individuals who belong to a library which subscribes to the service are able to use the service from their own home without charge. Oxford English Dictionary Second edition on CD-ROM Version 3.1: Upgrade version for 3.0 (): ?th impression (2005-08-18) Oxford English Dictionary Second edition on CD-ROM Version 4.0: Includes 500,000 words with 2.5 million source quotations, 7,000 new words and meanings. Includes Vocabulary from OED 2nd Edition and all 3 Additions volumes. Supports Windows 2000-7 and Mac OS X 10.4–10.5). Flash-based dictionary. Full version (/) ?th impression (2009-06-04) Upgrade version for 2.0 and above (/): Supports Windows only. ?th impression (2009-07-15) Print+CD-ROM version (): Supports Windows Vista and Mac OS). ?th impression (2009-11-16) Relationship to other Oxford dictionaries The OEDs utility and renown as a historical dictionary have led to numerous offspring projects and other dictionaries bearing the Oxford name, though not all are directly related to the OED itself. The Shorter Oxford English Dictionary, originally started in 1902 and completed in 1933, is an abridgement of the full work that retains the historical focus, but does not include any words which were obsolete before 1700 except those used by Shakespeare, Milton, Spenser, and the King James Bible. A completely new edition was produced from the OED2 and published in 1993, with revisions in 2002 and 2007. The Concise Oxford Dictionary is a different work, which aims to cover current English only, without the historical focus. The original edition, mostly based on the OED1, was edited by Francis George Fowler and Henry Watson Fowler and published in 1911, before the main work was completed. Revised editions appeared throughout the twentieth century to keep it up to date with changes in English usage. The Pocket Oxford Dictionary of Current English was originally conceived by F. G. Fowler and H. W. Fowler to be compressed, compact, and concise. Its primary source is the Oxford English Dictionary, and it is nominally an abridgment of the Concise Oxford Dictionary. It was first published in 1924. In 1998 the New Oxford Dictionary of English (NODE) was published. While also aiming to cover current English, NODE was not based on the OED. Instead, it was an entirely new dictionary produced with the aid of corpus linguistics. 
Once NODE was published, a similarly brand-new edition of the Concise Oxford Dictionary followed, this time based on an abridgement of NODE rather than the OED; NODE (under the new title of the Oxford Dictionary of English, or ODE) continues to be principal source for Oxford's product line of current-English dictionaries, including the New Oxford American Dictionary, with the OED now only serving as the basis for scholarly historical dictionaries. Spelling The OED lists British headword spellings (e.g., labour, centre) with variants following (labor, center, etc.). For the suffix more commonly spelt -ise in British English, OUP policy dictates a preference for the spelling -ize, e.g., realize vs. realise and globalization vs. globalisation. The rationale is etymological, in that the English suffix is mainly derived from the Greek suffix -ιζειν, (-izein), or the Latin -izāre. However, -ze is also sometimes treated as an Americanism insofar as the -ze suffix has crept into words where it did not originally belong, as with analyse (British English), which is spelt analyze in American English. Reception British prime minister Stanley Baldwin described the OED as a "national treasure". Author Anu Garg, founder of Wordsmith.org, has called it a "lex icon". Tim Bray, co-creator of Extensible Markup Language (XML), credits the OED as the developing inspiration of that markup language. However, despite its claims of authority, the dictionary has been criticized since at least the 1960s from various angles. It has become a target precisely of its scope, its claims to authority, its British-centredness and relative neglect of World Englishes, its implied but not acknowledged focus on literary language and, above all, its influence. The OED, as a commercial product, has always had to manoeuvre a thin line between PR, marketing and scholarship and one can argue that its biggest problem is the critical uptake of the work by the interested public. In his review of the 1982 supplement, University of Oxford linguist Roy Harris writes that criticizing the OED is extremely difficult because "one is dealing not just with a dictionary but with a national institution", one that "has become, like the English monarchy, virtually immune from criticism in principle". He further notes that neologisms from respected "literary" authors such as Samuel Beckett and Virginia Woolf are included, whereas usage of words in newspapers or other less "respectable" sources hold less sway, even though they may be commonly used. He writes that the OEDs "[b]lack-and-white lexicography is also black-and-white in that it takes upon itself to pronounce authoritatively on the rights and wrongs of usage", faulting the dictionary's prescriptive rather than descriptive usage. To Harris, this prescriptive classification of certain usages as "erroneous" and the complete omission of various forms and usages cumulatively represent the "social bias[es]" of the (presumably well-educated and wealthy) compilers. However, the identification of "erroneous and catachrestic" usages is being removed from third edition entries, sometimes in favour of usage notes describing the attitudes to language which have previously led to these classifications. Harris also faults the editors' "donnish conservatism" and their adherence to prudish Victorian morals, citing as an example the non-inclusion of "various centuries-old 'four-letter words until 1972. 
However, no English dictionary included such words, for fear of possible prosecution under British obscenity laws, until after the conclusion of the Lady Chatterley's Lover obscenity trial in 1960. The Penguin English Dictionary of 1965 was the first dictionary that included the word fuck. Joseph Wright's English Dialect Dictionary had included shit in 1905. The OEDs claims of authority have also been questioned by linguists such as Pius ten Hacken, who notes that the dictionary actively strives towards definitiveness and authority but can only achieve those goals in a limited sense, given the difficulties of defining the scope of what it includes. Founding editor James Murray was also reluctant to include scientific terms, despite their documentation, unless he felt that they were widely enough used. In 1902, he declined to add the word "radium" to the dictionary. See also Australian Oxford Dictionary Canadian Oxford Dictionary Compact Oxford English Dictionary of Current English Concise Oxford English Dictionary New Oxford American Dictionary Oxford Advanced Learner's Dictionary Shorter Oxford English Dictionary A Dictionary of Canadianisms on Historical Principles The Australian National Dictionary Dictionary of American Regional English References Further reading (McPherson is Senior Editor of OED) External links Archive of documents, including Trench's original "On some deficiencies in our English Dictionaries" paper Murray's original appeal for readers Their page of OED statistics, and another such page. Two   from the OED. Oxford University Press pages: Second Edition, Additions Series Volume 1, Additions Series Volume 2, Additions Series Volume 3, The Compact Oxford English Dictionary New Edition, 20-volume printed set+CD-ROM, CD 3.1 upgrade, CD 4.0 full, CD 4.0 upgrade 1st edition Internet Archive 1888–1933 Issue Full title of each volume: A New English Dictionary on Historical Principles: Founded Mainly on the Materials Collected by the Philological Society {| class="wikitable" ! Vol. !! Year !! Letters !! Links |- | 1 || 1888 || A, B || Vol. 1 |- | 2 || 1893 || C || Vol. 2 |- | 3 || 1897 || D, E || Vol. 3 (version 2) |- | 4 || 1901 || F, G || Vol. 4 (version 2) (version 3) |- | 5 || 1901 || H–K || Vol. 5 |- | 6p1 || 1908 || L || Vol. 6, part 1 |- | 6p2 || 1908 || M, N || Vol. 6, part 2 |- | 7 || 1909 || O, P || Vol.7 |- | 8p1 || 1914 || Q, R || Vol. 8, part 1 |- | 8p2 || 1914 || S–Sh || Vol.8, part 2 |- | 9p1 || 1919 || Si–St || Vol. 9, part 1 |- | 9p2 || 1919 || Su–Th || Vol. 9, part 2 |- | 10p1 || 1926 || Ti–U || Vol. 10, part 1 |- | 10p2 || 1928 || V–Z || Vol. 10, part 2 |- | Sup. || 1933 || A–Z|| Supplement |} 1933 Corrected re-issue Full title of each volume: The Oxford English Dictionary: Being a Corrected Re-issue with an Introduction, Supplement and Bibliography, of A New English Dictionary on Historical Principles: Founded Mainly on the Materials Collected by the Philological Society {| class="wikitable" |- ! Vol. !! Letters !! Links |- | 1|| A–B || |- | 2 || C || |- | 3 || D–E || |- | 4 || F–G || |- | 5 || H–K || |- | 6 || L–M || |- | 7 || N–Poy || |- | 8 || Poy–Ry || |- | 9 || S–Soldo || |- | 10 || Sole–Sz || |- | 11 || T–U || |- | 12 || V–Z || |- | Sup. 
|| A–Z || |} HathiTrust Some volumes (only available from within the USA): University of Virginia copy Princeton University copy University of Michigan copy 1884 non-fiction books British culture English dictionaries English non-fiction literature Language software for MacOS Language software for Windows Oxford dictionaries
18349525
https://en.wikipedia.org/wiki/Vanguard%20Managed%20Solutions
Vanguard Managed Solutions
Vanguard Managed Solutions (VanguardMS) was a limited liability company (LLC) which specialized in monitoring live data networks from network operations centers (NOCs) from 2001 to 2007. It began as Codex Corporation, became a division of Motorola, and was then purchased by Platinum Equity. Platinum merged the network monitoring business into CompuCom in 2007, but retained the IP router business, which was renamed Vanguard Networks. History Codex Corporation was founded in July 1962 by James M. Cryer Jr. and Arthur Kohlenberg, who were director and chief scientist at a Boston-area research division of Melpar. Originally the company was a government contractor headquartered in Newton, Massachusetts. In May 1967, Codex acquired Teldata, a small company led by Jerry Holsinger that was developing data communication products that would operate at data rates of 9600 bits per second, compared to the 1200 bits per second of existing products. Robert G. Gallager was hired as a consultant, and convinced the company to develop quadrature amplitude modulation techniques using two sidebands instead of single-sideband modulation. This technique was later refined by Dave Forney (hired in 1965) into a successful modem (modulator and demodulator) product. Within a few years it had about 20% of the market, which then was dominated by American Telephone & Telegraph. Codex had its initial public offering in 1968. By 1970 military spending was decreasing, so purely commercial products were developed. However, the AE-96 model 9600 bit/s modems had problems in practice when manufacturing was scaled up. Holsinger left in 1970 to found another modem company, Intertel. Founding president Cryer and chief scientist Kohlenberg both died in 1970, and financing was delayed. Art Carr took over in September 1970 and reduced staff to conserve cash. A secondary public offering was held in 1972 to reduce debt and raise capital to expand. Motorola purchased Codex Corporation on February 7, 1977. That same month, Codex's chief competitor Milgo was purchased by the British firm Racal, after a take-over attempt by Applied Digital Data Systems. In 1982, Carr became head of the Motorola Information Systems Group, which included other acquisitions such as Four-Phase Systems in California. The ISG division, based in Mansfield, Massachusetts, sold its "Vanguard" series of Motorola routers to enterprise and retail markets. The Vanguard series could multiplex voice, legacy protocols, and IP routing over a single Frame Relay circuit using the Annex G protocol derived from X.25. The product's popularity led to the creation of a managed services unit, through which existing router customers could have their networks monitored in real time for Frame Relay outages and hardware failures. The service attracted customers who preferred a hands-off approach to maintaining their own networks and did not want to contact the telecommunications company themselves about Frame Relay circuit outages. In 1994, Motorola re-organized ISG and combined Codex with Universal Data Systems products. The new group was called the Internet and Networking Group, with John Lockitt remaining president and chief executive of Motorola Codex. After the dot-com bubble collapse in 2000, Motorola was forced to close or sell off some of its business units. Holding company Platinum Equity purchased the ING division from Motorola. The acquisition was announced in July 2001 and closed on September 4, 2001. 
The network management unit marketed network monitoring products and was purchased in 2007 by Court Square Capital Partners. Court Square merged the products into those of the CompuCom Systems (based in Dallas, Texas), which was also acquired at the same time. The Multi-service router unit remained with Platinum Equity. The "VanguardMS" brand name was changed to Vanguard Networks. In February 2013 the brand was purchased by Raymar Information Technology. At that time, Raymar was located in Sacramento, California, and Vanguard continued to operate in Foxboro, Massachusetts. References Motorola
8166303
https://en.wikipedia.org/wiki/2006%20BCS%20computer%20rankings
2006 BCS computer rankings
In American college football, the 2006 BCS computer rankings are a part of the Bowl Championship Series (BCS) formula that determines who plays in the BCS National Championship Game as well as several other bowl games. Each computer system was developed using a different method that attempts to rank the teams' performance. For 2006, the highest and lowest rankings for a team are dropped and the remaining four rankings are summed. A team ranked #1 by a computer system is given 25 points, #2 is given 24 points and so forth. The summed values are then divided by 100 (the maximum possible total, reached when all four remaining rankings are first place). The values are then ranked by percentage. This percentage ranking is then averaged with the Coaches Poll and Harris Poll average rankings, each receiving equal weight, and the results become the BCS Rankings. BCS computer rankings average For 2006, the rankings were released beginning with the eighth week of the season on October 14. Data taken from the official BCS website. There are missing values in the table because the published BCS Rankings list only the top 25 teams, along with data on how those teams achieved their top-25 ranking. The computer rankings may include teams that do not make the top 25 of the BCS Rankings once averaged with the Harris and Coaches polls. Anderson & Hester Jeff Anderson and Chris Hester are the owners of this computer system, which has been a part of the BCS since its inception. The Anderson & Hester Rankings claim to be distinct in four ways: These rankings do not reward teams for running up scores. Teams are rewarded for beating quality opponents, which is the object of the game. Margin of victory, which is not the object of the game, is not considered. Unlike the AP and Coaches Polls, these rankings do not prejudge teams. These rankings first appear after the season's fifth week, and each team's ranking reflects its actual accomplishments on the field, not its perceived potential. These rankings compute the most accurate strength of schedule ratings. Each team's opponents and opponents' opponents are judged not only by their won-lost records but also, uniquely, by their conferences' strength (see #4). These rankings provide the most accurate conference ratings. Each conference is rated according to its non-conference won-lost record and the difficulty of its non-conference schedule. Margin of victory was once allowed by the BCS for the computers, but was removed following the 2004 season; therefore, none of the six computer systems includes margin of victory. However, this computer system has never included it in its formula. In addition, only human polls (specifically the AP Poll and Coaches Poll in this reference) "prejudge" teams by releasing pre-season polls with the expected rankings of teams before they have played any games. The last two claims are subjective opinions by the authors of this computer system. Billingsley Richard Billingsley is the owner of this computer system. He describes himself not as a mathematician or computer geek, but simply as a devout college football fan since the age of 7. The main components of the formula are won-loss records and opponent strength (based on the opponent's record, rating, and rank), with a strong emphasis on the most recent performance. Very minor consideration is also given to the site of the game and to defensive scoring performance. Billingsley did use margin of victory, but removed it after the 2001 season. 
It had accounted for 5% of the total ranking for his system and was part of the system for 32 years. Also, this computer system releases rankings each week, using a complex formula to incorporate the previous season's rank (but not ranking score) into the early parts of the current season. For the 2006 season, this computer ranking uniquely favored Penn State and TCU. Colley Matrix Wes Colley, creator of the Colley Matrix, has a Ph.D. in Astrophysical Sciences from Princeton University. He attended Virginia and is therefore a Virginia fan. His brother, Will Colley, played for Georgia. Colley claims five advantages for his system: First and foremost, the rankings are based only on results from the field, with absolutely no influence from opinion, past performance, tradition or any other bias factor. This is why there is no pre-season poll here. All teams are assumed equal at the beginning of the year. Second, strength of schedule has a strong influence on the final ranking. Padding the schedule wins you very little. Furthermore, only D-IA opponents count in the ranking, so those wins against James Madison or William & Mary don't mean anything. For instance, Wisconsin with 4 losses finished the 2000 season well ahead of TCU with only 2 losses. That's because Wisconsin's Big 10 schedule was much, much more difficult than TCU's WAC schedule. Third, as with the NFL, NHL, NBA, and Major League Baseball, score margin does not matter at all in determining ranking, so winning big, despite influencing pollsters, does not influence this scheme. The object of football is winning the game, not winning by a large margin. Fourth, there is no ad hoc weighting of opponents' winning percentage and opponents' opponents' winning percentage, etc., ad nauseam (no random choices of 1/3 of this + 2/3 of that, for example). In this method, very simple statistical principles, with absolutely no fine tuning, are used to construct a system of 117 equations with 117 variables, representing each team according only to its wins and losses (see Ranking Method). The computer simply solves those equations to arrive at a rating (and ranking) for each team. Fifth, comparison between this scheme and the final press polls (1998, 1999, 2000, 2001, 2002) proves that the scheme produces sensible results. While none of the computer systems is biased towards the "name recognition" of a school, Colley's system is unusual in including no information from outside the current season: there is no pre-season poll and no carry-over from the previous season. Colley's focus on strength of schedule without including opponents' strength of schedule is unique. Massey Kenneth Massey is the owner of this complex computer system. He was a Ph.D. candidate in Mathematics at Virginia Tech. Only the score, venue, and date of each game are used to calculate the Massey ratings. Massey also calculates offensive and defensive ratings, which combine to produce a power rating. The overall team rating is a merit-based quantity, and is the result of applying a Bayesian win-loss correction to the power rating. Sagarin Jeff Sagarin is the owner of this computer system, published in USA Today. He holds an MBA from Indiana. His system uses the Elo Chess method, in which winning and losing are the sole factors. He also publishes a "Predictor" system that uses margin of victory; however, the BCS only uses the Elo Chess system. Wolfe Peter Wolfe uses a Bradley-Terry model for his computer system. It uses wins and losses but also uses game location as a factor. 
In addition, he ranks all teams that can be connected by schedule played (over 700 involving Division I-A, I-AA, II, III and NAIA). Legend See also 2007 BCS computer rankings References Bowl Championship Series
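To make the computer-average calculation described in this article concrete, here is a minimal sketch in Kotlin of how the computer component of a team's 2006 BCS score could be derived from its six computer rankings: the best and worst rankings are dropped, the remaining four are converted to points (25 for #1 down to 1 for #25) and summed, and the total is divided by 100. The rankings in the example are hypothetical and the code is only an illustration, not the BCS's actual implementation.

```kotlin
// 25 points for #1, 24 for #2, ... 1 for #25; unranked teams score 0.
fun computerPoints(rank: Int): Int = if (rank in 1..25) 26 - rank else 0

fun bcsComputerComponent(ranks: List<Int>): Double {
    require(ranks.size == 6) { "six computer rankings expected" }
    val points = ranks.map { computerPoints(it) }.sorted()
    // Drop the lowest and highest point values (i.e. the worst and best
    // rankings), sum the remaining four, and divide by 100 -- the maximum
    // total of four first-place rankings.
    val middleFourSum = points.subList(1, points.size - 1).sum()
    return middleFourSum / 100.0
}

fun main() {
    // A hypothetical team ranked 1, 1, 2, 2, 3 and 5 by the six computers:
    println(bcsComputerComponent(listOf(1, 1, 2, 2, 3, 5)))  // prints 0.96
}
```

The resulting 0.96 would then be averaged, with equal weight, with the team's Harris Poll and Coaches Poll percentages to produce its overall BCS ranking value.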
339251
https://en.wikipedia.org/wiki/MOS
MOS
MOS or Mos may refer to: Technology MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor Mathematical Optimization Society Model output statistics, a weather-forecasting technique MOS (filmmaking), term for a scene that is "motor only sync" or "motor only shot", or jokingly, “mit out sound” MOS Technology, a defunct semiconductor company Mobile operating system, operating systems for mobile devices Computing Acorn MOS, an operating system used in the Acorn BBC computer range Media Object Server, a protocol used in newsroom computer systems Mean opinion score, a measure of the perceived quality of a signal MOS (operating system), a Soviet Unix clone My Oracle Support, a support site for the users of Oracle Corporation products, known until October 2010 as "MetaLink" Government and military Master of the Sword, the title for the head of physical education at the U.S. Military Academy at West Point Member of Service, any emergency responder (police officer, firefighter, emergency medical technician) that needs emergency help, usually over two-way radio Military occupation specialty code, used by the U.S. military to identify a specific job Ministry of Supply, former British government ministry that co-ordinated military supplies Places Ma On Shan (town), a town in the New Territories of Hong Kong Ma On Shan station, MTR station code Mos, Spain, a municipality in Galicia, Spain in the province of Pontevedra Museum of Science (Boston) (MoS), a Boston, Massachusetts landmark, located in Science Park, a plot of land spanning the Charles River, USA Companies and organizations MOS (brand), American brand of organizational tools Mos, a startup tech company founded by Amira Yahyaoui MOS Burger, a fast-food restaurant chain that originated in Japan The Mosaic Company (NYSE: MOS), American fertilizer and mining company Other uses Mos, an uncommon singular form of mores, widely observed social norms (from Latin and ) Mos, a traditional dish of the Nivkh people Mos language, an aboriginal Mon–Khmer language of Malaya and Thailand Mannan oligosaccharide-based nutritional supplements Manual of style, also known a style guide or stylebook; a guide for writing and sometimes also for layout and typography Margin on services, a financial reporting method for Australian life insurance companies Moment of symmetry, in music, same as well formed generated collection MOS (gene), gene for a human protein expressed in testis during sperm formation MOS, German vehicle registration plate district code for Neckar-Odenwald-Kreis "Man on the street" () segments in broadcasting Mossi language ISO 639 alpha-2 language code Morvan Syndrome (MoS) though usually referred to as MVS MOS, minimum operating segment of a transportation system See also Mo's Restaurants, American restaurant chain in Oregon Mos Def, American hip-hop artist and actor Mos Eisley, a fictional city which first appeared in Star Wars Episode 4: A New Hope Man of Steel (disambiguation) MDOS (disambiguation) PC-MOS/386 Molybdenum disulfide (MoS2)
12101487
https://en.wikipedia.org/wiki/JFire
JFire
JFire was an Enterprise Resource Planning and Customer Relationship Management system. The system has been written entirely in Java and is based on the technologies Java EE 5 (formerly J2EE), JDO 2, Eclipse RCP 3. Hence, both client and server can easily be extended and it requires only a relatively low effort to customize it for specific sectors or companies. Since November 2009, there is a stable JFire release containing many modules, e.g. for user and access rights control, accounting, store management, direct online trade with other companies or end-customers (e.g. via a web shop), an editor for interactive 2-dimensional graphics and other useful plugins. A reporting module which is based on BIRT allows for the editing and rendering of reports, statistics and similar documents (e.g. invoices). Even though the main goal of the project is to serve as a robust and flexible framework and thus to ease the implementation of sector-specific applications, it contains modules for the out-of-the-box usage in small and medium-sized enterprises. Because JFire uses JDO as persistence layer, it is independent of the underlying database management system (DBMS) and spares developers the error-prone work of writing SQL. Furthermore, the use of JDO makes it possible to employ other DBMS types (e.g. object databases). According to the project's website, JFire is shipped with the JDO2 reference implementation DataNucleus, which supports many relational databases and db4o. Even though Java EE, JDO and Eclipse RCP provide many advantages, they have the disadvantage that they require a longer training period than older technologies (e.g. direct SQL). JFire was published in January 2006 under the conditions of the GNU Lesser General Public License (LGPL). Therefore, it is Free Software and everyone can redistribute it, modify it and use it free of charge. The project has been shut down. The developer, Nightlabs, went into liquidation on 1 January 2015. History The history of JFire starts in 2003, when the company NightLabs decided to develop a new ticket sales and distribution software. Because they wanted to base this new system on an ERP within one integrated application suite (rather than multiple separate programs), they started to search for a suitable framework. After some research and evaluations, they decided to launch such an ERP framework project based on new technologies like JDO and Eclipse RCP, which make it easy for other projects to build upon. When first released in January 2006, it quickly gained attention in the Eclipse community: The German Eclipse Magazine published an article in May 2006, the project was invited to the EclipseCon 2006, the Eclipse Magazine India published an article in December 2006 and in April 2007, the JFire project was invited to the Eclipse Forum Europe, where it impressed the BIRT team with its graphical parameter workflow builder. In late 2009, Jfire had been absorbed by the company VIENNA Advantage. Architecture JFire consists of two parts - the server and different types of clients. So far, the most comprehensive client is a rich client. Additionally, there exists a JSP web client, which currently supports only a part of the functionality (e.g. a web shop). Some applications built on JFire employ other types of clients, as well (e.g. mobile devices used in Yak, an access control system). Because JFire enables different companies/organizations to cooperate directly, a server acts as client to other servers, as well. 
Each organization has its own JDO datastore, which guarantees a very high degree of protection of privacy. Between organizations, only data essentially required by the business partner are exchanged. Following the framework idea, JFire is built very modular: In the client, it consists of OSGi plug-ins based on the Eclipse Rich Client Platform (RCP) and in the server, JFire is composed of Java EE EAR modules. Due to its modularity, JFire is used as base for non-ERP applications, too, which employ a smaller number of modules (e.g. only the user, access rights and organization management). Server The Base-Module is responsible for Authentication, User- and Rightsmanagement and builds the core for transactions between different organisations and servers. On top of it comes the Trade-Module which includes Accounting, Store-Management, Reporting and forms the base for a general distribution sales network. The Trade-Module offers many interfaces for easy integration of external systems like third-party payment- or delivery-systems. Additionally it provides extension possibilities to build your own Business Application on top of JFire. References Accounting software Free business software Free accounting software Free ERP software Free customer relationship management software Free software programmed in Java (programming language) Free reporting software Enterprise resource planning software for Linux
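As a rough illustration of the JDO persistence layer described above, the following sketch (written in Kotlin for brevity) shows what storing and querying a persistent object through the javax.jdo API looks like. The Invoice class, the connection settings and the DataNucleus factory class name are assumptions made for the example, not JFire's actual data model or configuration, and persistence-capable classes normally also have to be run through the JDO bytecode enhancer before use.

```kotlin
import java.util.Properties
import javax.jdo.JDOHelper
import javax.jdo.annotations.PersistenceCapable
import javax.jdo.annotations.PrimaryKey

// Hypothetical persistent class; non-transient fields are persisted by default.
@PersistenceCapable
class Invoice {
    @PrimaryKey
    var invoiceId: String? = null
    var amount: Double = 0.0
}

fun main() {
    // Illustrative DataNucleus/JDO configuration; the datastore behind the
    // PersistenceManager could be any RDBMS or an object database such as db4o.
    val props = Properties().apply {
        put("javax.jdo.PersistenceManagerFactoryClass",
            "org.datanucleus.api.jdo.JDOPersistenceManagerFactory")
        put("javax.jdo.option.ConnectionURL", "jdbc:h2:mem:demo")
    }
    val pm = JDOHelper.getPersistenceManagerFactory(props).persistenceManager
    val tx = pm.currentTransaction()
    try {
        tx.begin()
        pm.makePersistent(Invoice().apply { invoiceId = "INV-1"; amount = 120.0 })
        // A JDOQL filter instead of hand-written SQL; the same query runs
        // unchanged against whichever datastore the JDO implementation targets.
        val large = pm.newQuery(Invoice::class.java, "amount > 100").execute()
        println(large)
        tx.commit()
    } finally {
        if (tx.isActive) tx.rollback()
        pm.close()
    }
}
```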
980927
https://en.wikipedia.org/wiki/Idris%20%28operating%20system%29
Idris (operating system)
Idris is a discontinued multi-tasking, Unix-like, multi-user, real-time operating system released by Whitesmiths of Westford, Massachusetts. The product was commercially available from 1979 through 1988. Background Idris was originally written for the PDP-11 by P. J. Plauger, who started working on Idris in August 1978. It was binary compatible with Unix V6 on the PDP-11, but it could also run on systems without memory management (such as the LSI-11 or PDP-11/23). The kernel required 31 KB of RAM, and the C compiler (provided along with the standard V6 toolset) was roughly the same size. Ports Although Idris was initially available for the PDP-11, it was later ported to run on a number of platforms, such as the VAX, Motorola 68000, System/370 and Intel 8086. There was also a version that used bank switching for memory management and ran on the Intel 8080. In 1986, David M. Stanhope at Computer Tools International ported Idris to the Atari ST and developed its ROM boot cartridge. This work also included a port of X to Idris. Computer Tools and Whitesmiths offered it to Atari as a replacement for Atari TOS, but eventually marketed it directly to ST enthusiasts. A specific version of Idris (CoIdris) was packaged as a .COM file under DOS and used DOS for low-level I/O services. Idris was ported to the Apple Macintosh (as MacIdris) by John O'Brien (of Whitesmiths Australia) and remained available until the early 1990s. MacIdris ran as an application under the Finder or MultiFinder. After Whitesmiths merged with Intermetrics, Idris and its development toolchain were ported by Real Time Systems Ltd to the INMOS T800 transputer architecture for the Parsytec SN1000 multiprocessor. References Discontinued operating systems PDP-11 Unix variants 68k architecture
600739
https://en.wikipedia.org/wiki/Bonnie%20Nardi
Bonnie Nardi
Bonnie Nardi is an emeritus professor of the Department of Informatics at the University of California, Irvine, where she led the TechDec research lab in the areas of Human-Computer Interaction and computer-supported cooperative work. She is well known for her work on activity theory, interaction design, games, social media, and society and technology. She was elected to the ACM CHI academy in 2013. She retired in 2018. Work Prior to teaching at the University of California, Nardi worked at AT&T Labs, Agilent, Hewlett-Packard and Apple labs. She is among anthropologists who have been employed by high-tech companies to examine consumers' behavior in their homes and offices. Nardi collaborated with Victor Kaptelinin to write Acting with Technology: Activity Theory and Interaction Design (2009) and Activity Theory in HCI: Fundamentals and Reflections (2012). These works discuss activity theory and offer a basis for understanding our relationship with technology. Interests Her interests are in the areas of human-computer interaction, computer supported cooperative work, more specifically in activity theory, computer-mediated communication, and interaction design. Nardi has researched CSCW applications and blogging, and has more recently pioneered the study of World of Warcraft in HCI. She has studied the use of technology in offices, hospitals, schools, libraries and laboratories. She is widely known among librarians – especially research, reference and digital librarians – for Chapter 7 of Information Ecologies, which focused on librarians as keystone species in information ecologies. Nardi's book inspired the title of a UK conference Information Ecologies: the impact of new information 'species''' hosted, inter alia, by the UK Office of Library Networking, now known by its acronym UKOLN, and led to a keynote address by Nardi at a 1998 Library of Congress Institute on Reference Service in a Digital Age. She had written Information Ecologies while a researcher at ATT Labs Research. Nardi's self-described theoretical orientation is "activity theory" – aka Cultural-Historical Activity Theory (CHAT) -, a philosophical framework developed by the Russian psychologists Vygotsky, Luria, Leont'ev, and their students. "My interests are user interface design, collaborative work, computer-mediated communication, and theoretical approaches to technology design and evaluation." She is currently conducting an ethnographic study of World of Warcraft. According to Oklahoma Senator Tom Coburn's Wastebook 2010, Nardi received a $100,000 grant to "analyze and understand the ways in which players of World of Warcraft, a popular multiplayer game, engage in creative collaboration". In Coburn's list of 100 supposedly-wasteful federal spending projects, Nardi's project came in at number 6, with Coburn's report saying, "Most people have to work for a living, others get to play video games."Tom Coburn, Wastebook 2010 A Guide to Some of the Most Wasteful Government Sending of 2010 , December 2010. World of Warcraft is a popular game made by the large Irvine-based Blizzard Entertainment, local to UC Irvine. Background Nardi received her undergraduate degree from University of California at Berkeley and her PhD from the School of Social Sciences at University of California, Irvine. Nardi also spent a year in Western Samoa doing postdoctoral research. Selected bibliography Nardi, B., D. Schiano, and M. Gumbrecht (2004). Blogging as social activity, or, Would you let 900 million people read your diary? 
Proceedings of the Conference on Computer-Supported Cooperative Work. New York: ACM Press, pp. 222–228. Nardi, B., S. Whittaker, and E. Bradner (2000). Interaction and Outeraction: Instant messaging in action. Proceedings Conference on Computer-supported Cooperative Work. New York: ACM Press, pp. 79–88. Gantt, M. and B. Nardi (1992). Gardeners and gurus: Patterns of collaboration among CAD users. Proceedings of the ACM Conference on Human Factors in Computer Systems, pp. 107–117. Nardi, B., and J. Miller (1990). An ethnographic study of distributed problem solving in spreadsheet development. Proceedings of the Conference on Computer-Supported Cooperative Work'', pp. 197–208. See also Digital anthropology Information ecology Digital library Digital librarian Lucy Suchman Terry Winograd Mark Weiser Paul Dourish Notes and references External links home page UKOLN Department of Informatics ATT Labs Research Interface designers American librarians American women librarians Year of birth missing (living people) Living people University of California, Irvine faculty Human–computer interaction researchers Game researchers 21st-century American women
18686005
https://en.wikipedia.org/wiki/Ktrace
Ktrace
ktrace is a utility included with certain versions of BSD Unix and Mac OS X that traces kernel interaction with a program and dumps it to disk for the purposes of debugging and analysis. Traced kernel operations include system calls, namei translations, signal processing, and I/O. Trace files generated by ktrace (named ktrace.out by default) can be viewed in human-readable form by using the kdump utility. Since Mac OS X Leopard, ktrace has been replaced by DTrace. See also DTrace, Sun Microsystems's dynamic tracing framework, now running on OpenSolaris, FreeBSD, macOS, and Windows kdump (Linux), Linux kernel's crash dump mechanism, which internally uses kexec SystemTap trace on Linux, part of the Linux Trace Toolkit References Unix programming tools
63983563
https://en.wikipedia.org/wiki/Transport%20Fever%202
Transport Fever 2
Transport Fever 2 is a business simulation game developed by Urban Games and published by Good Shepherd Entertainment. It is the third video game of the Transport Fever franchise, and became available for Microsoft Windows and Linux on 11 December 2019 and for macOS on 23 February 2021. Gameplay Like the series' previous games, Transport Fever 2 focuses on the evolution of transport over the past seventeen decades. However, the campaign mode presents a different version of transport history from that of Transport Fever, and takes place across three different continents. The game also features a sandbox mode, a map editor and mod tools. Development and release Transport Fever 2 was announced in April 2019. It was developed by Urban Games, the developer of the Transport Fever franchise, and published by Good Shepherd Entertainment. The game was initially released worldwide for Microsoft Windows and Linux on 11 December 2019 via Steam, with a macOS version following in February 2021. Reception Transport Fever 2 received "fairly positive" reviews, according to review aggregator Metacritic. Matt S. of Digitally Downloaded rated the game 4.5 stars out of 5, writing "It's elegantly presented and understands that some efficiencies are required for the sake of playability." Rick Lane of The Guardian gave it 3 stars out of 5. He compared the game with Maxis's The Sims franchise and Colossal Order's Cities: Skylines, commenting that the growth of the in-game cities would bring players a lot of fun. However, while not short of detail, the game in his view lacked depth in certain areas. The game scored an overall 7 from TheSixthAxis, which praised its "great attention to detail for vehicles and the environment", but on the minus side felt it was more a refined expansion than a proper sequel. As of February 2021, Urban Games indicated that the game had sold about 500,000 copies, more than the original Transport Fever. References External links Wiki 2019 video games Business simulation games Linux games MacOS games Single-player video games Transport Fever Transport simulation games Video games developed in Switzerland Video games using procedural generation Windows games
395260
https://en.wikipedia.org/wiki/Steinberg
Steinberg
Steinberg Media Technologies GmbH (trading as Steinberg) is a German musical software and hardware company based in Hamburg with satellite offices in Siegburg and London. It develops music writing, recording, arranging, and editing software, most notably Cubase, Nuendo, and Dorico. It also designs audio and MIDI hardware interfaces, controllers, and iOS/Android music apps including Cubasis. Steinberg created several industry standard music technologies including the Virtual Studio Technology (VST) format for plug-ins and the ASIO (Audio Stream Input/Output) protocol. Steinberg has been a wholly owned subsidiary of Yamaha since 2005. History The company was founded in 1984 by Karl Steinberg and Manfred Rürup in Hamburg. As early proponents and fans of the MIDI protocol, the two developed Pro 16, a MIDI sequencing application for the Commodore 64 and soon afterwards, Pro 24 for the Atari ST platform. The ST had built-in MIDI ports which helped to quickly increase interest in the new technology across the music world. In 1989 Steinberg released Cubase for Atari, and versions for the Mac and Windows platforms would follow soon afterwards. It became a very popular MIDI sequencer, used in studios around the globe. Steinberg Media Technologies AG had a revenue of 25 million DM in 1999. It had 180 employees in 2000. A planned entry on the Neuer Markt (New Market, NEMAX50) of the Deutsche Börse failed. The company had a revenue of 20 million in 2001 and 130 employees in 2002. In 2003 Steinberg was acquired by Pinnacle Systems and shortly after that, by Yamaha in 2004. With its new mother company Yamaha, Steinberg expanded design and production of its own hardware, and since 2008 it has created a range of audio and MIDI interface hardware including the UR, MR816, CC and CI series. In 2012, Steinberg launched its first iOS sequencer, Cubasis, which has seen regular updates since then. Steinberg has won a number of industry awards including several MIPA awards, and accolades for Cubasis and its CMC controllers amongst others. Dorico team acquisition In 2012, Steinberg acquired the former development team behind Sibelius, following the closure of Avid's London office in July, to begin development on a new professional scoring software named Dorico. It was released on 19 October 2016. Product History Cubase was released in 1989, initially as a MIDI sequencer. Digital audio recording followed in 1992 with Cubase Audio, followed by VST support in 1996 which made it possible for third-party software programmers to create and sell virtual instruments for Cubase. Steinberg bundled its own VST instruments and effects with Cubase, as well as continuing to develop standalone instruments as well. Atari support eventually ended and Cubase became a Mac and Windows DAW (digital audio workstation), with feature parity across both platforms. The WaveLab audio editing and mastering suite followed in 1995 for Windows, and the VST and ASIO protocols – open technologies that could be used by any manufacturer – were first released in 1997. WaveLab would come to the Mac in 2010. In 2000 the company released Nuendo, a new DAW clearly targeted at the broadcast and media industries. 2001 saw the release of HALion, a dedicated software sampler. A complete rewrite of Cubase in 2002 was necessary due to its legacy code which was no longer maintainable, leading to a name change to Cubase SX, ditching older technology and using the audio engine from Nuendo. Since this time, Cubase and Nuendo have shared many core technologies. 
Cubase currently comes in three versions – Elements, Artist and Pro. Steinberg was one of the first DAW manufacturers who started using automatic delay compensation for synchronization of different channels of the mixer which may have different latency. With the growing popularity of mobile devices, Steinberg develops apps for iOS including Cubasis, a fully featured DAW for iPad with plug-ins, full audio and MIDI recording and editing and many other professional features. It also creates standalone apps including the Nanologue synth and LoopMash. In 2016 Steinberg released Dorico, a professional music notation and scoring suite. Steinberg VST As part of the development of its flagship, the sequencer Cubase, Steinberg defined the VST interface (Virtual Studio Technology) in 1996, by means of which external programs can be integrated as virtual instruments playable via MIDI. VST simulates a real-time studio environment with EQs, effects, mixing and automation and has become a quasi-standard supported by many other audio editing programs. The latest version is VST 3. The VST 3 is a general rework of the long-serving VST plug-in interface. It is not compatible with the older VST versions, but it includes some new features and possibilities. Initially developed for Macintosh only, Steinberg Cubase VST for the PC followed a year later and established VST and the Audio Stream Input/Output Protocol (ASIO) as open standards that enabled third parties to develop plug-ins and audio hardware. ASIO ensures that the delay caused by the audio hardware during sound output is kept to a minimum to enable hardware manufacturers to provide specialized drivers. ASIO has established itself as the standard for audio drivers. Products Current products Music software Cubase Dorico Nuendo WaveLab Sequel Cubasis (for iOS and Android) Remix SpectraLayers Pro 6 VST instruments HALion (SE/Sonic) - virtual sampling and sound design system HALion Symphonic Orchestra Groove Agent - electronic and acoustic drums The Grand - virtual Piano Padshop - granular synthesizer Retrologue - analog synthesizer Dark Planet - dark sounds for cinematic and electronic music Hypnotic Dance - synth-based dance sounds Triebwerk - Sounds for Elektro, Techno and House Iconica - Orchester Library, recorded at Funkhaus Berlin Hardware Steinberg AXR4 – 28x24 Thunderbolt 2 Audio Interface with 32-Bit Integer Recording and RND SILK Steinberg UR824 – 24x24 USB 2.0 audio interface with 8x D-PREs, 24-bit/192 kHz, on board DSP, zero latency monitoring, advanced integration. 
Their top-of-the-line USB audio interface Steinberg CC121 – Advanced Integration Controller Steinberg CI2 – Advanced Integration Controller Steinberg MR816 CSX – Advanced Integration DSP Studio Steinberg MR816 X – Advanced Integration DSP Studio Steinberg UR44 – 6x4 USB 2.0 audio interface with 4x D-PREs, 24-bit/192 kHz support & MIDI I/O Steinberg UR22mkII – 2x2 USB 2.0 audio interface with 2x D-PREs, 24-bit/192 kHz support & MIDI I/O Steinberg UR12 – 2x2 USB 2.0 audio interface with 1x D-PREs, 24-bit/192 kHz support Steinberg Key (License Control Device for Steinberg Software - Dongle) eLicenser (License Control Management for Steinberg Software - Dongle) Past products Music software Pro 16 (for Commodore 64) Trackstar (for Commodore 64) Pro 24 (for Atari ST, Commodore Amiga) The Ear (for Atari ST) Twelve (for Atari ST) Tango (for Atari) MusiCal (for Atari ST) Cubeat (for Atari ST) Cubase Lite (for Atari ST/Mac/PC) SoundWorks series (for Atari ST) - Sample editors for the Akai S900, Ensoniq Mirage, E-mu Emax and Sequential Prophet 2000 SynthWorks series (for Atari ST) - Patch editor/librarians for the Yamaha DX7, DX7II, TX7 and TX81z, Roland D50 and MT32 and Ensoniq ESQ-1 Cubase SX Cubase VST Avalon - sample editor for AtariV-Stack ReCycle - Windows/Mac sample editor VST instruments Plex D'cota Hypersonic X-phraze Model-E Virtual Guitarist Virtual Bassist Hardware MIDEX-8 - USB MIDI interface MIDEX-3 - USB MIDI interface MIDEX+ - Atari MIDI interface Steinberg Amiga MIDI interface Steinberg Media Interface 4 (MI4) - USB MIDI interface Avalon 16 DA Converter - AD Converter for Atari SMP-24 - SMPTE/MIDI processor Timelock - SMPTE processor Topaz - Computer controlled recorder Protocols Steinberg have introduced several industry-standard software protocols. These include: ASIO (a low-latency communication protocol between software and sound cards) VST (a protocol allowing third-party audio plugins and virtual instruments) LTB (providing accurate timing for its now-discontinued MIDI interfaces) VSL (an audio/MIDI network protocol which allows the connection and synchronisation of multiple computers running Steinberg software) Steinberg's notable packages include the sequencers Cubase and Nuendo, as well as WaveLab'' (a digital audio editor) and numerous VST plugins. References Further reading External links German brands Software companies of Germany Music equipment manufacturers Manufacturing companies based in Hamburg Manufacturing companies established in 1984 Software companies established in 1984 1984 establishments in Germany Yamaha Corporation 2005 mergers and acquisitions
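To give a rough sense of the plug-in model that VST standardised, the fragment below sketches, in Kotlin, the idea of a host repeatedly handing small audio buffers to a plug-in's processing callback; keeping those buffers small is what ASIO-style low-latency drivers make practical. Real VST plug-ins are written in C++ against Steinberg's VST SDK, so the interface and class names here are purely illustrative and not part of any Steinberg API.

```kotlin
// Illustrative only -- not the VST SDK.
interface AudioEffect {
    fun process(input: FloatArray, output: FloatArray)
}

// A trivial "plug-in" that applies a fixed gain to each sample in the block.
class GainEffect(private val gain: Float) : AudioEffect {
    override fun process(input: FloatArray, output: FloatArray) {
        for (i in input.indices) output[i] = input[i] * gain
    }
}

fun main() {
    val effect: AudioEffect = GainEffect(gain = 0.5f)
    val block = FloatArray(4) { it.toFloat() }  // a tiny stand-in for one audio block
    val out = FloatArray(block.size)
    effect.process(block, out)                  // the "host" calls this once per block
    println(out.joinToString())                 // 0.0, 0.5, 1.0, 1.5
}
```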
990036
https://en.wikipedia.org/wiki/Software%20requirements%20specification
Software requirements specification
A software requirements specification (SRS) is a description of a software system to be developed. It is modeled after business requirements specification (CONOPS). The software requirements specification lays out functional and non-functional requirements, and it may include a set of use cases that describe the user interactions the software must support. The software requirements specification establishes the basis for an agreement between customers and contractors or suppliers on how the software product should function (in a market-driven project, these roles may be played by the marketing and development divisions). It is a rigorous assessment of requirements before the more specific system design stages, and its goal is to reduce later redesign. It should also provide a realistic basis for estimating product costs, risks, and schedules. Used appropriately, software requirements specifications can help prevent software project failure.

The software requirements specification document lists sufficient and necessary requirements for the project development. To derive the requirements, the developer needs to have a clear and thorough understanding of the products under development. This is achieved through detailed and continuous communication with the project team and customer throughout the software development process. The SRS may be one of a contract's deliverable data item descriptions or have other forms of organizationally mandated content. Typically, an SRS is written by a technical writer, a systems architect, or a software programmer.

Structure

An example organization of an SRS is as follows:
Purpose
Definitions
Background
System overview
References
Overall description
Product perspective
System Interfaces
User interfaces
Hardware interfaces
Software interfaces
Communication Interfaces
Memory constraints
Design constraints
Operations
Site adaptation requirements
Product functions
User characteristics
Constraints, assumptions and dependencies
Specific requirements
External interface requirements
Performance requirements
Logical database requirement
Software system attributes
Reliability
Availability
Security
Maintainability
Portability
Functional requirements
Functional partitioning
Functional description
Control description
Environment characteristics
Hardware
Peripherals
Users
Other

Requirements smell

Following the idea of code smells, the notion of requirements smell has been proposed to describe issues in a requirements specification where the requirement is not necessarily wrong but could be problematic. Examples of requirements smells are subjective language, ambiguous adverbs and adjectives, superlatives and negative statements. A minimal illustrative check is sketched below, after the reference section.

See also
System requirements specification
Concept of operations
Requirements engineering
Software Engineering Body of Knowledge (SWEBOK)
Design specification
Specification (technical standard)
Formal specification
Abstract type

References

External links
("This standard replaces IEEE 830-1998, IEEE 1233-1998, IEEE 1362-1998 - http://standards.ieee.org/findstds/standard/29148-2011.html")

Software requirements Software documentation IEEE standards
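To make the requirements-smell idea above concrete, the sketch below flags indicator words in a single requirement sentence. The word lists here are illustrative examples chosen for this sketch, not part of any standard; published approaches use larger vocabularies and more sophisticated linguistic analysis.

```python
import re

# Illustrative indicator lists; real quality checkers use validated vocabularies.
SMELL_WORDS = {
    "subjective language": {"user-friendly", "easy", "fast", "efficient", "flexible"},
    "ambiguous adverbs/adjectives": {"usually", "approximately", "almost", "significant"},
    "superlatives": {"best", "most", "maximal", "optimal"},
    "negative statements": {"not", "never", "no"},
}

def find_smells(requirement: str):
    """Return (category, word) pairs found in one requirement sentence."""
    tokens = set(re.findall(r"[a-z-]+", requirement.lower()))
    hits = []
    for category, words in SMELL_WORDS.items():
        for word in sorted(words & tokens):
            hits.append((category, word))
    return hits

req = "The system shall usually respond fast and provide a user-friendly interface."
for category, word in find_smells(req):
    print(f"{category}: '{word}'")
```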
39830388
https://en.wikipedia.org/wiki/JetBrains
JetBrains
JetBrains s.r.o. (formerly IntelliJ Software s.r.o.) is a Czech software development company which makes tools for software developers and project managers. The company has offices in Prague, Saint Petersburg, Moscow, Munich, Boston, Novosibirsk, Amsterdam, Foster City and Marlton, New Jersey. The company offers many integrated development environments (IDEs) for the programming languages Java, Groovy, Kotlin, Ruby, Python, PHP, C, Objective-C, C++, C#, Go, JavaScript, and the domain-specific language SQL. The company created the Kotlin programming language, which can run in a Java virtual machine (JVM), in 2011. InfoWorld magazine awarded the firm its "Technology of the Year Award" in 2011 and 2015.

History

JetBrains, initially called IntelliJ Software, was founded in 2000 in Prague by three Russian software developers: Sergey Dmitriev, Valentin Kipyatkov and Eugene Belyaev. The company's first product was IntelliJ Renamer, a tool for code refactoring in Java. In 2012, CEO Sergey Dmitriev left the company to work in the field of bioinformatics, handing it over to two newly appointed CEOs, Oleg Stepanov and Maxim Shafirov. In 2021 the New York Times claimed, based on unidentified sources, that unknown parties might have embedded malware in JetBrains' software that led to the SolarWinds hack and other widespread security compromises. JetBrains said they had not been contacted by any government or security agency, and that they had not "taken part or been involved in this attack in any way".

Products

IDEs

Programming languages

Kotlin
Kotlin is a statically typed programming language that runs on the Java Virtual Machine and also compiles to JavaScript or native code (via LLVM). The name comes from Kotlin Island, near St. Petersburg. On 7 May 2019, Google declared Kotlin its preferred language for Android application development.

MPS
MPS (Meta Programming System) is an open-source language workbench that focuses on domain-specific languages (DSLs). It uses projectional editing instead of classical textual editing, offering easy language composition, multiple code visualizations, and various non-textual notations for DSL designers. MPS comes with its own code generation engine, which can be used to provide semantics for MPS-based DSLs. It also provides the ability to capture information about other language aspects such as the type system, constraints, data flow, and others.

Team tools

TeamCity
TeamCity is a continuous integration and continuous delivery server developed by JetBrains. It is a server-based web application written in Java. TeamCity is proprietary commercial software with a freemium license for up to 20 build configurations and three free build agents.

Upsource
Upsource is a code review and repository browsing tool. It provides a UI for exploring and monitoring Git, GitHub, Mercurial, Perforce and/or Subversion repositories from a central location. Upsource provides syntax highlighting for multiple programming languages, as well as server-side static code analysis, code-aware navigation, and usage search for the Java, PHP, JavaScript and Kotlin languages.

YouTrack
YouTrack is a proprietary, commercial web-based bug tracker, issue tracking system, and agile project management software developed by JetBrains. It provides development teams with query-based issue search with auto-completion, batch manipulation of issues, extended keyboard-shortcut support, customizable issue attributes, and custom workflows.
YouTrack provides support for both Scrum and Kanban methodologies and allows developers to follow a custom process. YouTrack is localized into English, German, Russian, Spanish and French. YouTrack is available as SaaS and on-premises. The free version includes up to 10 users. KTor Ktor is a new framework developed by Jetbrains in 2021 that includes support for servers based on JavaScript, iOS, Android, and JVM. Ktor is built with platform servers in mind. Tools for data science Datalore Datalore is an intelligent web application for data analysis and visualization, which is focused specifically on the machine learning environment in Python. JetBrains Academy JetBrains Academy is an online platform to learn programming, including such programming languages as Python, Java, and Kotlin. The Academy was introduced by JetBrains in 2019, and reached 200,000 users by July 2020. Certifications were added in November 2021 after community feedback prioritized verifiability of the work done on projects. Integrated Team Environment Space Space is a tool for "integrated team environment" with support for teams, version control, blogs, meetings, CI/CD, document storage and more. The product was announced at KotlinConf 2019 and, after a beta testing period, launched in December 2020. Revenue model JetBrains IDEs have several license options, which feature the same software abilities and differ in their price and terms of use. The team products are available as hosted and installed versions and have free versions for small teams. Many products are free for open source projects, students, teachers and classrooms. Open source projects In 2009, JetBrains open-sourced the core functionality of IntelliJ IDEA by offering the free Community Edition. It is built on the IntelliJ Platform and includes its sources. JetBrains released both under Apache License 2.0. In 2010, Android support became a part of the Community Edition, and two years later Google announced its Android Studio, the IDE for mobile development on Android platform built on the Community Edition of IntelliJ IDEA and an official alternative to Eclipse Android Developer Tool. In June 2015, it was announced that the support of Eclipse ADT would be discontinued making Android Studio the official tool for Android App development. MPS, short for meta programming system, and Kotlin, a statically typed programming language for JVM, are both open source. In January 2020, JetBrains released a geometric monospaced font called JetBrains Mono as the default font for their IDEs under the Apache License 2.0. The font is designed for reading source code by being optimized for reading vertically with support for programming ligatures. It has a larger x-height than Consolas, Fira Mono, or Source Code Pro. Past projects Fabrique was to be a rapid application development (RAD) software framework for building custom web and enterprise applications. A preview version was shown in 2004, but it was never released. Omea is a desktop-based reader and organizer for RSS (and later of every bit of information that comes across one's desktop), the first and so far the only consumer-oriented product from JetBrains. Introduced in 2004, it failed to gain expected popularity. In 2008, having reached v 2.2, Omea was open-sourced under the GNU General Public License (GPL) v2. The product is still available for download, and after the retirement of Google Reader, has gained some attention again. Astella is an IDE for Adobe Flash and Apache Flex. 
This most short-lived JetBrains product was announced in October 2011, just a month before Adobe Systems killed Mobile Flash. References External links Companies based in Prague Czech brands Software companies of the Czech Republic Czech companies established in 2010 Software companies established in 2010
66654
https://en.wikipedia.org/wiki/Populous%20%28video%20game%29
Populous (video game)
Populous is a video game developed by Bullfrog Productions and published by Electronic Arts, released originally for the Amiga in 1989, and is regarded by many as the first God game. With over four million copies sold, Populous is one of the best-selling PC games of all time. The player assumes the role of a deity, who must lead followers through direction, manipulation, and divine intervention, with the goal of eliminating the followers led by the opposite deity. Played from an isometric perspective, the game consists of more than 500 levels, with each level being a piece of land which contains the player's followers and the enemy's followers. The player is tasked with defeating the enemy followers and increasing their own followers' population using a series of divine powers before moving on to the next level. The game was designed by Peter Molyneux, and Bullfrog developed a gameplay prototype via a board game they invented using Lego. The game received critical acclaim upon release, with critics praising the game's graphics, design, sounds and replay value. It was nominated for multiple year-end accolades, including Game of the Year from several gaming publications. The game was ported to many other computer systems and was later supported with multiple expansion packs. It is the first game in the Populous series, preceding Populous II: Trials of the Olympian Gods and Populous: The Beginning. Gameplay The main action window in Populous is viewed from an isometric perspective, and it is set in a "tabletop" on which are set the command icons, the world map (depicted as an open book) and a slider bar that measures the level of the player's divine power or "mana". The game consists of 500 levels, and each level represents an area of land on which live the player's followers and the enemy followers. In order to progress to the next level the player must increase the number of their followers such that they can wipe out the enemy followers. This is done by using a series of divine powers. There are a number of different landscapes the world (depicted on the page in the book) can be, such as desert, rock and lava, snow and ice, etc. and the type of landscape is not merely aesthetic: it affects the development of the player's and enemy's followers. The most basic power is raising and lowering land. This is primarily done in order to provide flat land for the player's followers to build on (though it is also possible to remove land from around the enemy's followers). As the player's followers build more houses they create more followers, and this increases the player's mana level. Increasing the mana level unlocks additional divine powers that allow the player to interact further with the landscape and the population. The powers include the ability to cause earthquakes and floods, create swamps and volcanoes, and turn ordinary followers into more powerful knights. Plot In this game the player adopts the role of a deity and assumes the responsibility of shepherding people by direction, manipulation, and divine intervention. The player has the ability to shape the landscape and grow their civilization – and their divine power – with the overall aim of having their followers conquer an enemy force, which is led by an opposing deity. Development Peter Molyneux led development, inspired by Bullfrog's artist Glenn Corpes having drawn isometric blocks after playing David Braben's Virus. 
Initially Molyneux developed an isometric landscape, then populated it with little people that he called "peeps", but there was no game; all that happened was that the peeps wandered around the landscape until they reached a barrier such as water. He developed the raise/lower terrain gameplay mechanic simply as a way of helping the peeps to move around. Then, as a way of reducing the number of peeps on the screen, he decided that if a peep encountered a piece of blank, flat land, it would build a house, and that a larger area of land would enable a peep to build a larger house. Thus the core mechanics – god-like intervention and the desire for peeps to expand – were created. The endgame – of creating a final battle to force the two sides to enter a final conflict – developed as a result of the developmental games going on for hours and having no firm end. Bullfrog attempted to prototype the gameplay via a board game they invented using Lego, and Molyneux admits that whilst it didn't help the developers to balance the game at all, it provided a useful media angle to help publicise the game. During the test phase the testers requested a cheat code to skip to the end of the game, as there was insufficient time to play through all 500 levels, and it was only at this point that Bullfrog realised that they had not included any kind of ending to the game. The team quickly repurposed an interstitial page from between levels and used it as the final screen. After demoing the game to over a dozen publishers, Bullfrog eventually gained the interest of Electronic Arts, who had a gap in their spring release schedule and was willing to take a chance on the game. Bullfrog accepted their offer, although Molyneux later described the contract as "pretty atrocious:" 10% royalties on units sold, rising to 12% after one million units sold, with only a small up-front payment. Peter Molyneux presented a post-mortem of the games development and work in progress on a related personal project at Game Developers Conference in 2011. Expansion packs Bullfrog produced Populous World Editor, which gave users the ability to modify the appearance of characters, cities, and terrain. An expansion pack called Populous: The Promised Lands added five new types of landscape (the geometric Silly Land, Wild West, Lego style Block Land, Revolution Française, and computer themed Bit Plains). In addition, another expansion disk called Populous: The Final Frontier added a single new landscape-type and was released as a cover disk for The One. Reception Populous was released in 1989 to almost universal critical acclaim. The game received a 5 out of 5 stars in 1989 in Dragon #150 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. Biff Kritzen of Computer Gaming World gave the game a positive review, noting, "as heavy-handed as the premise sounds, it really is a rather light-hearted game." The simple design and layout were praised, as were the game's colourful graphics. In a 1993 survey of pre 20th-century strategy games the magazine gave the game three stars out of five, calling it a "quasi-arcade game, but with sustained play value". MegaTech magazine stated that the game has "super graphics and 500 levels. Populous is both highly original and amazingly addictive, with a constant challenge on offer". They gave the Mega Drive version of Populous an overall score of 91%. 
In the September–October 1989 edition of Games International (Issue #9), John Harrington differed from other reviewers, only giving the game a rating of 2 out of 5, calling it "repetitive" and saying, "Although you take on the role of a god, somehow there is a lack of mystique about this game, and despite the cute graphics, the colourful worlds and the commendably elegant icon-driven game system, this game left me with a less than 'god like' feeling." Computer and Video Games reviewed the Amiga version, giving it an overall score of 96%. Japanese gaming magazine Famitsu gave the SNES version 31 out of 40. Raze gave the Mega Drive version of Populous an overall score of 89%. Zero gave the Amiga version of Populous an overall score of 92%. Your Amiga gave the Amiga version of Populous an overall score of 93%. ST/Amiga Format gave the Amiga version of Populous an overall score of 92%. Maxwell Eden reviewed Populous World Editor for Computer Gaming World, and stated that "Now all Populous fans wanting to be apprentice wizards can share in the magic of that gift. Populous is a great game and PWE is an ideal enhancement that breathes new life into weary bytes. Absolute power was never as incorruptible, nor this creative." Compute! named the game to its list of "nine great games for 1989", stating that with "great graphics, a simple-to-learn interface, and almost unlimited variety, Populous is a must buy for 1989". Peter Molyneux estimated that the game accounted for nearly a third of all the revenue of Electronic Arts in that year. Orson Scott Card in Compute! criticized the game's user interface, but praised the graphics and the ability to "create your own worlds ... you control the world of the game, instead of the other way around". STart in 1990 gave "kudos especially to Peter Molyneux, the creative force behind Populous". The magazine called the Atari ST version "a fascinating, fun and challenging game. It's unlike any other computer game I've ever seen, ever. Don't miss it, unless you are a dyed-in-the-wool arcade gamer who has no time for strategy". Entertainment Weekly picked the game as the No. 16 greatest game available in 1991, saying: "Talk about big-time role-playing. Most video games posit you as a mere sword-wielding, perilously mortal human; in Populous you're a deity. Slow-paced, intricate, and difficult to learn: You literally have to create entire worlds while all the time battling those pesky forces of evil." The game was released in the same month that The Satanic Verses controversy gained publicity in the United States following the publication of The Satanic Verses in the United States. Shortly after release, Bullfrog was contacted by the Daily Mail and was warned that the "good vs evil" nature of the game could lead to them receiving similar fatwā, although this did not materialize. By October 1997, global sales of Populous had surpassed 3 million units, a commercial performance that PC Gamer US described as "an enormous hit". By 2001, Populous had sold four million copies, making it one of the best-selling PC games of all time. Awards In 1990 Computer Gaming World named Populous as Strategy Game of the Year. In 1996, the magazine named it the 30th best game ever, with the editors calling it "the father of real-time strategy games". 
In 1991 it won the Origins Award for Best Military or Strategy Computer Game of 1990, the 1990 Computer Game of the Year in issue 25 of American video game magazine Video Games & Computer Entertainment, and was voted the sixth best game of all time in Amiga Power. In 1992 Mega placed the game at No. 25 in their Top Mega Drive Games of All Time. In 1994, PC Gamer US named Populous as the third best computer game ever. The editors hailed it as "unbelievably addictive fun, and one of the most appealing and playable strategy games of all time." In 1999, Next Generation listed Populous as number 44 on their "Top 50 Games of All Time", commenting that, "A perfect blend of realtime strategy, resource management, and more than a little humor, it remains unsurpassed in the genre it created." In 2018, Complex placed the game 98th on their "The Best Super Nintendo Games of All Time". Legacy In 1990 Bullfrog used the Populous engine to develop Powermonger, a strategic combat-oriented game with similar mechanics to Populous, but with a 3-dimensional graphical interface. In 1991 they developed and released a true sequel, Populous II: Trials of the Olympian Gods, and in 1998 a further direct sequel, Populous: The Beginning. Populous was also released on the SNES, developed by Imagineer as one of the original titles for the console in Japan, and features the addition of a race based on the Three Little Pigs. Populous DS, a new version of the game (published by Xseed Games in America and Rising Star Games in Europe), was developed by Genki for the Nintendo DS and released 11 November 2008. The game allows the user to shape the in-game landscape using the DS's stylus. It also features a multiplayer mode allowing four players to play over a wireless connection. Populous has been re-released through GOG.com and on Origin through the Humble Origin Bundle sale. It runs under DOSBox. The browser-based game Reprisal was created in 2012 by Electrolyte and Last17 as a homage to Populous. Godus (formerly Project GODUS) was revealed as a URL on the face of Curiosity – What's Inside the Cube?, and "is aimed to reimagine" Populous. References External links Populous at the Hall of Light 1989 video games Acclaim Entertainment games Acorn Archimedes games Amiga games Atari ST games Bullfrog Productions games DOS games Electronic Arts games FM Towns games Game Boy games Games commercially released with DOSBox God games Imagineer games Classic Mac OS games Master System games Online games Sharp X68000 games NEC PC-9801 games Origins Award winners Populous (series) Real-time strategy video games Sega Genesis games Super Nintendo Entertainment System games TurboGrafx-CD games Video games developed in the United Kingdom Video games scored by Rob Hubbard Video games scored by Yoshio Tsuru Video games with expansion packs Video games with isometric graphics
27878816
https://en.wikipedia.org/wiki/Motorola%20i1
Motorola i1
The Motorola i1 is an Internet-enabled smartphone by Motorola, running the Android operating system designed by Google. It was the first Android smartphone for iDEN-based networks, which use older 2G technology rather than modern CDMA and GSM networks, and only support data rates up to 19.2 kbps. The Motorola i1 also uses Wi-Fi to access the Internet at higher speeds. It was announced on March 23, 2010, and launched with Boost Mobile in the US on June 20, 2010, for a retail price of $350 without a contract. The Motorola i1 is available in the United States for Boost Mobile, Sprint Nextel and SouthernLINC, in Canada for Telus (Mike) and in Mexico and other Latin American countries for Nextel. See also List of Android devices Android (operating system) Galaxy Nexus References Mobile phones introduced in 2010 Android (operating system) devices Linux-based devices Motorola mobile phones Smartphones Touchscreen portable media players
81926
https://en.wikipedia.org/wiki/Blender%20%28software%29
Blender (software)
Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games. Blender's features include 3D modelling, UV unwrapping, texturing, raster graphics editing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, sculpting, animating, match moving, rendering, motion graphics, video editing, and compositing.

History

The Dutch animation studio NeoGeo (not associated with the Neo Geo video game hardware entity) started to develop Blender as an in-house application, and based on the timestamps for the first source files, January 2, 1994 is considered to be Blender's birthday. Version 1.00 was released in January 1995, with the primary author being company co-owner and software developer Ton Roosendaal. The name Blender was inspired by a song by the Swiss electronic band Yello, from the album Baby, which NeoGeo used in its showreel. Some design choices and experiences for Blender were carried over from an earlier software application, called Traces, that Roosendaal developed for NeoGeo on the Commodore Amiga platform during the 1987–1991 period.

On January 1, 1998, Blender was released publicly online as SGI freeware. NeoGeo was later dissolved, and its client contracts were taken over by another company. After NeoGeo's dissolution, Ton Roosendaal founded Not a Number Technologies (NaN) in June 1998 to further develop Blender, initially distributing it as shareware until NaN went bankrupt in 2002. At that point, development of Blender was discontinued.

In May 2002, Roosendaal started the non-profit Blender Foundation, whose first goal was to find a way to continue developing and promoting Blender as a community-based open-source project. On July 18, 2002, Roosendaal started the "Free Blender" campaign, a crowdfunding precursor. The campaign aimed at open-sourcing Blender for a one-time payment of €100,000 (US$100,670 at the time), with the money being collected from the community. On September 7, 2002, it was announced that they had collected enough funds and would release the Blender source code. Today, Blender is free and open-source software, largely developed by its community along with 24 employees of the Blender Institute.

The Blender Foundation initially reserved the right to use dual licensing so that, in addition to GPL-2.0-or-later, Blender would have been available also under the Blender License, which did not require disclosing source code but required payments to the Blender Foundation. However, they never exercised this option and suspended it indefinitely in 2005. Blender is solely available under "GNU GPLv2 or any later" and was not updated to the GPLv3, as "no evident benefits" were seen.

In 2019, with the release of version 2.80, the integrated game engine for making and prototyping video games was removed; Blender's developers recommended that users migrate to more powerful open-source game engines such as Godot instead.

Suzanne

In February 2002, it was clear that the company behind Blender, NaN, could not survive and would close its doors in March. Nevertheless, they put out one more release, Blender 2.25. As a sort-of Easter egg and last personal tag, the artists and developers decided to add a 3D model of a chimpanzee head (called a "monkey" in the software).
It was created by Willem-Paul van Overbruggen (SLiD3), who named it Suzanne after the orangutan in the Kevin Smith film Jay and Silent Bob Strike Back. Suzanne is Blender's alternative to more common test models such as the Utah Teapot and the Stanford Bunny. A low-polygon model with only 500 faces, Suzanne is included in Blender and often used as a quick and easy way to test materials, animations, rigs, textures, and lighting setups. It is as easily added to a scene as a cube or plane. The largest Blender contest gives out an award called the Suzanne Award.

Release history

The following table lists notable developments during Blender's release history: green indicates the current version, yellow indicates currently supported versions, and red indicates versions that are no longer supported (though many later versions can still be used on modern systems).

As of 2021, official releases of Blender for Microsoft Windows and Linux, as well as a port for FreeBSD, are available in 64-bit versions. Blender is available for Windows 8.1 and above, and Mac OS X 10.13 and above. Blender 2.76b was the last supported release for Windows XP and version 2.63 was the last supported release for PowerPC. Blender 2.83 LTS and 2.92 were the last supported versions for Windows 7. In 2013, Blender was released on Android as a demo, but it has not been updated since.

Features

Modeling

Primitives
Blender has support for a variety of geometric primitives, including polygon meshes, fast subdivision surface modeling, Bézier curves, NURBS surfaces, metaballs, icospheres, text, and an n-gon modeling system called B-mesh.

Modifiers
Modifiers apply non-destructive effects which can be applied upon rendering or exporting.

Sculpting
Blender has multi-res digital sculpting, which includes dynamic topology, maps baking, remeshing, re-symmetrize, and decimation. The latter is used to simplify models for export, e.g. for use in a game.

Geometry Nodes
Blender's Geometry Nodes is a node-based system for procedurally and non-destructively creating and manipulating geometry. It was first added in Blender 2.92, where it focused on object scattering and instancing. It takes the form of a modifier, so it can be stacked with other modifiers. The system uses object attributes, which can be modified and overridden with string inputs; attributes can include position, normals and UV maps, and all attributes can be viewed in an attribute spreadsheet editor. Geometry Nodes can also create primitive meshes such as cubes, spheres, icospheres and cylinders. In Blender 3.0, support for creating and modifying curve objects was added to Geometry Nodes, and the workflow was completely redesigned around fields in order to make the system more intuitive and work like shader nodes.

Hard surface modeling
Hard surface modeling is usually used to design hard surfaces such as cars and machines. It is usually done in a non-destructive manner (using as many modifiers as possible), but it can be destructive.

Simulation
Blender can be used to simulate smoke, rain, dust, cloth, fluids, hair, and rigid bodies.

Fluid simulation
The fluid simulator can be used for simulating liquids, such as water hitting a cup. It uses the lattice Boltzmann method (LBM) to simulate the fluids and allows extensive adjustment of the particle count and resolution. The particle physics fluid simulation creates particles that follow the Smoothed-particle hydrodynamics method.
Simulation tools for soft body dynamics including mesh collision detection, LBM fluid dynamics, smoke simulation, Bullet rigid body dynamics, and an ocean generator with waves.
A particle system that includes support for particle-based hair.
Real-time control during physics simulation and rendering.

In Blender 2.82 a new fluid simulation system called Mantaflow was added, replacing the old system. In Blender 2.92 an additional fluid solver, APIC, was added as an alternative to FLIP; it improves vortex detail and gives more stable calculations, and is built on the improved Mantaflow code.

Animation
Keyframed animation tools including inverse kinematics, armature (skeletal), hook, curve and lattice-based deformations, shape animations, non-linear animation, constraints, and vertex weighting.

Grease Pencil
Blender's Grease Pencil tools allow for 2D animation within a full 3D pipeline.

Rendering
Internal render engine with scanline rendering, indirect lighting, and ambient occlusion that can export in a wide variety of formats.
A path tracer render engine called Cycles, which can take advantage of the GPU for rendering. Cycles has supported the Open Shading Language since Blender 2.65. Since version 2.92, hybrid rendering is possible with OptiX, with tiles calculated on the GPU in combination with the CPU.
EEVEE is a new physically based real-time renderer. It works both as a renderer for final frames, and as the engine driving Blender's real-time viewport for creating assets.

Texture and shading
Blender allows procedural and node-based textures, as well as texture painting, projective painting, vertex painting, weight painting and dynamic painting.

Post-production
Blender has a node-based compositor within the rendering pipeline, accelerated with OpenCL. Blender also includes a non-linear video editor called the Video Sequence Editor (VSE), with support for effects like Gaussian blur, color grading, fade and wipe transitions and other video transformations. However, there is no built-in multi-core support for rendering video with the VSE.

Plugins/addons and scripts
Blender supports Python scripting for the creation of custom tools, prototyping, game logic, importing/exporting from other formats and task automation. This allows for integration with several external render engines through plugins/addons.
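A minimal sketch of the Python scripting described above is shown below. It assumes it is run inside Blender's bundled interpreter (where the bpy module is available) and simply acts on whatever object is active; the modifier settings and frame numbers are arbitrary examples of the kind of task automation the API is used for.

```python
import math
import bpy  # available only inside Blender's bundled Python interpreter

# Work on whatever object is currently active (e.g. the default cube).
obj = bpy.context.active_object

# Add a non-destructive Subdivision Surface modifier, as described above.
subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2         # viewport subdivision level
subsurf.render_levels = 3  # level used when rendering

# Automate a simple keyframed animation: rotate 90 degrees over 48 frames.
obj.rotation_euler = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="rotation_euler", frame=1)
obj.rotation_euler[2] = math.radians(90)
obj.keyframe_insert(data_path="rotation_euler", frame=48)
```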
Deprecated features
The Blender Game Engine was a built-in real-time graphics and logic engine with features including collision detection, a dynamics engine, and programmable logic. It also allowed the creation of stand-alone, real-time applications ranging from architectural visualization to video games. In April 2018 it was removed from the upcoming Blender 2.8 release series, having long lagged behind other game engines such as the open-source Godot and Unity. In the 2.8 announcement, the Blender team specifically mentioned the Godot engine as a suitable replacement for migrating Blender Game Engine users.

Blender Internal
Blender Internal, a biased rasterization engine / scanline renderer used in previous versions of Blender, was also removed for the 2.80 release in favor of the new "EEVEE" renderer, a realtime PBR renderer.

File format

Blender features an internal file system that can pack multiple scenes into a single file (called a ".blend" file). Most of Blender's ".blend" files are forward, backward, and cross-platform compatible with other versions of Blender, with the following exceptions:
Loading animations stored in post-2.5 files in Blender pre-2.5. This is due to the reworked animation subsystem introduced in Blender 2.5 being inherently incompatible with older versions.
Loading meshes stored in post-2.63 files in Blender pre-2.63. This is due to the introduction of BMesh, a more versatile mesh format.
Blender 2.8 ".blend" files are no longer fully backward compatible, causing errors when opened in previous versions.

All scenes, objects, materials, textures, sounds, images, and post-production effects for an entire animation can be stored in a single ".blend" file. Data loaded from external sources, such as images and sounds, can also be stored externally and referenced through either an absolute or relative pathname. Likewise, ".blend" files themselves can also be used as libraries of Blender assets. Interface configurations are retained in the ".blend" files. A wide variety of import/export scripts that extend Blender capabilities (accessing the object data via an internal API) make it possible to interoperate with other 3D tools.

Blender organizes data as various kinds of "data blocks" (akin to glTF), such as Objects, Meshes, Lamps, Scenes, Materials, Images and so on. An object in Blender consists of multiple data blocks – for example, what the user would describe as a polygon mesh consists of at least an Object and a Mesh data block, and usually also a Material and many more, linked together. This allows various data blocks to refer to each other. There may be, for example, multiple Objects that refer to the same Mesh; subsequent editing of the shared Mesh then results in shape changes in all Objects using it. Objects, meshes, materials, textures, etc. can also be linked to from other .blend files, which is what allows the use of .blend files as reusable resource libraries.

Import and export
The software supports a variety of 3D file formats for import and export, among them Alembic, 3D Studio (3DS), Filmbox (FBX), Autodesk (DXF), SVG, STL (for 3D printing), UDIM, USD, VRML, WebM, X3D and OBJ.

User interface

Commands
Most of the commands are accessible via hotkeys. There are also comprehensive graphical menus. Numeric buttons can be "dragged" to change their value directly without the need to aim at a particular widget, as well as being set using the keyboard. Both sliders and number buttons can be constrained to various step sizes with the Ctrl and Shift modifier keys. Python expressions can also be typed directly into number entry fields, allowing mathematical expressions to specify values.

Modes
Blender includes many modes for interacting with objects, the two primary ones being Object Mode and Edit Mode, which are toggled with the Tab key. Object Mode is used to manipulate individual objects as a unit, while Edit Mode is used to manipulate the actual object data. For example, Object Mode can be used to move, scale, and rotate entire polygon meshes, and Edit Mode can be used to manipulate the individual vertices of a single mesh. There are also several other modes, such as Vertex Paint, Weight Paint, and Sculpt Mode.

Workspaces
The Blender GUI builds its own tiled windowing system on top of one or multiple windows provided by the underlying platform. One platform window (often sized to fill the screen) is divided into sections and subsections that can be of any type of Blender's views or window types. The user can define multiple layouts of such Blender windows, called screens, and switch quickly between them by selecting from a menu or with keyboard shortcuts.
Each window type's own GUI elements can be controlled with the same tools that manipulate the 3D view; for example, one can zoom in and out of GUI buttons using the same controls used to zoom in and out of the 3D viewport. The GUI viewport and screen layout are fully user-customizable. It is possible to set up the interface for specific tasks such as video editing, UV mapping or texturing by hiding features not used for the task.

Rendering engines

Cycles
Cycles is a path-tracing render engine that is designed to be interactive and easy to use, while still supporting many features. It has been included with Blender since 2011, with the release of Blender 2.61. Cycles supports CPU acceleration using the AVX, AVX2 and AVX-512 extensions of modern hardware.

GPU rendering
Cycles supports GPU rendering, which is used to speed up rendering times. There are three GPU rendering modes: CUDA, which is the preferred method for older Nvidia graphics cards; OptiX, which utilizes the hardware ray-tracing capabilities of Nvidia's Turing and Ampere architectures; and OpenCL, which supports rendering on AMD graphics cards, with Intel Iris and Xe support added in 2.92. The toolkit software associated with these rendering modes does not come with Blender and needs to be installed and configured separately, as per the respective vendors' instructions. Multiple GPUs are also supported, which can be used to create a render farm – although having multiple GPUs doesn't increase the available memory, because each GPU can only access its own memory. Since version 2.90, this limitation can be overcome on Nvidia systems using NVLink.

Integrator
The integrator is the core rendering algorithm used for lighting computations. Cycles currently supports a path tracing integrator with direct light sampling. It works well for a variety of lighting setups, but it is not as suitable for caustics and certain other complex lighting situations. Rays are traced from the camera into the scene, bouncing around until they find a light source (a lamp, an object material emitting light, or the world background), or until they are simply terminated based on the maximum number of bounces determined in the light path settings for the renderer. To find lamps and surfaces emitting light, both indirect light sampling (letting the ray follow the surface bidirectional scattering distribution function, or BSDF) and direct light sampling (picking a light source and tracing a ray towards it) are used.

The two types of integrators
The default path tracing integrator is a "pure" path tracer. This integrator works by sending several light rays that act as photons from the camera out into the scene. These rays will eventually hit either a light source, an object, or the world background. If these rays hit an object, they will bounce based on the angle of impact and continue bouncing until a light source has been reached or until the user-determined maximum number of bounces is exceeded, in which case the ray is terminated and contributes a black, unlit sample. Multiple rays are calculated and averaged out for each pixel, a process known as "sampling". The sampling number is set by the user and greatly affects the final image: lower sampling often results in more noise and has the potential to create "fireflies" (uncharacteristically bright pixels), while higher sampling greatly reduces noise but also increases render times. The alternative is a branched path tracing integrator, which works mostly the same way. Branched path tracing splits the light rays at each intersection with an object according to different surface components, and takes all lights into account for shading instead of just one. This added complexity makes computing each ray slower but reduces noise in the render, especially in scenes dominated by direct (one-bounce) lighting.
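The role of the per-pixel sample count described above can be illustrated with a toy Monte Carlo estimate. This is not Cycles' code: each "path" here is just the true pixel brightness plus random variation, and averaging more of them shrinks the noise roughly in proportion to the square root of the sample count.

```python
import random
import statistics

def sample_path(true_brightness=0.5):
    """One illustrative 'path': a noisy light contribution for a pixel.

    A real path tracer would trace a ray through the scene and bounce it off
    surfaces; here the result is simply the true value plus random variation.
    """
    return max(0.0, random.gauss(true_brightness, 0.4))

def render_pixel(samples):
    """Average `samples` path contributions, as a path tracer does per pixel."""
    return sum(sample_path() for _ in range(samples)) / samples

random.seed(1)
for samples in (4, 16, 64, 256):
    estimates = [render_pixel(samples) for _ in range(200)]
    spread = statistics.stdev(estimates)  # noise: how much the pixel value varies
    print(f"{samples:4d} samples -> pixel ~ {statistics.mean(estimates):.3f}, noise ~ {spread:.3f}")
```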
Open Shading Language
Blender users can create their own nodes using the Open Shading Language (OSL), although this feature is not supported on GPUs.

Materials
Materials define the look of meshes, NURBS curves, and other geometric objects. They consist of three shaders, defining the mesh's surface appearance, the volume inside it, and the surface displacement.

Surface shader
The surface shader defines the light interaction at the surface of the mesh. One or more bidirectional scattering distribution functions, or BSDFs, can specify whether incoming light is reflected, refracted into the mesh, or absorbed. The alpha value is one measure of translucency.

Volume shader
When the surface shader does not reflect or absorb light, the light enters the volume (light transmission). If no volume shader is specified, it will pass straight through (or be refracted, see refractive index or IOR) to the other side of the mesh. If one is defined, the volume shader describes the light interaction as it passes through the volume of the mesh. Light may be scattered, absorbed, or even emitted at any point in the volume.

Displacement shader
The shape of the surface may be altered by displacement shaders. In this way, textures can be used to make the mesh surface more detailed. Depending on the settings, the displacement may be virtual (only modifying the surface normals to give the impression of displacement, a technique also known as bump mapping), real, or a combination of real displacement with bump mapping.

EEVEE
EEVEE (or Eevee) is a real-time PBR renderer included in Blender from version 2.8. This render engine was given the nickname Eevee, after the Pokémon. The name was later made into the backronym "Extra Easy Virtual Environment Engine", or EEVEE.

Workbench
The default 3D viewport drawing system, used for modeling, texturing and similar tasks.

External renderers
Free and open-source:
Mitsuba Renderer
YafaRay (previously Yafray)
LuxCoreRender (previously LuxRender)
Appleseed Renderer
POV-Ray
NOX Renderer
Armory3D – a free and open source game engine for Blender written in Haxe
Radeon ProRender – Radeon ProRender for Blender
Malt Render – a non-photorealistic renderer with GLSL shading capabilities

Proprietary:
Pixar RenderMan – Blender render addon for RenderMan
Octane Render – OctaneRender plugin for Blender
Indigo Renderer – Indigo for Blender
V-Ray – V-Ray for Blender; V-Ray Standalone is needed for rendering
Maxwell Render – B-Maxwell addon for Blender
Thea Render – Thea for Blender
Corona Renderer – Blender To Corona exporter; Corona Standalone is needed for rendering

Past renderers
Blender Internal – Blender's non-photorealistic renderer. It was removed from Blender in version 2.8. Render clay is an add-on by Fabio Russo; it overwrites materials in Blender Internal or Cycles with a clay material in a chosen diffuse color. It was included in Blender version 2.79.
Blender Game Engine – a real-time renderer removed in 2019 with the release of 2.8.

Development

Since the opening of the source code, Blender has experienced significant refactoring of the initial codebase and major additions to its feature set.
Improvements include an animation system refresh; a stack-based modifier system; an updated particle system (which can also be used to simulate hair and fur); fluid dynamics; soft-body dynamics; GLSL shader support in the game engine; advanced UV unwrapping; a fully recoded render pipeline, allowing separate render passes and "render to texture"; node-based material editing and compositing; and projection painting. Part of these developments was fostered by Google's Summer of Code program, in which the Blender Foundation has participated since 2005.

Blender 2.8

Official planning for the next major revision of Blender after the 2.7 series began in the latter half of 2015, with potential targets including a more configurable UI (dubbed "Blender 101"), support for physically based rendering (PBR) (dubbed EEVEE, for "Extra Easy Virtual Environment Engine") to bring improved realtime 3D graphics to the viewport, allowing the use of C++11 and C99 in the codebase, moving to a newer version of OpenGL and dropping support for versions before 3.2, and a possible overhaul of the particle and constraint systems. The Blender Internal renderer was removed from 2.8.

Code Quest was a project started in April 2018 and based in Amsterdam at the Blender Institute. The goal of the project was to get a large development team working in one place, in order to speed up the development of Blender 2.8. The Code Quest project ended on June 29, 2018, and on July 2 the alpha version was completed. Beta testing commenced on November 29, 2018 and was anticipated to take until July 2019. Blender 2.80 was released on July 30, 2019.

Cycles X

On 23 April 2021, the Blender Foundation announced the Cycles X project, an effort to improve the Cycles architecture for future development. Key changes include a new kernel, the removal of tiled rendering (replaced with progressive refinement), the removal of branched path tracing, and the removal of OpenCL support. Volumetric rendering is also to be replaced with better algorithms. Cycles X had been accessible only on an experimental branch until 21 September 2021, when it was merged into the master branch.

Support

Blender is extensively documented on its website. There are also a number of online communities dedicated to support, such as the Blender Stack Exchange.

Modified versions

Due to Blender's open-source nature, other programs have tried to take advantage of its success by repackaging and selling cosmetically modified versions of it. Examples include IllusionMage, 3DMofun, 3DMagix, and Fluid Designer, the latter being recognized as Blender-based.

Use in industry

Blender started as an in-house tool for NeoGeo, a Dutch commercial animation company. The first large professional project that used Blender was Spider-Man 2, where it was primarily used to create animatics and pre-visualizations for the storyboard department. The French-language film Friday or Another Day was the first 35 mm feature film to use Blender for all the special effects, made on Linux workstations. It won a prize at the Locarno International Film Festival. The special effects were by Digital Graphics of Belgium. Tomm Moore's The Secret of Kells, which was partly produced in Blender by the Belgian studio Digital Graphics, was nominated for an Oscar in the category "Best Animated Feature Film". Blender has also been used for shows on the History Channel, alongside many other professional 3D graphics programs.
Plumíferos, a commercial animated feature film created entirely in Blender, premiered in February 2010 in Argentina. Its main characters are anthropomorphic talking animals.

Blender is used by NASA for many publicly available 3D models. Many 3D models on NASA's 3D resources page are in a native .blend format. Special effects for episode 6 of Red Dwarf season X, screened in 2012, were created using Blender, as confirmed by Ben Simonds of Gecko Animation. Blender was used for previsualization in Captain America: The Winter Soldier. Some promotional artwork for Super Smash Bros. for Nintendo 3DS and Wii U was partially created using Blender. The alternative hip-hop group Death Grips has used Blender to produce music videos; a screenshot from the program is briefly visible in the music video for Inanimate Sensation. The visual effects for the TV series The Man in the High Castle were done in Blender, with some of the particle simulations relegated to Houdini.

NASA also used Blender to develop Experience Curiosity, an interactive web application created to celebrate the third anniversary of the Curiosity rover landing on Mars. The app makes it possible to operate the rover, control its cameras and the robotic arm, and reproduces some of the prominent events of the Mars Science Laboratory mission. The application was presented at the beginning of the WebGL section at SIGGRAPH 2015.

Blender was used for both CGI and compositing for the movie Hardcore Henry. The visual effects in the feature film Sabogal were done in Blender. VFX supervisor Bill Westenhofer used Blender to create the character "Murloc" in the 2016 film Warcraft. Director David F. Sandberg used Blender for multiple shots in Lights Out and Annabelle: Creation. Blender was used for parts of the credit sequences in Wonder Woman. Blender was used for the animation in the film Cinderella the Cat. The 2018 film Next Gen was fully created in Blender by Tangent Animation; a team of developers worked on improving Blender for internal use, but it is planned to eventually add those improvements to the official Blender build. The 2019 film I Lost My Body was largely animated using Blender's Grease Pencil tool by drawing over CGI animation, allowing for a real sense of camera movement that is harder to achieve in purely traditionally drawn animation.

Ubisoft Animation Studio will use Blender to replace its internal content creation software starting in 2020. Khara and its subsidiary Project Studio Q are trying to replace their main tool, 3ds Max, with Blender. They started "field verification" of Blender during their ongoing production of Evangelion: 3.0+1.0. They also signed up as Corporate Silver and Bronze members of the Blender Development Fund. The 2020 film Wolfwalkers was partially created using Blender. In 2021 SPA Studios started hiring Blender artists. The 2021 Netflix production Maya and the Three was created using Blender. Warner Bros. Animation started hiring Blender artists in 2022.

Open projects

Since 2005, every 1–2 years the Blender Foundation has announced a new creative project to help drive innovation in Blender.
In response to the success of the first open movie project, Elephants Dream, in 2006 the Blender Foundation founded the Blender Institute to be in charge of additional projects, with two projects announced at first: Big Buck Bunny, also known as Project Peach (a "furry and funny" short open animated film project); and Yo Frankie!, or Project Apricot, an open game utilizing the CrystalSpace game engine that reused some of the assets created for Big Buck Bunny. Elephants Dream (Project Orange) In September 2005, some of the most notable Blender artists and developers began working on a short film using primarily free software, in an initiative known as the Orange Movie Project hosted by the Netherlands Media Art Institute (NIMk). The codename, "Orange", in reference to the fruit, started the trend of giving each project a different fruity name. The resulting film, Elephants Dream, premiered on March 24, 2006. Big Buck Bunny (Project Peach) On October 1, 2007, a new team started working on a second open project, "Peach", for the production of the short movie Big Buck Bunny. This time, however, the creative concept was different. Instead of the deep and mystical style of Elephants Dream, things are more "funny and furry" according to the official site. The movie had its premiere on April 10, 2008. This later made its way to Nintendo 3DS's Nintendo Video between 2012 and 2013. Yo Frankie! (Open Game Project: Apricot) "Apricot" was the project name for the production of a game based on the universe and characters of the Peach movie (Big Buck Bunny) using free software, including the Crystal Space framework. The resulting game is titled Yo Frankie!. The project started on February 1, 2008, and development was completed at the end of July. A finalized product was expected at the end of August; however, the release was delayed. The game was eventually released on December 9, 2008, under either the GNU GPL or LGPL, with all content being licensed under Creative Commons Attribution 3.0. Sintel (Project Durian) The Blender Foundation's Project Durian (in keeping with the tradition of fruits as code names) was this time chosen to make a fantasy action epic of about twelve minutes in length, starring a teenage girl and a young dragon as the main characters. The film premiered online on September 30, 2010. A game based on Sintel was officially announced on Blenderartists.org on May 12, 2010. Many of the new features integrated into Blender 2.5 and beyond were a direct result of Project Durian. Tears of Steel (Project Mango) On October 2, 2011, the fourth open movie project, codenamed "Mango", was announced by the Blender Foundation. A team of artists assembled using an open call of community participation. It is the first Blender open movie to use live action as well as CG. Filming for Mango started on May 7, 2012, and the movie was released on September 26, 2012. As with the previous films, all footage, scenes and models were made available under a free content compliant Creative Commons license. According to the film's press release, "The film's premise is about a group of warriors and scientists, who gather at the 'Oude Kerk' in Amsterdam to stage a crucial event from the past, in a desperate attempt to rescue the world from destructive robots." Cosmos Laundromat: First Cycle (Project Gooseberry) On January 10, 2011, Ton Roosendaal announced that the fifth open movie project would be codenamed "Gooseberry" and that its goal would be to produce a feature-length animated film. 
He speculated that production would begin sometime between 2012 and 2014. The film was to be written and produced by a coalition of international animation studios. The studio lineup was announced on January 28, 2014, and production began soon thereafter. As of March 2014, a moodboard had been constructed and development goals set. The initial ten minute pilot was released on YouTube on August 10, 2015. It won the SIGGRAPH 2016 Computer Animation Festival Jury's Choice award. Glass Half On November 13, 2015, Glass Half was released in HD format. This project demonstrates real-time rendering capabilities using OpenGL for 3D cartoon animation. It also marks the end of the fruit naming scheme. Glass Half was financed by the Blender Foundation with proceeds from the Blender Cloud. It is a short, roughly three-minute long comedy in a gibberish language that addresses subjectivity in art. Caminandes Caminandes is a series of animated short films envisioned by Pablo Vazquez of Argentina. It centers on a llama named Koro in Patagonia and his attempts to overcome various obstacles. The series only became part of the Open Movie Project starting with the second episode. Caminandes 1: Llama Drama (2013) Caminandes 2: Gran Dillama (2013) Caminandes 3: Llamigos (2016) Agent 327: Operation Barbershop Agent 327: Operation Barbershop is a three-minute teaser released on May 15, 2017 for a planned full-length animated feature. The three-minute teaser is uploaded to YouTube by the official Blender Studio channel. Co-directed by Colin Levy and Hjalti Hjálmarsson, it is based on the classic Dutch comic series Agent 327. This teaser film also acts as a proof-of-concept to attract funding for the full-length animated feature. Agent 327: Operation Barbershop showcases the latest technology of Cycles engine, the render engine that has been included in Blender since 2011. Assets from this teaser have been released under Creative Commons license via Blender Cloud. Hero Hero is the first open movie project to demonstrate the capabilities of the Grease Pencil, a 2D animation tool in Blender 2.8. It was put on YouTube on April 16, 2018. It has a roughly four-minute runtime, which includes over a minute of "behind-the-scenes" "making-of" footage. It showcases the art of Spanish animator Daniel Martínez Lara. Spring "Spring is the story of a shepherd girl and her dog, who face ancient spirits to continue the cycle of life. This poetic and visually stunning short film was written and directed by Andy Goralczyk, inspired by his childhood in the mountains of Germany." On October 25, 2017, an upcoming animated short film named Spring was announced, to be produced by Blender Studio. Spring was released April 4, 2019. Its purpose was to test Blender 2.8's capabilities before its official release. Coffee Run "Fueled by caffeine, a young woman runs through the bittersweet memories of her past relationship." On May 29, 2020, the open movie Coffee Run was released. It was the first open movie to be rendered in the EEVEE render engine. It was released on July 30, 2019. Sprite Fright Sprite Fright is the 13th open movie. It is set in Britain and draws inspiration from 1980's horror comedy. It is directed by Pixar story artist Mathew Luhn with Hjalti Hjalmarsson. It is about a group of teenagers being attacked and killed by Sprites after they litter the forest. It premiered at Eye Film in the Netherlands on 28 October 2021 and was publicly released on Blender Studio and YouTube on 29th October 2021. 
Online services Blender Foundation Blender Studio The Blender Studio platform, launched in March 2014 as Blender Cloud, is a subscription-based cloud computing platform where members can access Blender add-ons and courses and keep track of the production of Blender Studio's open movies. It is currently operated by the Blender Studio, formerly a part of the Blender Institute. It was launched to promote and raise funds for Project: Gooseberry, and is intended to replace the selling of DVDs by the Blender Foundation with a subscription-based model for file hosting, asset sharing and collaboration. Blender add-ons included in Blender Studio are CloudRig, Blender Kitsu, Contact sheet Add-on, Blender Purge and Shot Builder. It was rebranded from Blender Cloud to Blender Studio on 22 October 2021. The Blender Development Fund The Blender Development Fund is a subscription program through which individuals and companies can fund Blender's development. Corporate members include Epic Games, Nvidia, Microsoft, Apple, Unity, Intel, Decentraland, Amazon Web Services, Meta, AMD, Adobe and many more. One-time donations to Blender are also accepted, via bank transfer, PayPal and cryptocurrency. Blender ID The Blender ID is a unified login for Blender software and service users, providing a login for Blender Studio, the Blender Store, the Blender Conference, Blender Network, Blender Development Fund, and the Blender Foundation Certified Trainer Program. Blender Open Data The Blender Open Data is a platform to collect, display, and query benchmark data produced by the Blender community with the related Blender Benchmark software. Blender Network The Blender Network was an online platform to enable professionals to conduct business with Blender and provide online support. It was terminated on 31 March 2021. Blender Store A store to buy Blender merchandise. Blender Community BlenderArtists BlenderArtists is an online forum for artists to share their work and to communicate with each other. Blender Market Blender Market is an online store for artists and developers to sell assets and Blender add-ons. A portion of the money made from Blender Market products goes to the Blender Development Fund. It was founded by CG Cookie, a website that provides tutorials and training for Blender. Notes See also ManuelbastioniLAB, a Blender add-on for the parametric 3D modeling of photorealistic humanoid characters MakeHuman List of free and open-source software packages References Further reading External links 1995 software 3D animation software 3D graphics software 3D computer graphics software for Linux 3D rendering software for Linux AmigaOS 4 software Articles containing video clips Blender Foundation Computer science in the Netherlands Computer-aided design software for Linux Cross-platform free software Formerly proprietary software Free 3D graphics software Free computer-aided design software Free software programmed in C Free software programmed in C++ Free software programmed in Python Global illumination software Information technology in the Netherlands Visual effects software IRIX software MacOS graphics-related software MorphOS software Motion graphics software for Linux Portable software Software that uses FFmpeg Technical communication tools Video game development software Windows graphics-related software 2D animation software
10001565
https://en.wikipedia.org/wiki/USS%20Crouter%20%28DE-11%29
USS Crouter (DE-11)
USS Crouter (DE-11) was an Evarts-class destroyer escort of the United States Navy in commission from 1943 to 1945. The ship was named after Mark Hanna Crouter (1897–1942), U.S. Navy officer and Navy Cross recipient. Namesake Mark Hanna Crouter was born on 3 October 1897 in Baker, Oregon. He graduated from the United States Naval Academy on 7 June 1919. After extensive service at sea and ashore, he served as executive officer on the heavy cruiser USS San Francisco (CA-38). He was killed in the Naval Battle of Guadalcanal. He was posthumously awarded the Navy Cross. Construction and commissioning Crouter was originally intended for transfer to the United Kingdom as BDE-11, but was instead retained by the U.S. Navy. She was laid down on 8 February 1942 at the Boston Navy Yard in Boston, Massachusetts, and launched on 26 January 1943, sponsored by Mrs. M. H. Crouter, widow of Commander Crouter. She was commissioned on 25 May 1943. Service history Departing Boston on 24 July 1943, Crouter deployed to the Pacific Ocean for World War II service. She reached Nouméa, New Caledonia, on 3 September 1943. After several convoy escort voyages to Efate and Espiritu Santo in the New Hebrides and to Viti Levu in the Fiji Islands, she escorted convoys between Nouméa and Port Purvis on Florida Island in the Solomons, aiding in the consolidation of the Solomon Islands until 31 March 1944. After overhaul on the United States West Coast, Crouter escorted a convoy from Pearl Harbor, Hawaii, to Eniwetok between 14 June 1944 and 3 July 1944. Returning to Pearl Harbor, Crouter conducted submarine training exercises, and rescued nine survivors of a crashed PBY Catalina flying boat on 15 July 1944. She departed Pearl Harbor on 3 August 1944 for continued operations with submarines from Majuro between 13 August and 24 October 1944. Arriving at Eniwetok on 26 October 1944, Crouter operated out of that port as convoy escort to Ulithi Atoll, Kossol Roads, and Saipan until 15 March 1945. At San Pedro Bay, Leyte, in the Philippine Islands, Crouter joined the screen of the transport convoy bound for Okinawa, arriving on 1 April 1945 for the invasion landings. She remained on patrol off Okinawa, joining a hunter-killer group from 19 April 1945 to 28 April 1945. Her service in anti-aircraft work included shooting down two suicide planes. Crouter reported to Guam on 21 May 1945 for training with submarines, remaining there through the end of the war and until 18 September 1945. Crouter returned to the United States at San Pedro, California, on 5 October 1945, and was decommissioned on 30 November 1945. She was sold for scrapping on 25 November 1946. Awards Crouter was awarded one battle star for World War II service in the Pacific. References Evarts-class destroyer escorts World War II frigates and destroyer escorts of the United States Ships built in Boston 1943 ships
51001800
https://en.wikipedia.org/wiki/Victor%20W.%20Marek
Victor W. Marek
Victor Witold Marek, formerly Wiktor Witold Marek, also known as Witek Marek (born 22 March 1943), is a Polish mathematician and computer scientist working in the field of theoretical computer science and mathematical logic. Biography Victor Witold Marek studied mathematics at the Faculty of Mathematics and Physics of the University of Warsaw. Supervised by Andrzej Mostowski, he received both a magister degree in mathematics in 1964 and a doctoral degree in mathematics in 1968. He completed his habilitation in mathematics in 1972. In 1970–1971, Marek was a postdoctoral researcher at Utrecht University, the Netherlands, where he worked under Dirk van Dalen. In 1967–1968 as well as in 1973–1975, he was a researcher at the Institute of Mathematics of the Polish Academy of Sciences, Warsaw, Poland. In 1979–1980 and 1982–1983 he worked at the Venezuelan Institute of Scientific Research. In 1976, he was appointed an Assistant Professor of Mathematics at the University of Warsaw. In 1983, he was appointed a professor of computer science at the University of Kentucky. In 1989–1990, he was a Visiting Professor of Mathematics at Cornell University, Ithaca, New York. In 2001–2002, he was a visitor at the Department of Mathematics of the University of California, San Diego. In 2013, Professor Marek was the Chair of the Program Committee of the scientific conference commemorating Andrzej Mostowski's Centennial. Legacy Teaching He has supervised a number of graduate theses and projects. He was an advisor of 16 doctoral candidates both in mathematics and computer science. In particular, he advised dissertations in mathematics by Małgorzata Dubiel-Lachlan, Roman Kossak, Adam Krawczyk, Tadeusz Kreid, Roman Murawski, Andrzej Pelc, Zygmunt Ratajczyk, Marian Srebrny, and Zygmunt Vetulani. In computer science his students were V. K. Cody Bumgardner, Waldemar W. Koczkodaj, Witold Lipski, Joseph Oldham, Inna Pivkina, Michał Sobolewski, Paweł Traczyk, and Zygmunt Vetulani. All these individuals have worked in various institutions of higher education in Canada, France, Poland, and the United States. Mathematics He investigated a number of areas in the foundations of mathematics, for instance infinitary combinatorics (large cardinals), the metamathematics of set theory, the hierarchy of constructible sets, models of second-order arithmetic, and the impredicative theory of Kelley–Morse classes. He proved that the so-called Fraïssé conjecture (that the second-order theories of denumerable ordinals are all different) is entailed by Gödel's axiom of constructibility. Together with Marian Srebrny, he investigated properties of gaps in the constructible universe. Computer science He studied the logical foundations of computer science. In the early 1970s, in collaboration with Zdzislaw Pawlak, he investigated Pawlak's information storage and retrieval systems, which were then a widely studied concept, especially in Eastern Europe. These systems were, essentially, single-table relational databases, but unlike Codd's relational databases they were bags rather than sets of records. These investigations, in turn, led Pawlak to the concept of rough set, studied by Marek and Pawlak in 1981. The concept of rough set turned out to be an expressive language for describing, and especially manipulating, incomplete information in computer science, statistics, topology, universal algebra, combinatorics, and modal logic.
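The core of the rough-set idea can be pictured with a small sketch. The following Python fragment is not Pawlak's or Marek's own formulation, only a minimal illustration under invented example data: objects that share the same attribute values are treated as indiscernible, and a target concept is then bracketed by a lower approximation (objects certainly in the concept) and an upper approximation (objects possibly in it).

from collections import defaultdict

def approximations(records, attributes, target):
    """Compute rough-set lower and upper approximations of the target set.

    records    -- dict mapping object id -> dict of attribute values
    attributes -- attributes defining the indiscernibility relation
    target     -- set of object ids describing the concept of interest
    """
    # Group objects that are indiscernible on the chosen attributes.
    classes = defaultdict(set)
    for obj, values in records.items():
        classes[tuple(values[a] for a in attributes)].add(obj)

    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:      # wholly inside: certainly in the concept
            lower |= eq_class
        if eq_class & target:       # overlaps: possibly in the concept
            upper |= eq_class
    return lower, upper

# Invented toy data: patients described by two symptoms.
patients = {
    "p1": {"fever": "high", "cough": "yes"},
    "p2": {"fever": "high", "cough": "yes"},
    "p3": {"fever": "none", "cough": "yes"},
    "p4": {"fever": "none", "cough": "no"},
}
low, up = approximations(patients, ["fever", "cough"], {"p1", "p3"})
print(sorted(low), sorted(up))   # ['p3'] ['p1', 'p2', 'p3']

The boundary region (objects in the upper but not the lower approximation) is the part of the data that the chosen attributes cannot classify, which is what makes the formalism useful for reasoning about incomplete information.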
Logic In the area of nonmonotonic logics, a group of logics related to artificial intelligence, he focused on investigations of Reiter's Default Logic and the autoepistemic logic of R. Moore. These investigations led to a form of logic programming called Answer Set Programming, a computational knowledge representation formalism studied both in Europe and in the United States. Together with Mirosław Truszczyński, he proved that the problem of existence of stable models of logic programs is NP-complete. In a stronger formalism admitting function symbols, along with Nerode and Remmel he showed that the analogous problem is Σ¹₁-complete. Publications V. W. Marek is the author of over 180 scientific papers in the area of the foundations of mathematics and of computer science. He was also an editor of numerous proceedings of scientific meetings. Additionally, he authored or coauthored several books. These include: Logika i Podstawy Matematyki w Zadaniach (jointly with J. Onyszkiewicz) Logic and Foundations of Mathematics in Problems (jointly with J. Onyszkiewicz) Analiza Kombinatoryczna (jointly with W. Lipski), Nonmonotonic Logic – Context-dependent Reasoning (jointly with M. Truszczyński), Introduction to Mathematics of Satisfiability. References External links Personal page of Dr. V.W. Marek at the University of Kentucky Papers online Slides and other scientific materials 1943 births Polish mathematicians Polish computer scientists University of Warsaw alumni University of Kentucky faculty University of California, San Diego faculty Living people
563196
https://en.wikipedia.org/wiki/Star%20Wars%3A%20Knights%20of%20the%20Old%20Republic%20%28video%20game%29
Star Wars: Knights of the Old Republic (video game)
Star Wars: Knights of the Old Republic (often abbreviated KOTOR or KotOR) is a role-playing video game set in the Star Wars universe. Developed by BioWare and published by LucasArts, the game was released for the Xbox on July 15, 2003, and for Microsoft Windows on November 18, 2003. The game was later ported to Mac OS X, iOS, and Android by Aspyr, and it is playable on the Xbox 360 and Xbox One via their respective backward compatibility features. A Nintendo Switch version was released on November 11, 2021. The story of Knights of the Old Republic takes place almost 4,000 years before the formation of the Galactic Empire, in which Darth Malak, a Dark Lord of the Sith, has unleashed a Sith armada against the Galactic Republic. The player character, as a Jedi, must venture to different planets in the galaxy to defeat Malak. Players choose from three character classes (Scout, Soldier or Scoundrel) and customize their characters at the beginning of the game, and engage in round-based combat against enemies. Through interacting with other characters and making plot decisions, players can earn Light Side and Dark Side Points, and the alignment system will determine whether the player's character aligns with the light or dark side of the Force. The game was directed by Casey Hudson, designed by James Ohlen, and written by Drew Karpyshyn. LucasArts proposed developing a game tied to Star Wars: Episode II – Attack of the Clones, or a game set thousands of years before the prequels. The team chose the latter as they thought that they would have more creative freedom. Ed Asner, Ethan Phillips, and Jennifer Hale were hired to perform voices for the game's characters, while Jeremy Soule composed the soundtrack. Announced in 2000, the game was delayed several times before its release in July 2003. The game received critical acclaim upon release, with critics applauding the game's characters, story, and sound. It was nominated for numerous awards and is often cited as one of the greatest video games ever made. A sequel, Star Wars: Knights of the Old Republic II – The Sith Lords, developed by Obsidian Entertainment at BioWare's suggestion, was released in 2004. The series' story continued with the 2011 release of Star Wars: The Old Republic, a massively multiplayer online role-playing game developed by BioWare. In September 2021, a remake of the game was announced to be in development by Aspyr for Microsoft Windows and PlayStation 5. Gameplay Players choose from three basic character classes (Scout, Soldier or Scoundrel) at the beginning of the game and later choose from three Jedi subclasses (Guardian, Sentinel or Consular). Beyond class, a character has "skills" stats, tiered "feats," and later on, tiered Force powers, similar to magic spells in fantasy games. Feats and Force powers are generally unlocked upon level-up, while the player is given skill points to distribute among their skills every level. Combat is round-based; time is divided into discrete rounds, and combatants attack and react simultaneously, although these actions are presented sequentially on-screen. The number of actions a combatant may perform each round is limited. While each round's duration is a fixed short interval of real time, the player can configure the combat system to pause at specific events or the end of each round, or set the combat system to never automatically pause, giving the illusion of real-time combat. Combat actions are calculated using Dungeons & Dragons rules, particularly the d20 System.
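The exact formulas the game uses are not shown during normal play, so the following Python sketch is only a generic illustration of d20-style attack resolution rather than the game's actual implementation; the bonus and armor class values are invented for the example.

import random

def attack_roll(attack_bonus, defender_ac, rng=random):
    """Resolve one d20-style attack: roll 1d20, add modifiers, compare to armor class."""
    die = rng.randint(1, 20)          # the d20 roll itself
    total = die + attack_bonus        # attacker's accumulated modifiers
    # A natural 20 hits and a natural 1 misses under the usual d20 convention.
    hit = die == 20 or (die != 1 and total >= defender_ac)
    return hit, die, total

# Invented example: a Soldier with a +5 attack bonus against armor class 15.
hit, die, total = attack_roll(attack_bonus=5, defender_ac=15)
print(f"rolled {die}, total {total}: {'hit' if hit else 'miss'}")

A full round would repeat a check of this kind for each queued action of every combatant, matching the description above of simultaneous rounds whose results are then shown sequentially.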
While these are not displayed directly on the screen, the full breakdown for each action (including die rolls and modifiers) is accessible from a menu. For much of the game, the player can have up to two companions in their party. These companions will participate in combat. They can be manually controlled by the player, or act autonomously if the player does not give them any input. Outside of combat, the companions will randomly engage the player or each other in dialogue, sometimes unlocking additional quests. They will also participate in conversations the player has with other non-player characters. Non-combat interaction with other characters in the game world is based upon a dialogue menu system. Following each statement, the player can select from a list of menu responses. The dialogue varies based on the gender and skills of the main character. The alignment system tracks actions and speech—from simple word choices to major plot decisions—to determine whether the player's character aligns with the light or dark side of the Force. Generosity and altruism lead to the light side, while selfish or violent actions will lead the player's character to the dark side, which will alter the character's appearance, turning their eyes yellow and their skin pale. In addition to the standard role-playing gameplay, there are several minigame events that come up over the course of the game. The player can participate in swoop racing to earn money, and sometimes interplanetary travel will be interrupted by enemy starfighters, which begins a minigame where the player controls a turret to shoot down the opposing starcraft. The player can also engage in a card game known as pazaak, which is similar to the game of blackjack, to gamble money. Synopsis Setting Knights of the Old Republic takes place approximately 4,000 years before the rise of the Galactic Empire, and covers the era following the conclusion of the Star Wars: Tales of the Jedi comics, during the early years of the Galactic Republic. The backstory of the game involves the Mandalorian warrior society invading the Republic in a pan-galactic conflict known as the Mandalorian Wars. The Jedi were hesitant to get involved, but a pair of renegade Jedi Knights, Revan and Malak, insist on leading a Republic force to war. After winning the war against the Mandalorians, Revan and Malak disappeared into the Unknown Regions, returning a year later with a Sith armada and launching an invasion against the Republic themselves. Malak, Revan's apprentice, eventually succeeded his former master as Dark Lord of the Sith after Revan was seemingly killed in an ambush by the Jedi. Malak's aggression has left the Jedi scattered and vulnerable; many Jedi Knights have fallen in battle, and others have sworn allegiance to Malak. Playable locations in Knights of the Old Republic include the planets Tatooine, Dantooine, Kashyyyk, Korriban, Manaan, Rakata Prime, and Taris; aboard the Republic cruiser Endar Spire and Saul Karath's ship Leviathan; and on the Star Forge space station. A space station near Yavin is accessible to players in the PC, Mac OS X, and mobile versions of the game and is available to Xbox players via download from Xbox Live. Travel between these locations happens aboard the freighter Ebon Hawk, which is also a playable location. 
Characters and locations Joining the player character's quest are veteran Republic pilot Carth Onasi, the Twi'lek teenager Mission Vao and her Wookiee companion Zaalbar, the Jedi Bastila Shan, 'Grey' Jedi Jolee Bindo, utility droid T3-M4, Mandalorian mercenary Canderous Ordo, and assassin droid HK-47 if he is bought. Juhani, another Jedi, may also join the party if she is spared by the player. Republic soldier Trask Ulgo is also briefly playable during the game's opening sequence on the Endar Spire. The game's main antagonist is Darth Malak, the Dark Lord of the Sith. Other antagonistic characters include Black Vulkar gang leader Brejik, crime boss Davik Kang, bounty hunter Calo Nord, Zaalbar's brother Chuundar, Sith apprentice Darth Bandon, Admiral Saul Karath, Sith Overseer Uthar Wynn, and Rakatan tribe leader The One. Czerka Corporation, an unscrupulous corporation operating on several planets, is an ally of Darth Malak's Sith forces. Supporting characters who assist the player's party in some capacity are Hidden Bek gang leader Gadon Thek, Jedi Masters Vandar Tokare and Zhar Lestin, game hunter Komad Fortuna, Zaalbar and Chuundar's father Freyyr, Uthar's Sith apprentice Yuthura Ban, Republic representative Roland Wann, the Rakatan tribe "The Elders," and Republic Admiral Forn Dodonna. Plot The game opens with the player's character—the player can choose a face and be male or female (canonically male)—awakening aboard a Republic ship, the Endar Spire, which is under attack by Malak's forces over the city world of Taris. Republic soldier Trask Ulgo soon arrives and informs the player character that they are under attack. Fighting their way to the escape pods, Trask and the player character are soon confronted by Sith Lord Darth Bandon. With no other options, Trask sacrifices himself while the player continues to make their way to the escape pods. The player character meets Carth Onasi, a skilled pilot and Republic war hero, and they escape the doomed warship. Crashing on the surface of Taris, the player character is knocked unconscious, and Carth pulls them away from the wreckage. After suffering a strange vision, the player character awakens in an abandoned apartment with Carth, who explains that Taris is under martial law by Malak's forces who are searching for the Jedi Knight Bastila Shan, known for her mastery of battle meditation, a Force technique that strengthens one's allies and weakens one's enemies during battle. Carth and the player character search for her and meet new companions along the way, such as the Twi'lek street urchin Mission Vao and her Wookiee companion Zaalbar. The group finds and rescues Bastila from the Black Vulkar gang. With the help of utility droid T3-M4 and Mandalorian mercenary Canderous Ordo, the group escapes Taris aboard the star freighter Ebon Hawk, moments before Malak's fleet decimates the planet's surface in a vain effort to kill Bastila. While taking refuge at the Jedi Academy on Dantooine, the player's character trains to be a Jedi, discovers a "Star Map," and learns of the "Star Forge," the probable source of Malak's military resources. The player's character and their companions search planets across the galaxy—Dantooine, Manaan, Tatooine, Kashyyyk, and Korriban—for more information about the Star Forge, gaining new companions along the way such as the Cathar Jedi Juhani, assassin droid HK-47, and 'Grey' Jedi Jolee Bindo. 
After discovering three more Star Maps, the player's party is captured by Darth Malak and brought aboard his flagship, where Malak reveals that the player character is in truth an amnesiac Darth Revan; the Jedi Council wiped their memories after their presumed death at Malak's hands in the hopes that Bastila could lead them to the Star Forge through her bond with them. Bastila sacrifices herself so the player can escape, and is subsequently turned to the dark side by Malak. On the light side route, the player kills or redeems Bastila, defeats Malak, destroys the Star Forge, and is hailed as a saviour and hero. On the dark side route, the player allies with Bastila, overthrows and kills Malak, takes control of the Star Forge for themselves, and reclaims their title as Dark Lord of the Sith. Production Development In July 2000, BioWare announced that they were working with LucasArts to create a Star Wars role-playing video game for the PC and next-generation consoles. Joint BioWare CEO Greg Zeschuk commented that "The opportunity to create a richly detailed new chapter in the Star Wars universe is incredibly exciting for us. We are honored to be working with the extremely talented folks at Lucas Arts, developing a role-playing game based upon one of the most high-profile licenses in the world." The game was officially unveiled as Star Wars: Knights of the Old Republic at E3 2001. At this point, the game had been in development for around six months. "Preproduction started in 2000, but the discussions started back in 1999," LucasArts' Mike Gallo said, "The first actual e-mails were in October or November of '99. That's when we first started talking to BioWare. But some really serious work finally started at the beginning of 2000." The decision to set the game four thousand years before Star Wars: Episode I – The Phantom Menace was one of the first details about the game made known. LucasArts gave BioWare a choice of settings for the game. "LucasArts came to us and said that we could do an Episode II game," BioWare CEO Raymond Muzyka said. "Or LucasArts said we could go 4,000 years back, which is a period that's hardly been covered before." BioWare chose to set the game four thousand years before the films as it gave them greater creative freedom. They aimed to create content similar to that from the films but different enough to be a definite precursor. Concept work had to be sent to "the ranch" to be approved for use. Muzyka noted that very little of their content was rejected: "It was more like, 'Can you just make his head like this rather than like that.' So it was all very feasible. There were good suggestions made and they made the game better, so we were happy to do them. It was a good process really and I think we were pleasantly surprised how easy LucasArts was to work with." Zeschuk said that "Overall, we were really happy with the results. We felt like we had enough freedom to truly create something wonderful." Gallo said that BioWare and LucasArts were aiming for a gameplay time of around sixty hours: "Baldur's Gate was 100 hours of gameplay or more. Baldur's Gate 2 was 200 hours, and the critical-path play through Baldur's Gate 2 was 75 hours... We're talking smaller than that [for Knights of the Old Republic], dramatically, but even if it's 60 percent smaller, then it's still 100 hours. So our goal for gameplay time is 60 hours. We have so many areas that we're building--worlds, spaceships, things like that to explore--so we have a ton of gameplay." 
Project director Casey Hudson said that one of the greatest achievements and one of the greatest risks was the combat system. "We wanted to create something that combined the strategic aspects of our Baldur's Gate series and Neverwinter Nights but which presented it through fast, cinematic 3D action," Hudson said. "That required us to make something that hadn't really been done before." Creating the system was a daunting task, because of the many factors to cover, which were difficult to visualize. The developers intended to make the game have more open-ended gameplay. Gallo compared some situations to Deus Ex: "You have several ways to get through an area and you might need a character who has a specific skill to do that." Technical LucasArts and BioWare settled on developing Knights of the Old Republic for the PC and Xbox. The Xbox was chosen over other consoles because of BioWare's background of developing PC games and greater familiarity with the Xbox than other consoles: "We could do the things we wanted to do on the Xbox without as much effort as we'd need to do it on the PS2 or GameCube," Gallo said. Other factors included the console's recent success and the opportunity to release one of the Xbox's first RPGs. BioWare had previously developed MDK2 for the Dreamcast and PlayStation 2. Hudson said that "Having experience in developing for other consoles gave us the proper mindset for implementing this game on the Xbox, and, by comparison, the Xbox was relatively easy to develop for." Hudson did, however, note that there were some challenges during development. One of the difficulties was in deciding how much graphical detail to provide. "Since our games generally have a lot of AI and scripting, numerous character models, and huge environments, we stress the hardware in a very different way than most games," Hudson said. This made it difficult to predict how well the game would run. The game uses the Odyssey Engine, based on the Aurora Engine (previously developed by BioWare for use in Neverwinter Nights) but completely rewritten for Knights of the Old Republic. It was highly detailed for its time: grass waves in the wind, dust blows across Tatooine and puffs of sand rise as the player walks across the seabed. The choreography for the character animations was done using 3DS Max. Hudson noted that the differences between consoles and PCs mean that the graphics would have to be modified. "You typically play console games on a TV across the room while PC games are played on a monitor only inches away." Console games put effort into close-up action and overall render quality; PC games emphasize what can be done with high resolutions and super-sharp textures. Hudson also noted that the difference between a game controller and mouse-and-keyboard setup influenced some design decisions. The PC version features an additional location the player can visit and more NPCs, items, and weapons; these additions were later made available on the Xbox version through Xbox Live. The PC version supports higher display resolutions (up to 1600x1200) and has higher-resolution textures. Sound While the main game, graphics engine and story were developed by BioWare, LucasArts worked on the game's audio. Knights of the Old Republic contains three hundred different characters and fifteen thousand lines of speech. "One complete copy of the Knights of the Old Republic script fills up 10 5-inch binders," voice department manager Darragh O'Farrell noted. 
A cast of around a hundred voice actors, including Ed Asner, Raphael Sbarge, Ethan Phillips, Jennifer Hale, and Phil LaMarr was assembled. "Fortunately, with a game this size, it's easy to have an actor play a few different characters and scatter those parts throughout the game so you'll never notice it's the same actor you heard earlier," O'Farrell said. Voice production started six months before the game's beta release. The voice production team were given the script 90% complete to work with. "There were a few changes made during recording, but most of the remaining 10 percent will be dealt with in our pickup session," O'Farrell said, "The pickup session is right at the end of the project, where we catch performance issues, tutorial lines, verbal hints, and anything else that we might have overlooked." A game the size of Knights of the Old Republic would typically take seven weeks to record; two weeks of recording all-day and all-night meant LucasArts was able to record all voices in five weeks. Actors were recorded one at a time, as the non-linear nature of the game meant it was too complicated and expensive to record more than one actor at a time. Most of the dialogue recorded was spoken in Galactic Basic (represented by English); however, around a tenth of the script was written in Huttese. Mike Gallo used Ben Burtt's Star Wars: Galactic Phrase Book & Travel Guide to translate English into Huttese. "The key to recording alien dialogue is casting the right actor for the part," O'Farrell said, "Over the years I've had actors take to Huttese like a fish to water, but the opposite is also true. In the past I've had to line-read (when an actor copies my performance) 150-plus Huttese lines to an actor in order to make it work." Award-winning composer Jeremy Soule was signed to compose the game's score. "It will be a Star Wars score, but it will all be original, and probably the things that will remain will be the Force themes and things like that," Gallo said. Soule was unable to write a full orchestral score for Knights of the Old Republic due to technical limitations: "At the time we only had an 8 megabit per second MIDI system. That was state of the art... I had to fool people into thinking they were hearing a full orchestra. I'd write woodwinds and drums, or woodwinds, horns and drums, or strings and drums and brass. I couldn't run the whole orchestra at once, it was impossible." Release When announced at E3 2001, Knights of the Old Republic was initially scheduled for a late 2002 release. In August 2002 it was announced on the game's forums that its release had been delayed: the Xbox version was to be released in spring 2003 and the PC version in summer 2003. A further delay was announced in January 2003, with both versions of the game expected to be released in fall 2003. Zeschuk attributed the delay to BioWare's focus on quality: "Our goal is to always deliver a top-notch gameplay experience, and sometimes it can be very difficult to excel in all areas. We keep working on tackling each individual issue until we feel we've accomplished something special." The Xbox version of Knights of the Old Republic went gold on July 9, 2003, with a release date of July 15. It sold 250,000 copies in the first four days of its release, making Knights of the Old Republic the fastest-selling Xbox title at the time of its release. Following the game's release, it was announced that free downloadable content would be available through Xbox Live at the end of the year. 
The PC version of the game went gold on November 11, 2003, and was released on November 18. It was re-released as part of the Star Wars: The Best of PC collection in 2006. The game was ported to Mac OS X by Aspyr and released in North America on September 7, 2004, and re-released digitally on Steam on May 14, 2012, for Mac OS X and PC. The game was released for the iPad on May 30, 2013. The iPad version includes the Yavin Station DLC that was previously released for Xbox and PC. The game was released as DRM-free download on GoG.com in October 2014. The game was also launched on Android's Google Play Store on December 22, 2014. In October 2017, Microsoft made the Xbox One console backward compatible with the Xbox version of the game, as part of a 13-game curated catalogue. Remake In September 2021, Knights of the Old Republic — Remake, a graphically-updated remake of the original game was announced to be in development by Aspyr for Microsoft Windows and PlayStation 5. It will be a timed console exclusive for PlayStation 5 before releasing on other platforms. On the remake's development, lead producer Ryan Treadwell wrote, "We’re rebuilding it from the ground up with the latest tech to match the groundbreaking standard of innovation established by the original, all while staying true to its revered story". Several individuals who worked on the original game are returning for the remake, such as former BioWare developers and Jennifer Hale (reprising her role of Bastila). However, Tom Kane will not be returning due to his retirement for medical reasons. John Cygan and Ed Asner will also not return due to their deaths in 2017 and 2021 respectively. There has been speculation that the remake will actually include parts of both KOTOR 1 and 2, and also establish some of the lore from those games in the official Star Wars canon, something Disney seems keen on as it spins Star Wars off into new books, TV shows, movies, and games. Reception Sales After its release on July 15, 2003, the first Xbox shipment of Knights of the Old Republic sold out within four days on shelves, which amounted to 250,000 sales during that period. This made it the console's fastest-ever seller at the time of its launch. The game ultimately sold 270,000 copies in its initial two weeks and was ranked by The NPD Group as the #2 best-selling console game of its debut month across all platforms. It fell to the 8th position on NPD's sales chart for August and was absent by September. Worldwide sales reached 600,000 copies by October. In the United States alone, the Xbox version of Knights of the Old Republic sold 1.3 million copies and earned $44 million by July 2006. It also received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association, indicating sales of at least 100,000 copies in the United Kingdom. Following its launch in November 2003, the computer version of Knights became the third-best-selling computer title of its debut week, according to NPD. Although it dropped out of NPD's weekly top 10 by its third week, it claimed sixth place in computer game sales for November overall, and ninth for December. It returned to the weekly top 10 during the December 28 – January 3 period but was absent again on the next week's chart. NPD ultimately declared it the 17th-best-selling computer game of 2004. By August 2006, the computer version had sold 470,000 copies and earned $14.7 million in the United States alone. 
Edge ranked it as the country's 32nd-best-selling computer game released between January 2000 and August 2006. Total sales of the game's Xbox and computer releases surpassed 2 million copies by February 2005 and 2.5 million by May and reached nearly 3 million by March 2006. As of 2007, Knights of the Old Republic had sold 3.2 million units. Critical reviews Star Wars: Knights of the Old Republic received "universal acclaim" according to review aggregator Metacritic, and won numerous awards, including Game Developers Choice Awards' 2004 game of the year, BAFTA Games Awards' best Xbox game of the year, and Interactive Achievement Awards for best console RPG and best computer RPG. GameSpot named it the best Xbox game of July 2003, and best computer game of November 2003. Knights of the Old Republic has seen success as the game of the year from many sources including IGN, Computer Gaming World, PC Gamer, GMR, The Game Developers Choice Awards, Xbox Magazine, and G4. Computer Games Magazine named it the best computer game of 2003, and presented it with awards for "Best Original Music" and "Best Writing." The editors wrote, "The elegance and accessibility that BioWare made part-and-parcel of this game should be the future standard for this genre." According to the review aggregator Metacritic, the PC version received an average score of 93 based on 33 reviews. In total, the game has won over 40 game of the year awards from various publications. Interactive Achievement Awards awarded it for Best Story and Best Character Development. IGN gave KotOR additional awards in Best Sound (Xbox category), Best Story (PC category), Xbox RPG of the Year 2003, PC RPG of the Year 2003, Xbox Game of the Year 2003, PC Game of the Year 2003, and Overall Game of the Year 2003 across all platforms. In 2007, IGN listed it at #27 on its list of the Top 100 Games of All-Time. In 2010, IGN placed the game at #3 on its Best games of the Decade (2000–2009), beaten by Shadow of the Colossus and Half-Life 2. At the 2004 Game Developers Choice Awards, HK-47 won the category of "Original Game Character of the Year." Legacy In 2007, the plot twist in KotOR was ranked number two in Game Informer list of the top ten video game plot twists of all time and number 10 on ScrewAttack's "Top 10 OMGWTF Moments." The game is also part of The Xbox Platinum Series/Classics for sales in excess of 1 million units. The Los Angeles Times listed Knights of the Old Republic as one of the most influential works of the Star Wars Expanded Universe. In 2010, Game Informer named the game the 54th best game on their Top 200 Games of All Time list. In November 2012, Time named it one of the 100 greatest video games of all time. In early 2017, plot elements from the game were referenced in the animated TV series Star Wars Rebels such as the Mandalorian Wars and the ancient Sith planet Malachor. Additionally, Darth Revan was set to appear in the "Ghost of Mortis" arc in Star Wars: The Clone Wars; while this was cut, the deleted scene of Darth Revan was later released. In 2019, Kathleen Kennedy stated that Lucasfilm was looking into developing movies or television series in the Knights of the Old Republic era, but that no plans had yet been made. BuzzFeed News reports that Laeta Kalogridis will write a Star Wars movie that's based on the Knights of the Old Republic video game series. 
Star Wars: The Rise of Skywalker - The Visual Dictionary, a book guide to the 2019 film Star Wars: The Rise of Skywalker, contained a reference in which one of the legions of Sith troopers of Palpatine's Final Order is named after Darth Revan. See also List of Star Wars video games References Further reading External links at Star Wars: Knights of the Old Republic at BioWare 2003 video games Android (operating system) games Aspyr games BioWare games Interactive Achievement Award winners IOS games LucasArts games MacOS games Nintendo Switch games Role-playing video games Space opera video games Knights of the Old Republic Video game prequels Video games scored by Jeremy Soule Video games developed in Canada Video games featuring protagonists of selectable gender Video games with alternate endings Windows games Xbox games
39442575
https://en.wikipedia.org/wiki/SecureDrop
SecureDrop
SecureDrop is a free software platform for secure communication between journalists and sources (whistleblowers). It was originally designed and developed by Aaron Swartz and Kevin Poulsen under the name DeadDrop. James Dolan also co-created the software. History After Aaron Swartz's death, the first instance of the platform was launched under the name Strongbox by staff at The New Yorker on 15 May 2013. The Freedom of the Press Foundation took over development of DeadDrop under the name SecureDrop, and has since assisted with its installation at several news organizations, including ProPublica, The Guardian, The Intercept, and The Washington Post. Security SecureDrop uses the anonymity network Tor to facilitate communication between whistleblowers, journalists, and news organizations. SecureDrop sites are therefore only accessible as onion services in the Tor network. After a user visits a SecureDrop website, they are given a randomly generated code name. This code name is used to send information to a particular author or editor via uploading. Investigative journalists can contact the whistleblower via SecureDrop messaging. Therefore, the whistleblower must take note of their random code name. The system utilizes private, segregated servers that are in the possession of the news organization. Journalists use two USB flash drives and two personal computers to access SecureDrop data. The first personal computer accesses SecureDrop via the Tor network, and the journalist uses the first flash drive to download encrypted data from the secure drop server. The second personal computer does not connect to the Internet, and is wiped during each reboot. The second flash drive contains a decryption code. The first and second flash drives are inserted into the second personal computer, and the material becomes available to the journalist. The personal computer is shut down after each use. Freedom of the Press Foundation has stated it will have the SecureDrop code and security environment audited by an independent third party before every major version release and then publish the results. The first audit was conducted by University of Washington security researchers and Bruce Schneier. The second audit was conducted by Cure53, a German security firm. SecureDrop suggests sources disabling JavaScript to protect anonymity. Prominent organizations using SecureDrop The Freedom of the Press Foundation now maintains an official directory of SecureDrop instances. This is a partial list of instances at prominent news organizations. Awards 2016: Free Software Foundation, Free Software Award, Award for Projects of Social Benefit See also GlobaLeaks WikiLeaks References External links SecureDrop at Freedom of the Press Foundation Sources (journalism) Free content management systems Free software 2013 software Whistleblowing Tor onion services Software using the GNU AGPL license
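As the Security section above notes, a source is identified only by a randomly generated code name rather than an account. The Python sketch below is not SecureDrop's actual code; it is a minimal illustration, under invented assumptions, of the general technique of drawing several words from a word list with a cryptographically secure generator, and the short word list here is a placeholder.

import secrets

# Placeholder word list; a real system would use a much larger one.
WORDS = ["ocean", "lantern", "quartz", "meadow", "falcon", "ember",
         "harbor", "violet", "summit", "nickel", "prairie", "copper"]

def generate_codename(word_count=7):
    """Return a multi-word code name chosen with a cryptographically secure RNG."""
    return " ".join(secrets.choice(WORDS) for _ in range(word_count))

print(generate_codename())  # e.g. "falcon quartz ember violet nickel ocean summit"

Because the code name is the source's only credential, the strength of such a scheme rests on the size of the word list and on using a secure random source rather than an ordinary pseudorandom generator.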
23399489
https://en.wikipedia.org/wiki/H%2ACommerce
H*Commerce
H*Commerce: The Business of Hacking You is a six-part online documentary film series directed by Seth Gordon. It centers on the struggle between criminal hackers and security experts. Each segment is between five and eight minutes in length. The first was released on the Internet on May 20, 2009. "H*Commerce" stands for "hacker commerce", specifically "the business of making money through the illegal use of technology to compromise personal and business data," according to McAfee. Synopsis The main narrative follows the story of Sweet Home, Oregon resident Janella Spears, who falls victim to an elaborate Nigerian 419 scam. Initiated by e-mail, the scammers promise Spears and her family the return of a dead relative's lost fortune. Interested in her family's genealogy and persuaded by a death certificate purported to be her relative's, Spears eventually loses US$440,000. The series also features Denver, Colorado-based computer forensics expert Chris Roberts, who provides both counseling and technical support to the Spears family as they deal with the emotional challenges brought about by their mounting debt, as well as the resultant anger and distrust. Roberts also demonstrates how an open wireless network can provide a criminal with the data necessary to commit identity fraud, among other Internet crimes, and how he can surreptitiously access Bluetooth-enabled devices in surrounding cars on a crowded freeway. Themes The series focuses on the growing threat of cybercrime and explores the evolution of computer hacking from phone phreaking to a multibillion-dollar criminal industry that targets individuals with tools such as botnets and DDOS attacks. The series also explores other hacking-related topics such as malware, site scraping and spear phishing. It also focuses on online safety education and safe shopping. Apple cofounder Steve Wozniak makes an appearance as well to talk about his phone phreaking days, as does phreaking pioneer John "Captain Crunch" Draper. In the film, Wozniak explains that hackers and phreakers originally maintained a strong ethic, using their techniques not to "rip people off or make money" but "to explore". Draper observes that the imprisonment of hackers had unintended side-effects: "Once I got busted, I had to tell everybody else in jail how to do it and then the cat's out of the bag. The next thing you know, the mob has this technology." According to McAfee, Americans lost a total of almost $8.5 billion due to Internet-enabled crimes between 2007 and 2008. Production H*Commerce was produced by antivirus software maker McAfee as part of a campaign to educate consumers about online security threats. Gordon, best known for Four Christmases and The King of Kong: A Fistful of Quarters, was hired to direct. The series was shot in HD. The project was originally conceived as a series of standalone episodes, with each highlighting a different aspect of online crimes, including phishing, bank scraping, and e-mail scams. However, during pre-production research, Gordon identified Spears as an example of how typical Internet users are vulnerable to criminals, and decided to make her the focus of the entire series. Millam stated: We produced the Web-based film to make cybercrime real for people, and to help consumers understand why they need to take precautions. The days when hackers were a small group of thrill-seekers breaking into computers to gain fame and notoriety are behind us. Now, hacking for profit, or what we call H*Commerce, is a global industry. 
Distribution and marketing Promotion for the film, and the site on which it appears, was conducted primarily online through banner ads and trailers as well as a social media and mobile campaign. In New York and San Francisco, McAfee conducted a poster campaign featuring artwork similar to those created by Saul Bass for Alfred Hitchcock by San Francisco Graphic Designer, Nicole Flores. The film's website also offers additional resources to learn about cybercrime and online safety, as well as a McAfee's Cybercrime Response Unit, a free service for consumers who believe they have been the victim of a cybercrime. References External links Stop H*Commerce official site McAfee English-language films Documentary films about organized crime Documentary films about the Internet Sweet Home, Oregon Films set in Oregon Cybercrime
28778865
https://en.wikipedia.org/wiki/Harald%20Gutzelnig
Harald Gutzelnig
Harald Gutzelnig (born 18 June 1956) is an Austrian editor, managing director, non-fictional author and software programmer. He lives in Perg. Gutzelnig and his wife Marianne founded the CDA Verlag. Life After having finished his education in Pedagogy at the Pädagogische Hochschule (1976–1979) Gutzelnig was working as teacher at a Hauptschule in Upper Austria until 1993. At the end of the 1980s he began programming an educational software for touch typing, named PC-Tipp-Trainer, and published it with a manual. Later this software was redesigned and renamed PC-Schreib and TippTop (four versions from 1992 to 1999, published by Data Becker). After that he wrote a course for programming Turbo Pascal and several other books for beginners in electronic data processing. In 1995 he and his wife Marianne Gutzelnig founded the CDA Verlag in Perg, Upper Austria, which publishes and distributes several computer magazines like CD Austria in Austria as well as PC News and PC User in the German-speaking countries Germany, Switzerland and Luxembourg. Works Turbo-Pascal, Eine runde Einführung in die Kunst des professionellen Programmierens, Wolfram's Fachverlag, 1. Auflage Version 5.5, 3-925328-02-5, Attenkirchen 1990, 2. Auflage Version 6.0, Attenkirchen 1991, (programming course introduction to Turbo Pascal for professional programming) Turbo Pascal Tools, Featuring Turbo enhancement toolkit, Version 1, Systhema-Verlag, München 1989, 3-89390-134-5, Version 2, Systhema-Verlag, München 1990, As easy as, Das Superkalkulationsprogramm, Systhema-Verlag, München 1991, Etikettenstar, Systhema-Verlag, München 1991, (label star, software for creating labels) Drucker-Utilities, Schöne Schriften für Nadeldrucker, Systhema Verlag, München 1991, (Printing utilities) Internationale Shareware-Hits, Systhema-Verlag, München 1991, (overview of international shareware) Harry's Spass am Lernen, Mathe-, Deutsch und Vokabeltrainer, Software, Systhema-Verlag, München 1992, Harry's fun with learning (mathematics German and vocabulary educational software) PC-Tipp-Trainer, Software, Systhema-Verlag, 1. Auflage München 1991, , 2. Auflage München 1992, Handbuch zu PC-Schreib (manual for PC-Schreib, educational software for touch typing), Verlag Rossipaul, München 1993, Turbo-Pascal, Ein Programmierkurs für Einsteiger, München 1992, Deutscher Taschenbuchverlag (dtv) und BeckISBN 3-406-36807-7 ( Turbo Pascal programming course for beginners) Internationale Shareware, Marktübersicht und Leitfaden, München 1992, Deutscher Taschenbuchverlag (dtv) 3-423-50116-2 und Beck TippTop 4.0, Medienkombination (Buch und Software-CD-ROM), Spielend einfach tippen lernen, 24 Lektionen und 26 Crash-Kurse, mit neuer deutscher Rechtschreibung, neues Kurssystem mit Fortschrittskontrolle, Data Becker, Düsseldorf 1999, (Vorgängerversionen ab 1992) Peter Weibel (introduction), Harald Gutzelnig (author), M Derbort (author), J Entenebner (author), H Gutzelnig (author), Clemens Hüffel (editor), Anton Reiter (author and editor), Alexander Feiglstorfer (Illustrator), Handbuch neue Medien, CDA Verlag, Perg, 1. Auflage 2006 , 2. 
Auflage Literature Willing's press guide, Band 2, Western Europe, Willing Service, 2003 (English) Brinkman's alphabetische catalogus van boeken en verder in den boekhandel voorkomende artikelen, Band 2, Sijthoff, 2001 (Dutch) References External links Harald Gutzelnig's page at xing.com web site of PC User indicating his position as editor and managing director (German) web site of CD Austria indicating his position as editor and managing director (German) web site of PC News indicating his position as editor and managing director (German) Austrian non-fiction writers People from Perg District 1956 births Austrian computer programmers Magazine publishers (people) Living people Austrian magazine editors Austrian schoolteachers
44019719
https://en.wikipedia.org/wiki/Invincea
Invincea
Invincea, Inc. was a company that offered a suite of endpoint protection software products. Originally called Secure Command LLC, Invincea, Inc. was a venture-backed software company that provided malware threat detection, prevention, and analysis to stop advanced threats. It was acquired by Sophos in February 2017. History The company was founded in 2006 by Dr. Anup Ghosh and was based in Fairfax, Virginia. Major investors included Dell Ventures, New Atlantic Ventures, Grotech Ventures, Aeris Capital, and Harbert Venture Partners. In 2012, Invincea used a $21 million grant from DARPA to improve the security of the US military's Android-based devices such as tablet PCs and smartphones. The Invincea software secured data from unauthorized access and protect devices from malicious applications. In June 2013, Dell announced an OEM partnership with Invincea and began shipping new endpoint security software dubbed "Dell Data Protection | Protected Workspace" on all of its commercial tablets and PCs worldwide. Dell Data Protection included Invincea container technology to put a shield - or virtualized container around each browser or application instance to protect it from the rest of the device and the network on which it resided. In December 2013, Invincea acquired Sandboxie for an undisclosed amount. Sandboxie was a pioneer in the Windows Containment and sandboxing market, also called “container” technology, and the acquisition was made to consolidate Sandboxie and Invincea's own container solution. In May 2016, Invincea launched X by Invincea. X by Invincea was a suite of products that protected endpoints by detecting and blocking known and unknown malware without signatures in real-time. X combined deep learning, an advanced form of machine learning, behavioral analysis and the legacy Invincea container technology, also known as isolation technology, in one lightweight agent. In February 2017, Invincea was acquired by Sophos, a security software and hardware company. In August that year, the subsidiary Invincea Labs was renamed Two Six Labs. In January 2018, Sophos announced that Invincea's deep learning technology would be integrated with the Sophos Intercept X endpoint security product. On April 16, 2018, Invincea announced the end of selling the X by Invincea suite of products. The Sophos products did not integrate with the Invincea container technology. Support and maintenance remained available under existing contracts through December 31, 2019, at which point, support and maintenance for Invincea products ceased. Sophos did not include the Invincea container technology in Intercept X. For that reason, Sandboxie was released as a free tool, and Sophos released its container technology to be open source. References 2006 establishments in Virginia American companies established in 2006 Companies based in Fairfax, Virginia Computer network security Software companies established in 2006
36275180
https://en.wikipedia.org/wiki/Microsoft%20Office%202013
Microsoft Office 2013
Microsoft Office 2013 (codenamed Office 15) is a version of Microsoft Office, a productivity suite for Microsoft Windows. It is the successor to Microsoft Office 2010 and the predecessor to Microsoft Office 2016. Unlike with Office 2010, no OS X equivalent was released. Microsoft Office 2013 includes extended file format support, user interface updates and support for touch among its new features and is suitable for IA-32 and x64 systems. Office 2013 is incompatible with Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, and earlier versions of Windows. Office 2013 is compatible with Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, Windows 8.1, Windows Server 2012 R2, Windows 10, Windows Server 2016 and Windows Server 2019. A version of Office 2013 comes included on Windows RT devices. It is not supported on Windows 11 or Windows Server 2022. It is the last version of Microsoft Office to support Windows 7 below SP1 and Windows Server 2008 R2 below SP1 as the following version, Microsoft Office 2016 will only support Windows 7 SP1 or later and Windows Server 2008 R2 SP1 or later. Development on this version of Microsoft Office was started in 2010 and ended on October 11, 2012, when Microsoft Office 2013 was released to manufacturing. Microsoft released Office 2013 to general availability on January 29, 2013. This version includes new features such as integration support for online services (including OneDrive, Outlook.com, Skype, Yammer and Flickr), improved format support for Office Open XML (OOXML), OpenDocument (ODF) and Portable Document Format (PDF) and support for multi-touch interfaces. Microsoft Office 2013 comes in twelve different editions, including three editions for retail outlets, two editions for volume licensing channel, five subscription-based editions available through Microsoft Office 365 program, the web application edition known as Office Web Apps and the Office RT edition made for tablets and mobile devices. Office Web Apps are available free of charge on the web although enterprises may obtain on-premises installations for a price. Microsoft Office applications may be obtained individually; this includes Microsoft Visio, Microsoft Project and Microsoft SharePoint Designer which are not included in any of the twelve editions. On February 25, 2014, Microsoft Office 2013 Service Pack 1 (SP1) for Windows 7 was released. Mainstream support for Office 2013 ended on April 10, 2018. Extended support ends on April 11, 2023. On June 9, 2018, Microsoft announced that its forums would no longer include Office 2013 or other products in extended support among its products for discussions involving support. On August 27, 2021, Microsoft announced that Microsoft Outlook 2013 SP1 with all subsequent updates will be required to connect to Microsoft 365 Exchange servers by November 1, 2021; Outlook 2013 without SP1 will no longer be supported. Microsoft have stated that Office 2013 is not supported on Windows 11. Development Development started in 2010 while Microsoft was finishing work on Office 14, released as Microsoft Office 2010. On January 30, 2012, Microsoft released a technical preview of Office 15, build 3612.1010, to a selected group of testers bound by non-disclosure agreements. On July 16, 2012, Microsoft held a press conference to showcase Office 2013 and to release the Consumer Preview. The Office 2013 Consumer Preview is a free, fully functional version but will expire 60 days after the final product's release. 
An update was issued for the Office 2013 Customer Preview suite on October 5. Office 2013 was released to manufacturing on October 11, 2012. It was made available to TechNet and MSDN subscribers on October 24, 2012. On November 15, 2012, 60-day trial versions of Microsoft Office 2013 Professional Plus, Project Professional 2013 and Visio Professional 2013 were made available to the public over the Internet. Microsoft released Office 2013 to general availability on January 29, 2013. Microsoft released the Service Pack 1 update on February 25, 2014. Features New features Office 2013 introduces Click-to-Run 2.0 installation technology, based on Microsoft App-V version 5, for all editions. Previously, only certain editions of Office 2010 were available with Click-to-Run 1.0 installer technology, which was based on App-V 4.x; that version created a separate Q: drive and isolated the installed Office files from the rest of the system, which left many Office add-ins incompatible. With the newer Click-to-Run technology, Office 2013 installs files to the Program Files directory, just as Windows Installer (MSI) does. Retail versions of Office 2013 use the Click-to-Run installer. Volume-licensed versions use Windows Installer (MSI) technology. Some editions, such as Professional Plus, are available in both retail (Click-to-Run) and volume (MSI) channels. Office 2013 is more cloud-based than previous versions; a domain login, Office 365 account, or Microsoft account can now be used to sync Office application settings (including recent documents) between devices, and users can also save documents directly to their OneDrive account. Microsoft Office 2013 includes updated support for ISO/IEC 29500, the International Standard version of the Office Open XML (OOXML) file format; in particular, it supports saving in the "Strict" profile of ISO/IEC 29500 (Office Open XML Strict). It also supports the OASIS OpenDocument Format (ODF) 1.2, which Office 2013 can read and write. Additionally, Office 2013 provides full read, write, and edit support for ISO 32000 (PDF). New features include a new read mode in Microsoft Word, a presentation mode in Microsoft PowerPoint and improved touch and inking in all of the Office programs. Microsoft Word can also insert video and audio from online sources and broadcast documents on the Web. Word and PowerPoint also have bookmark-like features that sync the position of a document between different computers. The Office Web Apps suite was also updated for Office 2013, introducing additional editing features and interface changes. Other features of Office 2013 include: PDF Import feature in Microsoft Word Improved text wrapping and improved Track Changes feature in Microsoft Word Flash Fill in Microsoft Excel Office Remote/Microsoft PowerPoint Remote app and Office add-in to control presentations from a Windows Phone or Android phone. 
Automatic slide resizing/refit in Microsoft PowerPoint New Office Open XML-based format, VSDX for Microsoft Visio Flatter look of the Ribbon interface and subtle animations when typing or selecting (Word and Excel) A new visualization for scheduled tasks in Microsoft Outlook Remodeled start screen New graphical options in Word Objects such as images can be freely moved; they snap to boundaries such as paragraph edges, document margins or column boundaries Support for embedding online pictures with content from Office.com, Bing.com and Flickr (by default, only images in the public domain), replacing the clip art gallery of previous Office versions Ability to return to the last viewed or edited location in Word and PowerPoint New slide designs, animations and transitions in PowerPoint 2013 Support for Outlook.com and Hotmail.com in Outlook Support for integration with Skype, Yammer and SkyDrive IMAP special folders support Starting with Office 2013, proofing tools are separately and freely downloadable without being bundled in Multilingual User Interface (MUI)/Multilanguage packs, Language Interface Packs (LIPs) or Single Language Packs (SLP). Excel 2013 supports new limit models, as follows: Remarks 1 "Name", in this context, is a form of variable in Microsoft Excel Removed features The following features were removed from Microsoft Office 2013. Removed from the entire suite Microsoft SharePoint Workspace Microsoft Clip Organizer Microsoft Office Picture Manager Office 2007 and Office 2010 chart styles Ability to insert a 3D cone, pyramid, or cylinder chart (It is still possible to insert a 3D rectangle chart and change the shape after insertion.) Only a basic version of the help files is available while offline. There is no longer an option to install local help files during installation. Features removed from Microsoft Word Custom XML markup has been removed for legal reasons Older WordArt objects are now converted to new WordArt objects Word 2013 no longer uses ClearType Features removed from Microsoft Excel Simultaneous open files via Multiple Document Interface (MDI), along with requisite changes to VBA code to no longer support MDI; Excel is now Single Document Interface (SDI) only Features removed from Microsoft Access Access Data Projects (ADP) Support for Jet 3.x IISAM Access OWC control dBASE support Features removed from Microsoft Outlook Download Headers Only mode for IMAP Outlook Exchange Classic offline Microsoft Exchange Server 2003 support Public Folder Free/Busy feature (/Cleanfreebusy startup switch) Ability to import from or export to any formats other than Personal Storage Table (PST) or comma-separated values (CSV) Notes and Journal customization Outlook Activities tab Outlook Mobile Service (OMS) Outlook Search through Windows Shell Features removed from Microsoft PowerPoint Support for Visio Drawing Changes Distribution changes Unlike past versions of Office, retail copies of Office 2013 on DVD are only offered in select regions, such as those Microsoft classifies as emerging markets, as well as Australia, at the discretion of retailers. In all other regions, retail copies of Office 2013 and Office 365 subscriptions only contain a product key, and direct users to the Office website to redeem their license and download the software. Licensing changes The original license agreement for retail editions of Microsoft Office 2013 was different from the license agreements of retail editions of previous versions of Microsoft Office in two significant ways. 
The first of these was that the software could no longer be transferred to another computer. In previous versions of Office, this restriction applied only to OEM editions; retail Office license agreements allowed uninstalling from one computer to install on another computer. Digitally downloaded copies of Office were also said to be permanently locked to that PC's hardware, preventing it from being transferred to any other computing device. Should the buyer have wished to use Office 2013 on a different computer, or if they later became unable to use the computing device that the original license was downloaded to (e.g. hardware became inoperable due to malfunction) then a completely new, full-priced copy of Office 2013 would have to have been purchased to replace the prior one. Microsoft stated that this change was related to the software piracy that has been rampant for years, worldwide. However, many commentators saw this change as an effort to forcibly move its customers towards the subscription-based business model used by the Office 365 service. The legality of this move, particularly in Europe, has been questioned. However, on March 6, 2013, Microsoft announced that equivalent transfer rights to those in the Office 2010 retail license agreements are applicable to retail Office 2013 copies effective immediately. Transfer of license from one computer to another owned by the same user is now allowed every 90 days, except in the case of hardware failure, in which the license may be moved sooner. The first user of the product is now also allowed to transfer it to another user. The second difference, which remains in the updated licensing agreement, is that the software can be installed on only one computer. In previous versions of Office, this restriction also applied only to OEM editions; retail Office license agreements allowed installing the product on two or three computers, depending on the edition. Microsoft requires an account in order to activate any Office edition from 2013 on. Editions Traditional editions As with previous versions, Office 2013 is made available in several distinct editions aimed towards different markets. All traditional editions of Microsoft Office 2013 contain Word, Excel, PowerPoint and OneNote and are licensed for use on one computer. Five traditional editions of Office 2013 were released: Home & Student: This retail suite includes the core applications Word, Excel, PowerPoint, and OneNote. Home & Business: This retail suite includes the core applications Word, Excel, PowerPoint, and OneNote plus Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications Word, Excel, PowerPoint, and OneNote plus Outlook and Publisher. Professional: This retail suite includes the core applications Word, Excel, PowerPoint, and OneNote plus Outlook, Publisher and Access. Professional Plus: This suite includes the core applications Word, Excel, PowerPoint, and OneNote plus Outlook, Publisher, Access, InfoPath and Lync. Office 365 The Office 365 subscription services, which were previously aimed towards business and enterprise users, were expanded for Office 2013 to include new plans aimed at home use. The subscriptions allow use of the Office 2013 applications by multiple users using a software as a service model. Different plans are available for Office 365, some of which also include value-added services, such as 20 GB of OneDrive storage (later increased to 1 TB) and 60 Skype minutes per month on the new Home Premium plan. 
These new subscription offerings were positioned as a new option for consumers wanting a cost-effective way to purchase and use Office on multiple computers in their household. Office RT A special version of Office 2013, initially known as Office 2013 Home & Student RT, is shipped with all Windows RT devices. It initially consisted of Word, Excel, PowerPoint and OneNote. In Windows RT 8.1, the suite was renamed Office 2013 RT and Outlook was added. The edition, whilst visually indistinguishable from normal versions of Office 2013, contains special optimizations for ARM-based devices, such as changes to reduce battery usage (including, for example, freezing the animation of the blinking cursor for text editing during periods of inactivity), enabling touch mode by default to improve usability on tablets, and using the graphics portion of a device's SoC for hardware acceleration. Windows RT devices on launch were shipped with a "preview" version of Office Home & Student 2013 RT. The release date for the final version varied depending on the user's language; it was distributed through Windows Update when released. On June 5, 2013, Microsoft announced that Windows RT 8.1 would add Outlook to the suite in response to public demand. Office RT modifies or excludes various other features for compatibility reasons or resource reduction. To save disk space, templates, clip art, and language packs are downloaded online rather than stored locally. Other excluded features include the removal of support for third-party code such as macros/VBA/ActiveX controls, the removal of support for older media formats and narration in PowerPoint, editing of equations generated with the legacy Equation Editor, data models in Excel (PivotCharts, PivotTables, and QueryTables are unaffected), searching embedded media files in OneNote, along with data loss prevention, Group Policy support, and creating e-mails with information rights management in Outlook. As the version of Office RT included on Windows RT devices is based on the Home & Student version, it cannot be used for "commercial, nonprofit, or revenue-generating activities" unless the organization has a volume license for Office 2013 already, or the user has an Office 365 subscription with commercial use rights. Windows Store apps Alongside Office RT, free versions of OneNote and the Lync client were made available as Windows Store apps upon the release of Windows 8 and RT. The OneNote app, originally known as OneNote MX, contains a limited feature set in comparison to its desktop version, but is also optimized for use on tablets. The OneNote app has since received several major updates, including camera integration, printing abilities, and multiple inking options. Universal Microsoft Word, Excel, and PowerPoint apps for Windows 10 were released in 2015. Office Mobile Windows Phone 8 ships with an updated version of the Office Mobile suite, consisting of mobile versions of Word, Excel, PowerPoint, and OneNote. In comparison to their Windows Phone 7 versions, the new versions add an improved Office Hub interface that can sync recently opened and modified documents (including changes to documents stored via Office 365 and SkyDrive), a separated OneNote app with additional features (such as voice notes and integration with the new "Rooms" functionality of the OS), and improved document editing and viewing functionality. 
In June 2013, Microsoft released a version of Office Mobile for iPhone; it is similar to the Windows Phone version, but originally required an Office 365 subscription to use. A version for Android smartphones was released in July 2013; it, too, originally needed Office 365 for use. Apps for iPad and Android tablet computers were released in March 2014 and January 2015, respectively. These, along with their smartphone equivalents, have been made free for personal use, though certain premium features have been paywalled and require Office 365, which includes licensing of the apps for business use. Windows 10 Mobile, released in December 2015, included new Office apps more in line with their iPhone and Android equivalents, making use of the "universal app" platform pioneered with Windows 10. Comparison Remarks 1 The Windows RT versions do not include all of the functionality provided by other versions of Office. 2 Commercial use of Office RT is allowed through volume licensing or business subscriptions to Office 365. 3 Windows Store versions are also available. 4 InfoPath was initially part of Office 365 Small Business Premium. However, it is currently unavailable through subscription. 5 The Professional Plus edition on the retail channel is/was available with an MSDN subscription or via the Microsoft Home Use Program. System requirements Each Microsoft Office 2013 application has the following requirements, although there may be app-specific requirements. In addition to these, graphics hardware acceleration requires a screen resolution of 1024×576 pixels or larger and a DirectX 10-compliant GPU with at least 64 MB of video memory (if the required hardware is absent, however, Office 2013 applications can still run without graphics acceleration). See also List of office suites References External links 2013 software Office 2013
222458
https://en.wikipedia.org/wiki/Sequent%20Computer%20Systems
Sequent Computer Systems
Sequent Computer Systems was a computer company that designed and manufactured multiprocessing computer systems. They were among the pioneers in high-performance symmetric multiprocessing (SMP) open systems, innovating in both hardware (e.g., cache management and interrupt handling) and software (e.g., read-copy-update). Through a partnership with Oracle Corporation, Sequent became a dominant high-end UNIX platform in the late 1980s and early 1990s. Later they introduced a next-generation high-end platform for UNIX and Windows NT based on a non-uniform memory access architecture, NUMA-Q. As hardware prices fell in the late 1990s, and Intel shifted their server focus to the Itanium processor family, Sequent joined the Project Monterey effort in October 1998, which aimed to move a standard Unix to several new platforms. In July 1999 Sequent agreed to be acquired by IBM. At the time, Sequent's CEO said its technology would "find its way through IBM's entire product field" and IBM announced it would "both sell Sequent machines, and fold Sequent's technology...into its own servers", but by May 2002 a decline in sales of the models acquired from Sequent, among other reasons, led to the retirement of Sequent-heritage products. Vestiges of Sequent's innovations live on in the form of data clustering software from PolyServe (subsequently acquired by HP), various projects within OSDL, IBM contributions to the Linux kernel, and claims in the SCO v. IBM lawsuit. History Originally named Sequel, Sequent was formed in 1983 when a group of seventeen engineers and executives left Intel after the failed iAPX 432 "mainframe on a chip" project was cancelled; they were joined by one non-Intel employee. They started Sequent to develop a line of SMP computers, then considered one of the up-and-coming fields in computer design. Balance Sequent's first computer systems were the Balance 8000 (released in 1984) and Balance 21000 (released in 1986). Both models were based on 10 MHz National Semiconductor NS32032 processors, each with a small write-through cache connected to a common memory to form a shared memory system. The Balance 8000 supported up to 6 dual-processor boards for a total maximum of 12 processors. The Balance 21000 supported up to 15 dual-processor boards for a total maximum of 30 processors. The systems ran a modified version of 4.2BSD Unix that the company called DYNIX, for DYNamic unIX. The machines were designed to compete with the DEC VAX-11/780, with all of their inexpensive processors available to run any process. In addition, the system included a series of libraries that could be used by programmers to develop applications that could use more than one processor at a time. Symmetry Their next series was the Intel 80386-based Symmetry, released in 1987. Various models supported between 2 and 30 processors, using a new copy-back cache and a wider 64-bit memory bus. 1991's Symmetry 2000 models added multiple SCSI boards, and were offered in versions with one to six Intel 80486 processors. The next year they added the VMEbus based Symmetry 2000/x50 with faster CPUs. The late 1980s and early 1990s saw big changes on the software side for Sequent. DYNIX was replaced by DYNIX/ptx, which was based on a merger of AT&T Corporation's UNIX System V and 4.2BSD. This was also the period when Sequent's high-end systems became particularly successful, due to a close working relationship with Oracle, specifically around high-end database servers. 
In 1993 they added the Symmetry 2000/x90 along with their ptx/Cluster software, which added various high availability features and introduced custom support for Oracle Parallel Server. In 1994 Sequent introduced the Symmetry 5000 series models SE20, SE60 and SE90, which used 66 MHz Pentium CPUs in systems from 2 to 30 processors. The next year they expanded that with the SE30/70/100 lineup using 100 MHz Pentiums, and then in 1996 with the SE40/80/120 with 166 MHz Pentiums. A variant of the Symmetry 5000, the WinServer 5000 series, ran Windows NT instead of DYNIX/ptx. NUMA Recognizing the increase in competition for SMP systems after having been early adopters of the architecture, and the increasing integration of SMP technology into microprocessors, Sequent sought its next source of differentiation. They began investing in the development of a system based on a cache-coherent non-uniform memory access (ccNUMA) architecture and leveraging the Scalable Coherent Interconnect. NUMA distributes memory among the processors, avoiding the bottleneck that occurs with a single monolithic memory. Using NUMA would allow their multiprocessor machines to generally outperform SMP systems, at least when tasks can be executed close to their memory, as is the case for servers, where tasks typically do not share large amounts of data. In 1996 they released the first of a new series of machines based on this new architecture. Known internally as STiNG, an abbreviation for Sequent: The Next Generation (with Intel inside), it was productized as NUMA-Q and was the last of the systems released before the company was purchased by IBM for over $800 million. IBM then started Project Monterey with Santa Cruz Operation, intending to produce a NUMA-capable standardized Unix running on IA-32, IA-64, POWER and PowerPC platforms. This project later fell through as both IBM and SCO turned to the Linux market, but it is the basis for "the new SCO"'s SCO v. IBM Linux lawsuit. IBM purchase and disappearance With their future product strategy in tatters, it appeared Sequent had little future standing alone, and the company was purchased by IBM in 1999 for $810 million. IBM released several x86 servers with a NUMA architecture. The first was the x440 in August 2002, with a follow-on x445 in 2003. In 2004, an Itanium-based x455 was added to the NUMA family. During this period, NUMA technology became the basis for IBM's extended X-Architecture (eXA, which could also stand for enterprise X-Architecture). As of 2011, this chipset is on its fifth generation, known as eX5 technology. It now falls under the brand IBM System x. According to a May 30, 2002 article in the Wall Street Journal (WSJ) entitled "Sequent Deal Serves Hard Lesson for IBM": When IBM bought Sequent, ...it [Sequent] lacked the size and resources to compete with Sun and Hewlett-Packard Co. in the Unix server market.... In 1999, IBM had problems of its own with an aged and high-priced line of servers, particularly for its version of Unix known as AIX. It also faced huge losses in personal computers and declining sales in its cash-cow mainframe line. Detailed model descriptions The following is a more detailed description of the first two generations of Symmetry products, released between 1987 and 1990. Symmetry S-series Symmetry S3 The S3 was the low-end platform based on commodity PC components running a fully compatible version of DYNIX 3. 
It featured a single 33 MHz Intel 80386 processor, up to 40 megabytes of RAM, up to 1.8 gigabytes of SCSI-based disk storage, and up to 32 direct-connected serial ports. Symmetry S16 The S16 was the entry-level multiprocessing model, which ran DYNIX/ptx. It featured up to six 20 MHz Intel 80386 processors, each with a 128 kilobyte cache. It also supported up to 80 MB of RAM, up to 2.5 GB of SCSI-based disk storage, and up to 80 direct-connected serial ports. Symmetry S27 The S27 ran either DYNIX/ptx or DYNIX 3. It featured up to ten 20 MHz Intel 80386 processors, each with a 128 KB cache. It also supported up to 128 MB of RAM, up to 12.5 GB of disk storage, and up to 144 direct-connected serial ports. Symmetry S81 The S81 ran either DYNIX/ptx or DYNIX 3. It featured up to 30 20 MHz Intel 80386 processors, each with a 128 KB cache. It also supported up to 384 MB of RAM, up to 84.8 GB of disk storage, and up to 256 direct-connected serial ports. Symmetry 2000 series Symmetry 2000/40 The S2000/40 was the low-end platform based on commodity PC components running a fully compatible version of DYNIX/ptx. It featured a single 33 MHz Intel 80486 processor, up to 64 megabytes of RAM, up to 2.4 gigabytes of SCSI-based disk storage, and up to 32 direct-connected serial ports. Symmetry 2000/200 The S2000/200 was the entry-level multiprocessing model, which ran DYNIX/ptx. It featured up to six 25 MHz Intel 80486 processors, each with a 512 kilobyte cache. It also supported up to 128 MB of RAM, up to 2.5 GB of SCSI-based disk storage, and up to 80 direct-connected serial ports. Symmetry 2000/400 The S2000/400 ran either DYNIX/ptx or DYNIX 3. It featured up to ten 25 MHz Intel 80486 processors, each with a 512 KB cache. It also supported up to 128 MB of RAM, up to 14.0 GB of disk storage, and up to 144 direct-connected serial ports. Symmetry 2000/700 The S2000/700 ran either DYNIX/ptx or DYNIX 3. It featured up to 30 25 MHz Intel 80486 processors, each with a 512 KB cache. It also supported up to 384 MB of RAM, up to 85.4 GB of disk storage, and up to 256 direct-connected serial ports. See also NCR Voyager (early 486/Pentium SMP systems) References External links Project Blue-Away, a Sun Microsystems project announced in February 2002 targeting NUMA-Q customers IBM lays off 200 Portland employees, a January 2002 article, also from Portland Business Journal Out of Sequence, a September 1999 article from Willamette Week 1983 establishments in Oregon 1999 disestablishments in Oregon 1999 mergers and acquisitions American companies established in 1983 American companies disestablished in 1999 Beaverton, Oregon Computer companies established in 1983 Computer companies disestablished in 1999 Defunct companies based in Oregon Defunct computer companies of the United States Defunct computer hardware companies IBM acquisitions
10533265
https://en.wikipedia.org/wiki/Chattr
Chattr
chattr is the command in Linux that allows a user to set certain attributes of a file. lsattr is the command that displays the attributes of a file. Most BSD-like systems, including macOS, have always had an analogous chflags command to set the attributes, but no command specifically meant to display them; specific options to the ls command are used instead. The chflags command first appeared in 4.4BSD. Solaris has no commands specifically meant to manipulate them; chmod and ls are used instead. Other Unix-like operating systems, in general, have no analogous commands. The similar-sounding commands (from HP-UX) and (from AIX) exist but have unrelated functions. Among other things, the chattr command is useful to make files immutable so that password files and certain system files cannot be erased during software upgrades. In Linux systems (chattr and lsattr) File system support The command line tools chattr (to manipulate attributes) and lsattr (to list attributes) were originally specific to the Second Extended Filesystem family (ext2, ext3, ext4), and are available as part of the e2fsprogs package. However, the functionality has since been extended, fully or partially, to many other systems, including XFS, ReiserFS, JFS and OCFS2. The btrfs file system includes the attribute functionality, including the C flag, which turns off the built-in copy-on-write (CoW) feature of btrfs due to slower performance associated with CoW. chattr description The form of the command is: chattr [-RVf] [-+=AacDdijsTtSu] [-v version] files... -R recursively changes attributes of directories and their contents -V verbose output; prints the program version -f suppresses most error messages lsattr description The form of the command (GNU 1.41.3): lsattr [ -RVadv ] [ files... ] -R recursively lists attributes of directories and their contents -V displays the program version -a lists all files in directories, including dotfiles -d lists directories like other files, rather than listing their contents Attributes Some attributes include: Notes In BSD-like systems (chflags) File system support The chflags command is not specific to particular file systems. UFS on BSD systems, and APFS, HFS+, SMB, AFP, and FAT on macOS support at least some flags. chflags description The form of the command is: chflags [-R [-H | -L | -P]] flags file ... -H If the -R option is specified, symbolic links on the command line are followed. (Symbolic links encountered in the tree traversal are not followed.) -L If the -R option is specified, all symbolic links are followed. -P If the -R option is specified, no symbolic links are followed. This is the default. -R Change the file flags for the file hierarchies rooted in the files instead of just the files themselves. Displaying BSD-like systems, in general, have no default user-level command specifically meant to display the flags of a file. The ls command will do so with either the -lo or the -lO flag passed, depending on the system. Attributes All traditional attributes can be set or cleared by the super-user; some can also be set or cleared by the owner of the file. Some attributes include: BSD systems offer additional flags like offline, snapshot, sparse, and uarchive; see References. See also ATTRIB – analogous command in MS-DOS, OS/2 and Microsoft Windows chown – change file/directory ownership in a Unix system chmod – change file access control attributes in a Unix system cacls – change file access control lists in Microsoft Windows NT Notes References (outdated; see newer version) (flags section in the BSD system source code of the macOS XNU kernel) Unix file system-related software
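As an illustration of the commands described above, the following is a minimal example session (assuming root privileges on Linux; the exact lsattr output varies with the file system and e2fsprogs version, and the file names are placeholders only):
chattr +i /etc/passwd       # mark the password file immutable so it cannot be modified or deleted
lsattr /etc/passwd          # the listing now shows the 'i' (immutable) attribute
chattr -i /etc/passwd       # clear the immutable attribute again
chattr +a /var/log/app.log  # allow only append-mode writes to a log file
On a BSD-like system such as macOS or FreeBSD, an analogous sequence would be chflags uchg file.txt to set the user-immutable flag, ls -lO file.txt (ls -lo on FreeBSD) to display it, and chflags nouchg file.txt to clear it.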
48080855
https://en.wikipedia.org/wiki/Typeeto
Typeeto
Typeeto is a software tool that enables using a Macintosh Bluetooth-compatible keyboard with iOS and Android devices, Apple TV and game consoles. Overview Typeeto has been compatibility-tested for Android phones and tablets, Apple TV, Windows PCs, the iPad, the iPhone, the iPod Touch, and MacBooks. Typeeto can connect a keyboard to several devices at a time; switching between them requires either a mouse click or pressing a hotkey. Typeeto was featured on Product Hunt, where it received more than 200 upvotes. References Android (operating system) software Bluetooth software IOS software IPod software PlayStation 4 software Utilities for macOS Utilities for Windows Xbox (console) software
63181427
https://en.wikipedia.org/wiki/Boeing%20737%20MAX%20certification
Boeing 737 MAX certification
The Boeing 737 MAX was initially certified in 2017 by the U.S. Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA). However, global regulators grounded the plane in 2019 following the fatal crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302. Both crashes were linked to the Maneuvering Characteristics Augmentation System (MCAS), a new automatic flight control feature. Investigations into both crashes determined that Boeing and the FAA favored cost-saving solutions that ultimately produced a flawed design of MCAS. The FAA's Organization Designation Authorization program, which allows manufacturers to act on its behalf, was also questioned for weakening the agency's oversight of Boeing. Boeing wanted the FAA to certify the airplane as another version of the long-established 737; this would limit the need for additional training of pilots, a major cost saving for airline customers. During flight tests, however, Boeing discovered that the position and larger size of the engines tended to push up the airplane nose during certain maneuvers. To counter that tendency and ensure fleet commonality with the 737 family, Boeing added MCAS so the MAX would handle similarly to earlier 737 versions. Boeing convinced the FAA that MCAS could not fail hazardously or catastrophically, and that existing procedures were effective in dealing with malfunctions. The MAX was exempted from certain newer safety requirements, saving Boeing billions of dollars in development costs. In February 2020, the US Justice Department (DOJ) investigated Boeing's hiding of information from the FAA, based on the content of internal emails. In January 2021, Boeing settled to pay over $2.5 billion after being charged with fraud in connection with the crashes. In June 2020, the U.S. Inspector General's report revealed that MCAS problems dated back several years before the accidents. The FAA found several defects that Boeing deferred to fix, in violation of regulations. In September 2020, the House of Representatives concluded its investigation and cited numerous instances in which Boeing dismissed employee concerns about MCAS, prioritized deadline and budget constraints over safety, and lacked transparency in disclosing essential information to the FAA. It further found that the assumption that simulator training would not be necessary had "diminished safety, minimized the value of pilot training, and inhibited technical design improvements". In November 2020, the FAA announced that it had cleared the 737 MAX to return to service. Various system, maintenance and training requirements are stipulated, as well as design changes that must be implemented on each aircraft before the FAA issues an airworthiness certificate, without delegation to Boeing. Other major regulators worldwide are gradually following suit: in 2021, after two years of grounding, Transport Canada and EASA both cleared the MAX subject to additional requirements. Initial certification Type rating In the U.S., the MAX shares a common type rating with the rest of the Boeing 737 series. The impetus for Boeing to build the 737 MAX was serious competition from the Airbus A320neo, which threatened to win a major aircraft order from American Airlines, a traditional customer for Boeing airplanes. Boeing decided to update its 737, designed in the 1960s, rather than designing a clean sheet aircraft, which would have cost much more and taken years longer. 
Boeing's goal was to ensure the 737 MAX would not need a new type rating, which would require significant additional pilot training, adding unacceptably to the overall cost of the airplane for customers. The 737 was first certified by the FAA in 1967. Like every new 737 model since then, the MAX has been approved partially with the original requirements and partially with more current regulations, enabling certain rules and requirements to be grandfathered in. Chief executive Dai Whittingham of the independent trade group UK Flight Safety Committee disputed the idea that the MAX was just another 737, saying, "It is a different body and aircraft but certifiers gave it the same type rating." On May 15, 2019, during a Senate hearing, FAA acting administrator Daniel Elwell defended the agency's certification process for Boeing aircraft. However, the FAA criticized Boeing for not mentioning the MCAS in the 737 MAX's manuals. Crew manuals Boeing considered MCAS part of the flight control system, based on the fundamental design philosophy of retaining commonality with the 737NG. In 2013, a Boeing meeting urged participants to consider MCAS (the Maneuvering Characteristics Augmentation System) as a simple add-on to the existing stability function: "If we emphasize MCAS is a new function there may be greater certification and training impact". Boeing also played down the scope of MCAS to regulators. The company "never disclosed the revamp of MCAS to FAA officials involved in determining pilot training needs". On March 30, 2016, Mark Forkner, then the MAX's chief technical pilot, asked senior FAA officials to remove MCAS from the pilot's manual. Boeing had presented MCAS as existing technology, but inquiries and certification authorities have since raised doubts about its technology readiness. The officials had been briefed on the original version of MCAS but not that MCAS was being significantly overhauled. Because Boeing offered Southwest Airlines a $1-million-per-plane rebate if training was ultimately required, pressure on Boeing executives and engineers increased. In 2017, as the airliner's five-year certification was nearly completed, Forkner wrote to an FAA official, "Delete MCAS". He then departed Boeing and joined Southwest Airlines in 2018. MCAS was left in the glossary of the 1,600-page flight manual. Top Boeing officials believed MCAS operated only far beyond the normal flight envelope, and was unlikely to activate in normal flight. Boeing had also failed to answer questions raised by Canadian test pilots on behalf of Transport Canada about how the anti-stall system operated before the airplane was certified. Most air regulatory agencies, including the FAA, Transport Canada and EASA, did not require specific training on MCAS. Brazil's national civil aviation agency "was one of the only civil aviation authorities to require specific training for the operation of the 737-8 Max". Pilots of the 737 Next Generation received an hour-long iPad lesson to fly on the MAX. On November 6, 2018, after the Lion Air accident, Boeing published a service bulletin in which MCAS was mentioned as a "pitch trim system." In reference to the Lion Air accident, Boeing said the system could be triggered by erroneous angle of attack information when the aircraft is under manual control, and reminded pilots of various indications and effects that can result from this erroneous information. Only four days later, on November 10, 2018, Boeing acknowledged the existence of MCAS in a message to operators. 
From November 2018 to March 2019, in the months between the accidents, the FAA Aviation Safety Reporting System received numerous complaints from U.S. pilots about the aircraft's unexpected behavior and about the crew manual's lack of any description of the system. On January 9, 2020, Boeing turned over a hundred internal messages from employees criticizing the FAA and the 737 MAX's development, the majority of which were written before either crash. Some messages showed Boeing trying to persuade airlines and regulators, including Lion Air, to avoid simulator training; others expressed general frustration with Boeing management. The FAA stated that "the tone and content of some of the language contained in the documents is disappointing," while Boeing said that the messages "do not reflect the company we are and need to be, and they are completely unacceptable." Simulator training On May 17, 2019, after discovering 737 MAX flight simulators could not adequately replicate MCAS activation, Boeing corrected the software to improve the force feedback of the manual trim wheel and to ensure realism. This led to a debate on whether simulator training should be a prerequisite for the aircraft's eventual return to service. On May 31, Boeing proposed that simulator training for pilots flying on the 737 MAX would not be mandatory. Computer-based training was deemed sufficient by the FAA Flight Standardization Board, the US Airline Pilots Association and Southwest Airlines pilots, but Transport Canada and American Airlines urged the use of simulators. On June 19, in testimony before the U.S. House Committee on Transportation and Infrastructure, Chesley Sullenberger advocated for simulator training. "Pilots need to have first-hand experience with the crash scenarios and its conflicting indications before flying with passengers and crew." The "differences training" is a subject of concern among senior industry training experts. Textron-owned simulator maker TRU Simulation + Training anticipated a transition course but not mandatory simulator sessions in the minimum standards being developed by Boeing and the FAA. On July 24, Boeing indicated that some regulatory agencies might mandate simulator training before return to service, and also expected some airlines to require simulator sessions even if these were not mandated. On August 22, 2019, the FAA announced that it would invite pilots from around the world, intended to be a representative cross-section of "ordinary" 737 pilots, to participate in simulator tests as part of the recertification process, at a date to be determined. The evaluation group sessions are one of the final steps in the validation of flight-control computer software updates and will involve around 30 pilots, including some first officers with multi-crew pilot licenses, which emphasize simulator experience rather than flight hours. The FAA hopes that the feedback from pilots with more varied experience will enable it to determine more effective training standards for the aircraft. On January 7, 2020, Boeing reversed its position and recommended that pilots receive training in a MAX flight simulator. The FAA will conduct tests using pilots from US and foreign airlines to determine flight training and emergency procedures. National aviation authorities will consider Boeing's recommendation but will also rely on the outcome of tests and expert opinions. According to The Seattle Times, as of this policy reversal only 34 full-motion 737 MAX flight simulators had been deployed worldwide. 
The FAA agrees with Boeing on requiring simulator training for pilots as the MAX returns to service. Rejected improvements During the development of the MAX, some systems that could have improved situational awareness relevant to the accidents were not included. Boeing also successfully appealed safety concerns raised by FAA safety specialists about the separation of cables into different zones of the aircraft, to avoid failures due to a common cause. The appeal raised doubts about the independence of the FAA. According to The Seattle Times, Boeing convinced the FAA, during MAX certification in 2014, to grant exceptions to federal crew alerting regulations, specifically relating to the "suppression of false, unnecessary" information. DeFazio, Chair of the House Committee on Transportation and Infrastructure, said that Boeing considered adding a more robust alerting system for MCAS but finally shelved the idea. The FAA exempted Boeing from installing an Engine Indicating and Crew Alerting System (EICAS). In a draft report, the NTSB had also recommended implementing information filtering in the crew alerting systems. "For example, the erroneous AOA output experienced during the two accident flights resulted in multiple alerts and indications to the flight crews, yet the crews lacked tools to identify the most effective response. Thus, it is important that system interactions and the flight deck interface be designed to help direct pilots to the highest priority action(s)." On October 2, 2019, The Seattle Times and The New York Times reported that a Boeing engineer, Curtis Ewbank, filed an internal ethics complaint alleging that company managers rejected a backup system for determining speed, which might have alerted pilots to problems linked to the two deadly crashes of the 737 MAX. A similar backup system is installed on the larger Boeing 787 jet, but it was rejected for the 737 MAX because it could increase costs and training requirements for pilots. Ewbank said the backup system could have reduced risks that contributed to the two fatal crashes, though he could not be sure that it would have prevented them. A backup speed system "could also detect when sensors measuring the direction of the plane's nose weren't working". He also said in his complaint that Boeing management was more concerned with costs and keeping the MAX on schedule than with safety. An attorney representing families of the Ethiopian crash victims will seek sworn evidence from the whistleblower. As of May 2020, the FAA and EASA have reversed some, if not all, of these design decisions, requiring Boeing to design and retrofit critical systems on all aircraft once the airplane returns to service. Boeing 737 safety analysis Recognized civil aviation development practices, such as those of SAE International ARP4754 and ARP4761, require a safety process with quantitative assessments of availability, reliability, and integrity, validation of requirements, and verification of implementation. Such processes rely on engineering judgment, and the application of these practices varies within the industry. Redundancy is a technique that may be used to achieve the quantitative safety requirements. Aviation safety risk is defined in AC 25.1309-1, an FAA document describing acceptable means for showing compliance with the airworthiness requirements of § 25.1309 of the Federal Aviation Regulations (FAR). A catastrophic failure must have an extremely improbable rate, defined as one in a billion flight hours, also stated as less than 10⁻⁹ per flight hour. 
By this measure, as of 2005, the Boeing 737 had an actual fatal accident rate of 1 in 80 million flight hours, missing the requirements by an order of magnitude. In the two years of the 737 MAX's commercial service prior to its grounding, the global fleet of nearly 400 aircraft flew 500,000 flights and suffered two hull loss incidents. As of March 11, 2019, the 737 MAX's accident rate was second behind the Concorde, with four accidents per million flights, compared to the 737NG's 0.2 accidents per million flights. MCAS and MAX safety risk analysis The Lion Air accident report provides insight into the classification of failure conditions related to MCAS, and the resulting safety assessment and testing: In response to the Lion Air crash, the FAA conducted an internal study of the safety of the 737 MAX. The study, not made public, used the Transport Airplane Risk Assessment Methodology (TARAM) and was completed on December 3, 2018, slightly more than a month after the accident. Just over a year later, on December 11, 2019, the US House Committee on Transportation and Infrastructure released the study and described its conclusion by stating, "if left uncorrected, the MCAS design flaw in the 737 Max could result in as many as 15 future fatal crashes over the life of the fleet". Boeing stated that, based on the TARAM analysis: The FAA report was refuted by MIT professor Arnold Barnett based on the loss of two aircraft out of only 400 delivered. He said there would be 24 crashes per year for a fleet size of 4,800; thus, the FAA had underestimated the risk by a factor of 72. On March 6, 2019, four days prior to the Ethiopian crash, Boeing and the FAA declined to comment regarding their safety analysis of MCAS for a story in The Seattle Times. On March 17, a week after the crash, The Seattle Times published the following flaws as identified by aviation engineers: Boeing's service bulletin informed airlines that the MCAS could deflect the tail in increments up to 2.5°, more than the 0.6° reported to the FAA in the safety assessment; MCAS could reset itself after each pilot response to repeatedly pitch the aircraft down; the capability of MCAS had been downplayed, and its failure assessment was rated as "hazardous", one level below "catastrophic"; and MCAS relied on a single angle of attack sensor. In its safety analysis for the 737 MAX, Boeing made the assumption that pilots trained on standard Boeing 737 safety procedures should be able to properly assess contradictory warnings and act effectively within four seconds. This four-second rule for a pilot's assessment and correction of an emergency, a standard value used in safety assessment scenarios for the MAX, is deemed too short and has been criticized for not being supported by empirical human factors studies. The Lion Air accident investigation report found that on the fatal flight and on the previous one, crews responded in about 8 seconds. According to the report, Boeing reasoned that pilots could counter an erratic MCAS by pulling back on the control column alone, without using the cutout switches. However, MCAS could only be stopped by the cutout switches. Documentation made public at the House hearing on October 30, 2019, "established that Boeing was also already well aware, before the Lion Air accident, that if a pilot did not react to unintended MCAS activation within 10 seconds, the result could be catastrophic." 
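For orientation, the accident-rate figures quoted above can be checked with simple arithmetic (a back-of-the-envelope sketch based only on the numbers given in this section, not on the FAA's TARAM model or Barnett's calculation):

\[ \frac{1}{8\times 10^{7}\ \text{flight hours}} = 1.25\times 10^{-8}\ \text{per flight hour} = 12.5\times 10^{-9}\ \text{per flight hour}, \]

which is roughly an order of magnitude above the 10⁻⁹ per flight hour certification target, and

\[ \frac{2\ \text{hull losses}}{500{,}000\ \text{flights}} = 4\times 10^{-6}\ \text{per flight} = 4\ \text{accidents per million flights}, \]

compared with the 0.2 accidents per million flights quoted for the 737NG, a factor of roughly 20.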
Effect on air transport safety In December 2019, the German aviation audit company Jet Airliner Crash Data Evaluation Center (JACDEC), which is based in Hamburg, reported, using its "Safety Risk Index" method, that casualties in air crashes in 2019 nearly halved compared with 2018: 293 fatalities in 2019 compared to 559 in the previous year. On January 1, 2020, the Dutch aviation consultancy To70 published its annual "Civil Aviation Safety Review", which recorded that in 2019 there were 86 accidents, 8 of which were fatal, resulting in 257 fatalities. Both results showed that the death toll of 157 in the MAX crash of Ethiopian Airlines Flight 302 made up more than half of the fatalities. Jan-Arwed Richter, head of JACDEC, said: "The sharp reduction in fatalities compared to 2018 is—at the risk of sounding macabre—due to the grounding of the 737 MAX in March." Adrian Young, the author of To70's report, said: "Despite a number of high profile accidents this year's fatal accident rate is lower than the average of the last five years." US Department of Justice inquiries A U.S. federal grand jury issued a subpoena on behalf of the Department of Justice (DOJ) for documents related to the development of the 737 MAX. On January 7, 2021, Boeing settled to pay over $2.5 billion after being charged with fraud: a criminal monetary penalty of $243.6 million, $1.77 billion of damages to airline customers, and a $500 million crash-victim beneficiaries fund. US Congress inquiries In March 2019, Congress announced an investigation into the FAA approval process. Members of Congress and government investigators expressed concern about FAA rules that allowed Boeing to extensively "self-certify" aircraft. FAA acting Administrator Daniel Elwell said "We do not allow self-certification of any kind". Initially, as a new system or a new device on the amended type certificate, the FAA retained oversight of MCAS. However, the FAA later released it to the Boeing Organization Designation Authorization (ODA) based on comfort level and thorough examination, Elwell said in March. "We were able to assure that the ODA members at Boeing had the expertise and the knowledge of the system to continue going forward." However, several FAA insiders believed the delegation went too far. Boeing pilot Mark A. Forkner was subsequently indicted on charges of supplying false and incomplete information to the FAA in respect of the certification of the 737 MAX (The Guardian). Senate On April 2, 2019, after receiving reports from whistle-blowers regarding the training of FAA inspectors who reviewed the 737 MAX type certificate, the Senate Commerce Committee launched a second Congressional investigation; it focuses on FAA training of the inspectors. The FAA provided misleading statements to Congress about the training of its inspectors, quite possibly those inspectors who oversaw the MAX certification, according to the findings of an Office of Special Counsel investigation released in September. In February 2020, three Senate Transportation Committee members introduced a "Restoring Aviation Accountability" bill, which would specifically require implementation of the Joint Authorities Technical Review's recommendations, and more generally would set up a commission to review the FAA safety delegation process and assess alternative certification schemes that could provide more robust oversight. 
In December 2020, a report by the Senate Committee on Commerce, Science and Transportation found that Boeing "inappropriately coached" pilots while conducting simulator tests during the recertification process, by reminding them of the correct response to a runaway stabilizer, and accused Boeing and the FAA of establishing a "pre-determined outcome to reaffirm a long-held human factor assumption" regarding pilot reaction times. The simulated flight by FAA pilots took place in July 2019, predating a major redesign of MCAS. The report added that it appeared, in this instance, that the FAA and Boeing were attempting to cover up important information that may have contributed to the 737 MAX tragedies. House of Representatives On June 7, the delayed escalation of the defective AoA Disagree alert on the 737 MAX was investigated. The Chair of the House Transportation and Infrastructure Committee and the Chair of the Aviation Subcommittee sent letters to Boeing, United Technologies Corp., and the FAA, requesting a timeline and supporting documents related to awareness of the defect, and when airlines were notified. In September 2019, a congressional panel asked Boeing's CEO to make several employees available for interviews, to complement the documents and the senior management perspective already provided. The same month, Boeing's board called for changes to improve safety. Representative Peter DeFazio, chairman of the House Transportation and Infrastructure Committee, said Boeing declined his invitation to testify at a House hearing. "Next time, it won't just be an invitation, if necessary," he said. Later in the same month, the House Transportation and Infrastructure Committee announced that Boeing CEO Dennis Muilenburg would testify before Congress, accompanied by John Hamilton, chief engineer of Boeing's Commercial Airplanes division, and Jennifer Henderson, 737 chief pilot. In October 2019, the House asked Boeing to allow a flight deck systems engineer who filed an internal ethics complaint to be interviewed. On October 18, Peter DeFazio said "The outrageous instant message chain between two Boeing employees suggests Boeing withheld damning information from the FAA". Boeing expressed regret over its ex-pilot's messages after their publication in the media. Boeing's media room released a statement about what Forkner meant by the instant messages, obtained through his attorney because the company had not been able to talk to him directly. The transcript of the messages indicates, according to experts, a problem with the simulator rather than erratic MCAS activation. On October 25, 2019, Peter DeFazio commented on the Lion Air accident report, saying "And I will be introducing legislation at the appropriate time to ensure that unairworthy commercial airliners no longer slip through our regulatory system". The aviation subcommittee and full committee hearings follow: The Subcommittee on Aviation met on June 19, 2019, to hold a hearing titled, "Status of the Boeing 737 MAX: Stakeholder Perspectives". "The hearing is intended to gather views and perspectives from aviation stakeholders regarding the Lion Air Flight 610 and Ethiopian Airlines Flight 302 accidents, the resulting international grounding of the Boeing 737 MAX aircraft, and actions needed to ensure the safety of the aircraft before returning them to service. The Subcommittee will hear testimony from Airlines for America, Allied Pilots Association, Association of Flight Attendants—CWA, Captain Chesley (Sully) Sullenberger, and Randy Babbitt." 
On July 17, representatives of crash victims' families, in testimony to the House Transportation and Infrastructure Committee – Aviation Subcommittee, called on regulators to recertify the MAX as a completely new aircraft. They also called for wider reforms to the certification process, and asked the committee to grant protective subpoenas so that whistle-blowers could testify even if they had agreed to a gag order as a condition of a settlement with Boeing. In a July 31 Senate hearing, the FAA defended its administrative actions following the Lion Air accident, noting that standard protocol in ongoing crash investigations limited the information that could be provided in the airworthiness directive. On October 29, 2019, Muilenburg and Hamilton appeared at the House hearing under the title "Aviation Safety and the Future of Boeing's 737 MAX", which was the first time that Boeing executives addressed Congress about the MAX accidents. The hearing came on the heels of the removal of Dennis Muilenburg's title as chairman of the Boeing board a week earlier, and was intended to examine issues associated with the design, development, certification, and operation of the Boeing 737 MAX following the two accidents in the previous year. The committee first heard from Boeing on actions taken to improve safety and the company's interaction with relevant federal regulators. The second panel was composed of government officials and aviation experts discussing the status of the Boeing 737 MAX and relevant safety recommendations. On October 30, the House made public a 2015 internal email discussion between Boeing employees raising concerns about MCAS design in the exact scenario blamed for the two crashes: "Are we vulnerable to single AOA sensor failures with the MCAS implementation?" Committee members discussed another internal document, which stated that a reaction longer than 10 seconds to an MCAS malfunction "found the failure to be catastrophic." The hearings' key revelation was insider knowledge of vulnerabilities amid a hectic rate of production. After the testimony of Boeing's CEO, Peter DeFazio and Rick Larsen, leader of its aviation sub-panel, wrote a letter to other lawmakers on November 4, saying that unanswered questions remained: "Mr. Muilenburg left a lot of unanswered questions, and our investigation has a long way to go to get the answers everyone deserves [...] Mr. Muilenburg's answers to our questions were consistent with a culture of concealment and opaqueness and reflected the immense pressure exerted on Boeing employees during the development and production of the 737 Max". On December 11, 2019, during a hearing of the House Committee on Transportation titled "The Boeing 737 MAX: Examining the Federal Aviation Administration's Oversight of the Aircraft's Certification," an internal FAA review dated December 3, 2018, was released, which predicted a high MAX accident rate if the aircraft kept flying with MCAS unchanged. The findings were first reported by The Wall Street Journal in July 2019. The FAA assumed that the emergency airworthiness directive sufficed until Boeing delivered a fix. Asked whether a mistake had been made in this regard, the FAA's chief, Stephen Dickson, responded, "Obviously the result was not satisfactory." Peter DeFazio said that the committee's investigation "has uncovered a broken safety culture within Boeing and an FAA that was unknowing, unable, or unwilling to step up, regulate and provide appropriate oversight of Boeing". 
He continued: "But perhaps most chillingly, we have learned that shortly after the issuance of the airworthiness directive, the FAA performed an analysis that concluded that, if left uncorrected, the MCAS design flaw in the 737 MAX could result in as many as 15 future fatal crashes over the life of the fleet—and that was assuming that 99 out of 100 flight crews could comply with the airworthiness directive and successfully react to the cacophony of alarms and alerts recounted in the National Transportation Safety Board's report on the Lion Air tragedy within 10 seconds. Such an assumption, we know now, was tragically wrong. Despite its own calculations, the FAA rolled the dice on the safety of the traveling public and let the 737 MAX continue to fly until Boeing could overhaul its MCAS software. Tragically, the FAA's analysis—which never saw the light of day beyond the closed doors of the FAA and Boeing—was correct. The next crash would occur just five months later, when Ethiopian Airlines flight 302 plummeted to earth in March 2019." In January 2020, Kansas Rep. Sharice Davids, a member of the House Transportation Committee and vice chair of its aviation subcommittee, said: "The newly released messages from Boeing employees are incredibly disturbing and show a coordinated effort inside the company to deceive the American public and federal regulators, who are in place to keep passengers safe. It's further proof that Boeing put profit over safety in the development of the 737 MAX. [...] In addition to the public safety concerns these messages raise, Boeing's callousness has now cost thousands of Kansans their livelihood and endangered the economy of our state, which is dependent on aerospace." On March 6, 2020, the House Transportation Committee said that a "culture of concealment" at the company and poor oversight by federal regulators contributed to the crashes. In a preliminary summary of its nearly yearlong investigation, the committee said multiple factors had led to the crashes, but focused on MCAS, which Boeing had failed to classify as safety-critical, part of a strategy designed to avoid closer scrutiny by regulators as the company developed the plane. The panel said that Boeing had undue influence over the Federal Aviation Administration, and that FAA managers rejected safety concerns raised by their own technical experts. The preliminary report was prepared by the Democratic staff of the House Committee on Transportation and Infrastructure. In September 2020, concluding an 18-month investigation, a House report produced by the Democratic staff of the Committee blamed Boeing and the FAA for lapses in the design, construction and certification of the MAX: Boeing made production and cost goals a higher priority than safety; Boeing made deadly assumptions about MCAS, causing the planes to nosedive; Boeing withheld critical information from the FAA; delegation of oversight authority to Boeing employees left the FAA unaware of important issues; and FAA management sided with Boeing against its own experts. US Office of Special Counsel inquiries The Office of Special Counsel is an agency that investigates whistleblower reports. Its report found that safety inspectors "assigned to the 737 Max had not met qualification standards". The OSC sided with the whistleblower, pointing out that internal FAA reviews had reached the same conclusion. 
In a letter to President Trump, the OSC found that 16 of 22 FAA pilots conducting safety reviews, some of them assigned to the MAX two years earlier, "lacked proper training and accreditation." Safety inspectors participate in Flight Standardization Boards, which ensure pilot competency by developing training and experience requirements. FAA policy requires both formal classroom training and on-the-job training for safety inspectors. Special Counsel Henry J. Kerner wrote in the letter to the President, "This information specifically concerns the 737 Max and casts serious doubt on the FAA's public statements regarding the competency of agency inspectors who approved pilot qualifications for this aircraft". In September, Daniel Elwell disputed the conclusions of the OSC, which found that aviation safety inspectors (ASIs) assigned to the 737 MAX certifications did not meet training requirements. To clarify the facts, lawmakers asked the FAA to provide additional information: "We are particularly concerned about the Special Counsel's findings that inconsistencies in training requirements have resulted in the FAA relaxing safety inspector training requirements and thereby adopting 'a position that encourages less qualified, accredited, and trained safety inspectors.' We request that the FAA provide documents confirming that all FAA employees serving on the FSB for the Boeing 737-MAX and the Gulfstream VII had the required foundational training in addition to any other specific training requirements." US Cabinet inquiries The FBI joined the criminal investigation into the certification as well. FBI agents reportedly visited the homes of Boeing employees in "knock-and-talks". At the request of Peter DeFazio, and Chair of the Subcommittee on Aviation Rick Larsen, the U.S. Department of Transportation (DOT) Inspector General opened an investigation into FAA approval of the Boeing 737 MAX aircraft series, focusing on potential failures in the safety-review and certification process. A report released on October 23 said that the FAA faces a "significant oversight challenge" to ensure that manufacturers carrying out delegated certification activities "maintain high standards and comply with FAA safety regulations", and that it plans to introduce a "new process that represents a significant change in its approach" by March 2020. In April 2019, U.S. Secretary of Transportation Elaine L. Chao, who boarded a MAX flight on March 12 amid calls to ground the aircraft, created the Special Committee to Review the FAA's Aircraft Certification Process to review the Organization Designation Authorization, which granted Boeing authority to review systems on behalf of the FAA during the certification of the 737 MAX 8. The committee recommended integrating human performance factors and considering all levels of pilot experience, but defended the ODA against any reforms. Relatives of those on board the accident flights condemned the report for calling the ODA an "effective" process. U.S. NTSB On September 26, 2019, the NTSB released the results of its review of potential lapses in the design and approval of the 737 MAX. The report concludes that Boeing's assumptions about pilot reaction to MCAS activation "did not adequately consider and account for the impact that multiple flight deck alerts and indications could have on pilots' responses to the hazard". 
Before the airplane began service, Boeing had evaluated pilot response to simulated MCAS activation, but the NTSB noted that Boeing did not simulate a specific cause, such as erroneous AoA input, or the multiple cockpit alerts and warnings that could result. The NTSB said those "alerts and indications can increase pilots' workload, and the combination of the alerts and indications did not trigger the accident pilots to immediately perform the runaway stabilizer procedure". It stated, "the pilot responses differed and did not match the assumptions of pilot responses to unintended MCAS operation on which Boeing based its hazard classifications". The NTSB questioned the long-held industry and FAA practice of assuming nearly instantaneous responses from highly trained test pilots, rather than from pilots of all levels of experience, when verifying human factors in aircraft safety. The NTSB expressed concerns that the process used to evaluate the original design needs improvement because that process is still in use to certify current and future aircraft and system designs. The FAA could, for example, randomly sample pilots from the worldwide pilot community to get a more representative assessment of cockpit situations. Return to service Boeing In early October 2019, CEO Muilenburg said that Boeing's own test pilots had completed more than 700 flights with the MAX. Certification flight tests, because of the ongoing safety review, were thought unlikely to occur before November. Boeing made "dry runs" of the certification test flights on October 17, 2019. As of October 28, Boeing had conducted "over 800 test and production flights with the updated MCAS software, totaling more than 1,500 hours". Boeing was also fixing a flaw discovered in the redundant-computer architecture of the 737 MAX flight-control system. The FAA and the EASA were still reviewing changes to the MAX software, raising questions about the return to service forecast. The FAA was to review Boeing's "final system description", which specifies the architecture of the flight control system and the changes that Boeing had made, and perform an "integrated system safety analysis"; the updated avionics were to be assessed for pilot workload. The FAA was specifically looking at six "non-normal" checklists that could be resequenced or changed. The assessment of these checklists with pilots could happen at the end of October, according to an optimistic forecast. As of mid-November 2019, Boeing still needed to complete an audit of its software documentation. A key certification test flight was to follow the audit. In a memo and a video dated November 14, the FAA's Steve Dickson instructed his staff to "take whatever time is needed" in their review, repeating that approval is "not guided by a calendar or schedule." At the request of the FAA, Boeing audited key systems on the MAX. In January 2020, Boeing discovered that electrical wiring bundles were too close together and could cause a short circuit that could theoretically lead to a runaway stabilizer. EASA wanted the wiring fixed on all 400 grounded aircraft and on future deliveries. Boeing and the FAA disagreed with EASA at first, but the FAA told Boeing in March 2020 that the wiring was not compliant. A manufacturing fault was also found to have affected the lightning protection foil on two panels covering the engine pylons on certain MAX aircraft manufactured between February 2018 and June 2019. 
On February 26, 2020, the FAA proposed a corresponding airworthiness directive to mandate repairs to all affected aircraft. In January 2020, the FAA proposed a $5.4 million fine against Boeing for installing nonconforming slat tracks. The tracks are used to guide the movement of slats, which are panels located on the leading edge of an aircraft's wings that provide additional lift during take-off and landing. Boeing's supplier did not comply with aviation regulations nor with Boeing's quality assurance system. Boeing is alleged to have issued airworthiness certificates for 178 MAX aircraft despite knowing that the slat tracks had failed a strength test. In February 2020, traces of debris were discovered within the fuel tanks of aircraft produced during the groundings. The FAA set out the remaining steps in the process of ungrounding the aircraft: after remaining minor issues were resolved, a certification flight would be conducted and flight data would be assessed. Operational validation, including assessment of Boeing's training proposals by international and U.S. crews, as well as by the FAA administrator and his deputy in person, would then proceed, followed by documentation steps. U.S. airlines would then need to obtain FAA approval for their training programs. Each aircraft would be issued an airworthiness certificate and would be required to conduct a validation flight without passengers. The FAA said it would require airlines to perform "enhanced inspections and fixes to portions of an outside panel that helps protect the engines on Boeing's 737 Max from lightning strikes". Boeing conducted numerous test flights in 2020, before a series of FAA recertification flights from June 28 to July 1, 2020. These were performed by a 737 MAX 7, flying from Boeing Field, Seattle, to Boeing's test facilities at Moses Lake and back. FAA The FAA certifies the design of aircraft and components that are used in civil aviation operations. The FAA describes its approach as "performance-based, proactive, centered on managing risk, and focused on continuous improvement." As with any other FAA certification, the MAX certification included: reviews to show that system designs and the MAX complied with FAA regulations; ground tests and flight tests; evaluation of the airplane's required maintenance and operational suitability; and collaboration with other civil aviation authorities on aircraft approval. Due to the global scrutiny following the two fatal accidents, the FAA is re-evaluating its certification process and seeking consensus with other regulators on approving the return to service, to avoid suspicion of undue cooperation with Boeing. The International Air Transport Association (IATA) had also made a similar statement calling for more coordination and consensus on training and return-to-service requirements. In March 2019, reports emerged that Boeing performed the original System Safety Analysis, and FAA technical staff felt that managers pressured them to sign off on it. Boeing managers also pressured engineers to limit safety testing during the analysis. A 2016 Boeing survey found almost 40% of 523 employees working in safety certification felt "potential undue pressure" from managers. Since June 2019, the FAA has reiterated many times that it does not have a timetable on when the 737 MAX will return to service, stating that it is guided by a "thorough process, not a prescribed timeline." The FAA identified new risks of failure during thorough testing. 
As a result, Boeing worked to make the overall flight-control computer more redundant, such that both computers would operate on each flight instead of alternating between flights. The planes were said to be unlikely to resume operations until 2020. In August 2019, reports emerged of friction between Boeing and certain international air-safety authorities. A Boeing briefing was stopped short by the FAA, EASA, and other regulators, on the grounds that Boeing had "failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers." A U.S. official confirmed frustration with some of Boeing's answers. On October 2, 2019, The Seattle Times reported that Boeing had convinced FAA regulators in 2014 to relax certification requirements that would otherwise have added over $10 billion, in 2013 dollars, to the development cost of the MAX. In October 2019, according to current and former FAA officials, instead of increasing its oversight powers, the FAA "has been pressing ahead with plans to further reduce its hands-on oversight of aviation safety". On October 22, 2019, FAA Administrator Steve Dickson said in a news conference that the agency had received the "final software load" and "complete system description" of the revisions; several weeks of work were anticipated for certification activities. Final simulator-based assessments were expected to start in November 2019. In October 2019, the FAA requested that Boeing turn over internal documents and explain why it did not disclose the Forkner messages earlier. The FAA was aware of "more potentially damaging messages from Boeing employees that the company has not turned over to the agency". In November 2019, the FAA announced that it had withdrawn Boeing's authority, previously held under the Organization Designation Authorization, to issue airworthiness certificates for individual new 737 MAX aircraft. The FAA denied allegations that the ODA enabled plane makers to police themselves or self-certify their aircraft. After the overall grounding was lifted, the FAA would issue such certificates directly; aircraft already delivered to customers would not be affected. In the same month, the FAA pushed back at Boeing's attempts to publicize a certification date, saying the agency would take all the time it needed. On December 9, 2019, in an internal email sent to employees in the FAA's Aircraft Certification Service (AIR), it was revealed that the agency was moving to create a new safety branch to address shortcomings in its oversight following the two MAX crashes and a controversial reorganization. The email, obtained by The Washington Post, emphasized the complexities of aviation safety, but did not mention the MAX directly as it was written in bureaucratic language. In December 2019, The Air Current reported on pilots attempting the procedure with "inconsistent, confusing" results. On December 6, 2019, the FAA posted an updated Master minimum equipment list for the 737 MAX; in particular, both flight computers must be operational before flight, as they now compare each other's sensors prior to activating MCAS. On December 11, 2019, Dickson announced that the MAX would not be recertified before 2020, and reiterated that the FAA did not have a timeline. The following day, Dickson met with Boeing chief executive Dennis Muilenburg to discuss Boeing's unrealistic timeline and the FAA's concerns that Boeing's public statements might be perceived as attempting to force the FAA into quicker action. 
In January 2020, Boeing targeted mid-2020 for recertification, but the FAA said that it was "pleased" with the progress made and might approve the aircraft sooner within the United States. In February 2020, the FAA explained why the agency had waited for empirical evidence to draw a common link between the crashes before grounding the airplane. In April 2020, the second revision to the list removed several exemptions and fault tolerances to ensure greater availability of the aircraft's redundant systems. On September 30, 2020, FAA administrator and former Delta Air Lines Boeing 737 captain Stephen Dickson conducted a two-hour test flight at the controls of the MAX, after completing the new training proposed by Boeing. He had previously announced that the FAA would not certify the MAX until he had flown the aircraft himself. On November 18, 2020, the FAA issued a Continuing Airworthiness Notification which rescinded its grounding order, subject to mandatory updates on each individual aircraft. Other regulators are independent and were expected to follow; some were waiting for EASA. Public commentaries On August 3, 2020, the FAA announced its final list of design, operation, maintenance and training changes that must be completed before the MAX can return to service. The design changes include updated flight software, a new angle-of-attack sensor failure alert, revised crew manuals and changes to wiring routing. All design approvals were conducted by the FAA directly; no oversight was delegated to Boeing. The design changes must be implemented on all MAX aircraft already produced and in storage, as well as on new production. The FAA documents were published in the Federal Register on August 6, opening a 45-day public comment period. The FAA's response and final airworthiness directive were then expected to be published no earlier than mid-October, with U.S. domestic flights expected to resume 30 to 60 days later. During the FAA public comment period, EASA inquired about adding a third angle of attack sensor, and Transport Canada inquired whether the stick shaker, a mechanical stall warning device, could be suppressed during false alarm situations to reduce pilot workload. The airline passenger organization FlyersRights remained skeptical that the proposed software and computer fixes sufficiently mitigate inherent flaws with the MAX's airframe. The British Airline Pilots' Association warned that one of the proposed changes to recovery procedures, which may require effort from both pilots to operate the manual trim wheel, is "extremely undesirable" and could result in a scenario similar to the Ethiopian Airlines crash. The National Safety Committee of the National Air Traffic Controllers Association recommended that the MAX should be required to meet all current requirements relating to crew alerting systems (whereas it currently benefits from exemptions). Technical Advisory Board The Technical Advisory Board (TAB), a multi-agency panel, was created shortly after the second crash as a group of government flight-safety experts to independently review Boeing's redesign of the MAX. It includes experts from the United States Air Force (USAF), the Volpe National Transportation Systems Center, NASA and the FAA. "The TAB is charged with evaluating Boeing and FAA efforts related to Boeing's software update and its integration into the 737 Max flight control system. The TAB will identify issues where further investigation is required prior to FAA approval of the design change", said the FAA. 
The TAB reviewed Boeing's MCAS software update and system safety assessment. On November 8, the TAB presented its preliminary report to the FAA, finding that the MCAS design changes are compliant with the regulations and safe. Joint Authorities Technical Review On April 19, 2019, a multinational "Boeing 737 MAX Flight Control System Joint Authorities Technical Review" (JATR) team was commissioned by the FAA to investigate how it approved MCAS, whether changes needed to be made in the FAA's regulatory process, and whether the design of MCAS complies with regulations. On June 1, Ali Bahrami, FAA Associate Administrator for Aviation Safety, chartered the JATR to include representatives from the FAA, NASA and the nine civil aviation authorities of Australia, Brazil, Canada, China, Europe (EASA), Indonesia, Japan, Singapore and the UAE. On September 27, the JATR chair Christopher A. Hart said that the FAA's process for certifying new airplanes is not broken, but needs improvements rather than a complete overhaul of the entire system. He added, "This will be the safest airplane out there by the time it has to go through all the hoops and hurdles". The JATR said that the FAA's "limited involvement" and "inadequate awareness" of the automated MCAS safety system "resulted in an inability of the FAA to provide an independent assessment". The panel report added that Boeing staff performing the certification were also subject to "undue pressures... which further erodes the level of assurance in this system of delegation". About the nature of MCAS, "the JATR team considers that the STS/MCAS and EFS functions could be considered as stall identification systems or stall protection systems, depending on the natural (unaugmented) stall characteristics of the aircraft". The report recommends that the FAA review the jet's stalling characteristics without MCAS and associated systems to determine the plane's safety and, consequently, whether a broader design review is needed. "Boeing elected to meet the objectives of SAE International's Aerospace Recommended Practice 4754A, Guidelines for Development of Civil Aircraft and Systems, (ARP4754A) for development assurance of the B737 MAX. [...] The use of ARP4754A is consistent with the guidance contained in FAA Advisory Circular (AC) 20-174, Development of Civil Aircraft and Systems. The JATR team identified areas where the Boeing processes can be improved to more robustly meet the development assurance objectives of ARP4754A. [...] An integrated SSA to investigate the MCAS as a complete function was not performed. The safety analyses were fragmented among several documents, and parts of the SSA from the B737 NG were reused in the B737 MAX without sufficient evaluation. [...] The JATR team identified specific areas related to the evolution of the design of the MCAS where the certification deliverables were not updated during the certification program to reflect the changes to this function within the flight control system. In addition, the design assumptions were not adequately reviewed, updated, or validated; possible flight deck effects were not evaluated; the SSA and functional hazard assessment (FHA) were not consistently updated; and potential crew workload effects resulting from MCAS design changes were not identified." The JATR found that Boeing did not carry out a thorough verification by stress-testing MCAS. The JATR also found that Boeing exerted "undue pressures" on Boeing ODA engineering unit members (who had FAA authority to approve design changes). 
FAA Airworthiness Directive for return to service The FAA also indicated that non-U.S.-registered MAX aircraft would not be allowed access to U.S. airspace if the aviation authority of the state of registration does not require compliance with the amended design or "an alternative that achieves at least an equivalent level of safety", pursuant to Article 33 of the ICAO Chicago Convention. It has also been suggested that, under Article 33, other countries have no legal grounds to continue banning U.S.-registered MAX aircraft from their airspace even if they have not themselves authorized the resumption of flights. As of January 26, 2021, this remains a purely theoretical issue. FAA CANIC for return to service The FAA issued a CANIC (Continued Airworthiness Notification to the International Community) to notify the international community of the final rule/airworthiness directive (AD) for return to service and of the rescission of the Emergency Order of Prohibition. It also gives notice of the release of documents: the Summary of the FAA's Review of the Boeing 737 MAX; the Boeing 737 Flight Standardization Board Report, identifying special pilot training for the 737 MAX; an FAA Safety Alert for Operators (SAFO) identifying changes to pilot training; and an FAA SAFO identifying changes to the maintenance program. EASA The EASA and Transport Canada announced they would independently verify FAA recertification of the 737 MAX. For product certifications, the EASA is already in the process of significantly changing its approach to the definition of Level of Involvement (LoI) with Design Organisations. Based on an assessment of risk, an applicant makes a proposal for the Agency's involvement "in the verification of the compliance demonstration activities and data". EASA considers the applicant's proposal in determining its LoI. In a letter sent to the FAA on April 1, 2019, EASA stated four conditions for recertification: "1. Design changes proposed by Boeing are EASA approved (no delegation to FAA) 2. Additional and broader independent design review has been satisfactorily completed by EASA 3. Accidents of JT610 and ET302 are deemed sufficiently understood 4. B737 MAX flight crews have been adequately trained." In a May 22 statement, the EASA reaffirmed the need to independently certify the 737 MAX software and pilot training. In addition to the system analysis mentioned above, EASA raised concerns about the autopilot not engaging or disengaging upon request, and about the manual trim wheel being electronically counteracted or requiring substantial physical force to overcome the aerodynamic effects in flight. In September 2019, the European Union received parliamentary questions for written answers about the independent testing and re-certification of critical parts of the Boeing 737 MAX by the EASA: Could the Commission confirm whether these tests will extend beyond the MCAS flight software issue to the real problem of the aerodynamic instability flaw which the MCAS software was created to address? Does the Commission have concerns about the limited scope of the FAA's investigation into the fatal loss of control, and is EASA basing its re-certification of the 737 Max on that investigation? What assurances can the Commission give that the de facto delegation of critical elements of aircraft certification to the same company that designed and built the aircraft, and the practice of delegated oversight, does not exist in Europe? 
EASA stated it was satisfied with changes to the flight control computer architecture; improved crew procedures and training were considered a simplification but still a work in progress; the integrity of the angle of attack system was still not appropriately covered by Boeing's response. EASA recommended a flight test to evaluate aircraft performance with and without MCAS. EASA said it would send its own test pilots and engineers to fly certification flight tests of the modified 737 MAX. EASA also said it preferred a design that takes readings from three independent angle of attack sensors. EASA's leaders wanted Boeing and the FAA to commit to longer-term safety enhancements. EASA's executive director, Patrick Ky, was said to be seeking a third source of angle-of-attack data. EASA was contemplating the installation of a third sensor or an equivalent system at a later stage, once the planes returned to service. On October 18, 2019, Ky said: "For me it is going to be the beginning of next year, if everything goes well. As far as we know today, we have planned for our flight tests to take place in mid-December which means decisions on a return to service for January, on our side". On August 27, 2020, EASA announced that it planned to start flight testing the MAX on September 7. The flight tests, conducted in Vancouver, Canada, would follow a week of simulator work at London Gatwick Airport. Afterward, the Joint Operations Evaluation Board (JOEB) would start its testing procedures on September 14. On November 18, 2020, after the FAA cleared the MAX for return to service in the U.S., EASA indicated that it would shortly issue its own proposed airworthiness directive. After the 28-day public comment period, the final directive would then be published in late December 2020 or early in 2021. On January 27, 2021, EASA formally cleared the MAX to resume service. The main difference with FAA requirements is the ability (also mandated by Transport Canada) to disable the stick-shaker warning if pilots are certain that they understand the underlying cause. Certain approaches requiring precision navigation were, however, not yet approved, as EASA was awaiting data from Boeing on the aircraft's ability to maintain the required performance in the event of sensor failures. Some EASA member states issued their own orders banning the MAX from their airspace; these individual bans will also need to be lifted. UK CAA The UK Civil Aviation Authority, no longer part of EASA following the UK's withdrawal from the European Union, issued its own airworthiness directive on January 27, 2021, mirroring EASA's additional requirements. Transport Canada Transport Canada accepted the FAA's MAX certification in June 2017 under a bilateral agreement. However, then Canadian Minister of Transport Marc Garneau said in March 2019 that Transport Canada would do its own certification of Boeing's software update "even if it's certified by the FAA". On October 4, 2019, the head of civil aviation for Transport Canada said that global regulators were considering the requirements for the 737 MAX to fly again, weighing the "startle factors" that can overwhelm pilots lacking sufficient exposure in simulation scenarios. He also said that Transport Canada had raised questions over the architecture of the angle of attack system. 
On November 19, 2019, an engineering manager in aircraft integration and safety assessment at Transport Canada emailed the FAA, EASA and Brazil's National Civil Aviation Agency, calling for the removal of key software from the 737 MAX, stating "The only way I see moving forward at this point is that Boeing's MCAS system has to go," although the views were at the working level and had not been subject to systematic review by Transport Canada. On August 20, 2020, Transport Canada announced that it would be conducting its own flight tests the following week, as part of its independent review aimed at validating key areas of the FAA certification. Transport Canada confirmed it was working with EASA and the Brazilian regulator ANAC in the Joint Operational Evaluation Board (JOEB), which was set to evaluate minimum pilot training requirements in mid-September. EASA concluded a series of recertification flights on September 11. Following the FAA's clearance to resume flights in the U.S., Transport Canada indicated that its own recertification process was ongoing, and that it intended to mandate additional pre-flight and in-flight procedures as well as differences in pilot training requirements. It did not indicate a timeline, though it did state, as of November 2020, that it expected to complete the process "very soon". In December 2020, in addition to the changes required by the FAA, Transport Canada mandated that pilots be able to disable the stick shaker when it is erroneously activated. On January 18, 2021, it announced the return to service of the MAX in Canadian airspace from January 20, by lifting the order prohibiting its commercial operation. Indian DGCA India's regulator, the Directorate General of Civil Aviation (DGCA), said it would conduct its own validation tests of the MAX before authorizing it in India's airspace. Arun Kumar, Director General of the DGCA, said India would adopt a "wait and watch" policy and not hurry to reauthorize the plane to fly. He also said an independent validation would be performed to ensure safety and that MAX pilots would have to train on a simulator. India's SpiceJet had already received 13 MAX jets and had 155 more on order. On August 26, 2021, the DGCA lifted its ban on the 737 MAX in India, with operational rules based on the EASA's directive issued in February 2021. UAE's GCAA The UAE's director general of the General Civil Aviation Authority (GCAA), Said Mohammed al-Suwaidi, announced that the GCAA would conduct its own assessment, rather than follow the FAA. The UAE regulator had not yet seen Boeing's fixes in detail. He did not expect the 737 MAX to be back in service in 2019. On November 22, 2020, following the recertification by the FAA, the GCAA established a Return to Service Committee on the Boeing 737 MAX that included specialists from the required areas who were working with their counterparts in the FAA and the EASA. The GCAA would issue a Safety Decision stipulating technical requirements to ensure a safe return to service of the MAX aircraft with the corresponding certification timelines. UAE carrier flydubai is one of the biggest customers of the MAX aircraft, having ordered 250 of the jets since 2013. It operated 13 MAX 8s and MAX 9s. Australian CASA Australia's Civil Aviation Safety Authority said that the FAA decision would be an important factor in allowing the MAX to fly, but that CASA would make its own decision. In October 2019, SilkAir flew its six 737 MAXs from Singapore to Alice Springs Airport for storage during Singapore's wet season. 
On February 26, 2021, the Australian Civil Aviation Safety Authority lifted its ban on the MAX, accepting the return-to-service requirements set by the FAA. Australia is the first nation in the Asia-Pacific region to clear the aircraft to return to service. Brazilian ANAC According to the first Brazilian government statement on the MAX issue, the National Civil Aviation Agency of Brazil (ANAC) had been working closely with the FAA on getting the airplane back into service by the end of 2019. Brazil's largest domestic airline, Gol Transportes Aéreos, is a major MAX customer with an order for over 100 aircraft. On November 25, 2020, less than a week after the FAA cleared the MAX to return to service in the U.S., ANAC withdrew its Airworthiness Directive that had ordered the grounding of the aircraft. IATA On November 25, 2020, the IATA called on all global regulators to authorize the return of the MAX as soon as possible. Projections In November 2019, financial analysts forecast a jet surplus that could result when the MAX returned to service; new aircraft would be delivered while airlines moved stand-in aircraft back into storage. Boeing initially hoped that flights could resume by July 2019; by June 3, Muilenburg expected to have the planes flying by the end of 2019 but declined to provide a timeline. On July 18, Boeing reaffirmed Muilenburg's prediction, hoping to return the MAX to flight during the fourth quarter of 2019. Boeing indicated that this was its best estimate and that the date could still slip. By July 2019, United Airlines had purchased 19 used 737-700s to fill in for MAX aircraft, to be delivered in December 2019. United had expected to receive 30 MAX aircraft by the end of 2019 and a further 28 in 2020. In September 2019, Boeing CEO Dennis Muilenburg stated that the MAX might return in phases around the world due to the current state of regulatory divide on approving the airplane. Later that same month, Boeing told its suppliers that the plane could return to service by November. On November 11, 2019, the company stated that deliveries would resume in December 2019 and commercial flights in January 2020. In January 2020, Boeing said it was not expecting the airplane's recertification until mid-2020. On January 14, 2020, American Airlines cancelled more of its MAX flights until June 3. On January 16, Southwest Airlines removed the MAX from its schedule until June 6, to allow pilots to spend time in simulators as newly recommended. On January 22, United Airlines announced that it was not expecting to return the MAX to service until after the peak summer season. By late April, Southwest Airlines had removed the MAX from its schedule until October 30, based on Boeing's "recent communication on the MAX return to service date". At that point, Boeing hoped to obtain regulatory approval in August, though sources expected that to be pushed back to the fall. At the end of October 2020, Boeing indicated that it expected recertification to occur before the end of the year, and anticipated that about half of the 450 aircraft then stockpiled would be delivered in 2021. In December 2020, American Airlines operated the first public flight since the grounding: a demonstration flight for journalists, intended to regain public trust. Certification of forthcoming MAX variants The 737-10 has yet to be certified and is expected to be subject to additional requirements, including in particular an "angle-of-attack integrity enhancement" that will subsequently be retrofitted to existing variants. 
Improvements to the crew alerting system are also expected to be mandated. On March 31, 2021, the FAA certified the high-density variant of the 737 MAX 8 for low-cost carriers, the MAX 8-200. The agency identified the variant as "functionally equivalent" to the MAX 8 and "operationally suitable". Europe's EASA is expected to follow suit. Ryanair will be the first and primary operator of the variant; Vietnam's VietJet Air also has a sizable order. The first jet for Ryanair was expected to be delivered in April, and in the peak summer season the carrier would likely have 16 jets in its fleet, according to CEO Michael O'Leary. Notes References Further reading 2019 in aviation Boeing 737
387545
https://en.wikipedia.org/wiki/Anycast
Anycast
Anycast is a network addressing and routing methodology in which a single destination IP address is shared by devices (generally servers) in multiple locations. Routers direct packets addressed to this destination to the location nearest the sender, using their normal decision-making algorithms, typically the lowest number of BGP network hops. Anycast routing is widely used by content delivery networks such as web and DNS hosts, to bring their content closer to end users. Addressing methods There are four principal addressing methods in the Internet Protocol: unicast, broadcast, multicast, and anycast. History The first documented use of anycast routing for topological load-balancing of Internet-connected services was in 1989; the technique was first formally documented in the IETF four years later, and it was first applied to critical infrastructure in 2001 with the anycasting of the I-root nameserver. Early objections Early objections to the deployment of anycast routing centered on the perceived conflict between long-lived TCP connections and the volatility of the Internet's routed topology. In concept, a long-lived connection, such as an FTP file transfer (which might have taken hours to complete in the mid-1990s, when this issue was being debated), might be re-routed to a different anycast instance in mid-connection due to changes in network topology or routing, with the result that the server changes mid-connection, and the new server is not aware of the connection and does not possess the TCP connection state of the previous anycast instance. In practice, such problems were not observed, and these objections dissipated by the early 2000s. Many initial anycast deployments consisted of DNS servers, using principally UDP transport. Measurements of long-term anycast flows revealed very few failures due to mid-connection instance switches, far fewer (less than 0.017% or "less than one flow per ten thousand per hour of duration" according to various sources) than were attributed to other causes of failure. Numerous mechanisms were developed to efficiently share state between anycast instances. And some TCP-based protocols, notably HTTP, incorporated "redirect" mechanisms, whereby anycast service addresses could be used to locate the nearest instance of a service, whereupon a user would be redirected to that specific instance prior to the initiation of any long-lived stateful transaction. Internet Protocol version 4 Anycast can be implemented via the Border Gateway Protocol (BGP). Multiple hosts (usually in different geographic areas) are given the same unicast IP address, and different routes to the address are announced through BGP. Routers consider these to be alternative routes to the same destination, even though they are actually routes to different destinations with the same address. As usual, routers select a route by whatever distance metric is in use (the least cost, least congested, shortest). Selecting a route in this setup amounts to selecting a destination. Internet Protocol version 6 Anycast is supported explicitly in IPv6. The IPv6 addressing architecture reserves Interface Identifier 0 within an IPv6 subnet as the "Subnet Router" anycast address; a further block of 128 Interface Identifiers within a subnet is also reserved for use as anycast addresses. 
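As a concrete illustration of the reserved Subnet-Router anycast address just described, the sketch below uses Python's standard ipaddress module to derive it for a given prefix: the address whose interface identifier is all zeros is simply the lowest address in the subnet. The prefix 2001:db8:aaaa:1::/64 is a documentation prefix chosen only for the example; this is a minimal sketch, not a description of any particular deployment.

```python
import ipaddress

def subnet_router_anycast(prefix: str) -> ipaddress.IPv6Address:
    """Return the Subnet-Router anycast address for an IPv6 prefix.

    The Subnet-Router anycast address has an all-zeros interface
    identifier, i.e. it is the lowest address in the subnet.
    """
    net = ipaddress.IPv6Network(prefix, strict=True)
    return net.network_address

if __name__ == "__main__":
    # 2001:db8::/32 is reserved for documentation, so this prefix is
    # purely illustrative.
    print(subnet_router_anycast("2001:db8:aaaa:1::/64"))  # -> 2001:db8:aaaa:1::
```

Any router on that subnet configured to answer for this address would accept packets sent to it, which is the anycast behaviour the surrounding text describes.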
Most IPv6 routers on the path of an anycast packet through the network will not distinguish it from a unicast packet, but special handling is required from the routers near the destination (that is, within the scope of the anycast address), as they are required to route an anycast packet to the "nearest" interface within that scope which has the proper anycast address, according to whatever measure of distance (hops, cost, etc.) is being used. The method used in IPv4 of advertising multiple routes in BGP to multiply-assigned unicast addresses also still works in IPv6, and can be used to route packets to the nearest of several geographically dispersed hosts with the same address. This approach, which does not depend on anycast-aware routers, has the same use cases together with the same problems and limitations as in IPv4. Applications With the growth of the Internet, network services increasingly have high-availability requirements. As a result, operation of anycast services has grown in popularity among network operators. Domain Name System All Internet root nameservers are implemented as clusters of hosts using anycast addressing. All 13 root servers A–M exist in multiple locations, with 11 on multiple continents. (Root servers B and H exist in two U.S. locations.) The servers use anycast address announcements to provide a decentralized service. This has accelerated the deployment of physical (rather than logical) root servers outside the United States. The use of anycast addressing to provide authoritative DNS services has also been formally documented in the IETF. Many commercial DNS providers have switched to an IP anycast environment to increase query performance and redundancy, and to implement load balancing. IPv6 transition In IPv4 to IPv6 transitioning, anycast addressing may be deployed to provide IPv6 compatibility to IPv4 hosts. This method, 6to4, uses a default gateway with the IP address 192.88.99.1. This allows multiple providers to implement 6to4 gateways without hosts having to know each individual provider's gateway addresses. This method has since been deprecated. Content delivery networks Content delivery networks may use anycast for actual HTTP connections to their distribution centers, or for DNS. Because most HTTP connections to such networks request static content such as images and style sheets, they are generally short-lived and stateless across subsequent TCP sessions. The general stability of routes and statelessness of connections makes anycast suitable for this application, even though it uses TCP. Connectivity between Anycast and Multicast network An anycast rendezvous point can be used with the Multicast Source Discovery Protocol (MSDP); this application, known as Anycast RP, is an intra-domain feature that provides redundancy and load-sharing capabilities. If multiple anycast rendezvous points are used, IP routing will automatically select the topologically closest rendezvous point for each source and receiver, providing the multicast network with fault tolerance. Security Anycast allows any operator whose routing information is accepted by an intermediate router to hijack any packets intended for the anycast address. While this at first sight appears insecure, it is no different from the routing of ordinary IP packets, and no more or less secure. As with conventional IP routing, careful filtering of who is and is not allowed to propagate route announcements is crucial to prevent man-in-the-middle or blackhole attacks. 
The former can also be prevented by encrypting and authenticating messages, such as by using Transport Layer Security, while the latter can be frustrated by onion routing. Reliability Anycast is normally highly reliable, as it can provide automatic failover without adding complexity or new potential points of failure. Anycast applications typically feature external "heartbeat" monitoring of the server's function, and withdraw the route announcement if the server fails. In some cases this is done by the actual servers announcing the anycast prefix to the router over OSPF or another IGP. If the servers die, the router will automatically withdraw the announcement. "Heartbeat" functionality is important because, if the announcement continues for a failed server, the server will act as a "black hole" for nearby clients; this is the most serious mode of failure for an anycast system. Even in this event, this kind of failure will only cause a total failure for clients that are closer to this server than any other, and will not cause a global failure. However, even the automation necessary to implement "heartbeat" routing withdrawal can itself add a potential point of failure, as seen in the 2021 Facebook outage. Mitigation of denial-of-service attacks In denial-of-service attacks, a rogue network host may advertise itself as an anycast server for a vital network service, to provide false information or simply block service. Conversely, anycast methodologies on the Internet may be used to spread DDoS attack traffic out and reduce its effectiveness: as traffic is routed to the closest node, a process over which the attacker has no control, the DDoS traffic flow will be distributed amongst the closest nodes. Thus, not all nodes might be affected. This may be a reason to deploy anycast addressing. The effectiveness of this technique depends upon maintaining the secrecy of any unicast addresses associated with anycast service nodes, however, since an attacker in possession of the unicast addresses of individual nodes can attack them from any location, bypassing anycast addressing methods. Local and global nodes Some anycast deployments on the Internet distinguish between local and global nodes to benefit the local community, by addressing local nodes preferentially. An example is the Domain Name System. Local nodes are often announced with the no-export BGP community to prevent peers from announcing them onward to their own peers, i.e. the announcement is kept in the local area. Where both local and global nodes are deployed, the announcements from global nodes are often AS-prepended (i.e. the AS number is added a few more times to the path) to make the path longer so that a local node announcement is preferred over a global node announcement. See also Multihoming Line hunting, for an equivalent system for telephones References External links Best Practices in IPv4 Anycast Routing Tutorial on anycast routing configuration. Internet architecture Multihoming Domain Name System
11071892
https://en.wikipedia.org/wiki/CAP%20College%20Foundation
CAP College Foundation
CAP College Foundation, Inc. is a private, non-sectarian distance learning college in the Philippines. History CAP College Foundation, Inc. – a recognized pioneer in educational innovations in the Philippines – was established in 1980 as a non-stock, non-sectarian educational foundation. Instituted under Philippine laws, CAP College engages in education, research and related activities utilizing non-traditional and non-formal as well as formal delivery systems of instruction, and grants degrees for programs recognized by the Commission on Higher Education (CHED). CAP College is patterned after the "open university" concept of education, which is already well-established and widely accepted in Europe, North America, Australia, and Asia. With its non-traditional delivery of instruction, CAP College brings learning alternatives, new hope and opportunities to Filipinos both in the country and abroad. In order to keep its programs attuned to the times and to the needs of its students, CAP College continues to expand its network and to develop its linkages with other educational institutions and organizations. It has also developed linkages with government and non-government organizations in the Philippines and abroad. International linkages include the International Council for Open and Distance Education (ICDE), where CAP College is an institutional member, and the Asian Association of Open Universities (AAOU). Locally, it is affiliated with the Open and Distance Learning Foundation (ODLF) and the Association of Foundations (AF). In 2007, CAP College embarked on the digitalization of distance education. Through its website, CAP College harnessed the power of the Internet in serving its students worldwide through on-line registration, downloading of instructional materials, on-line tutorials, individualized folders for students, and a link to a career site. Aside from the regular Distance Education Program, CAP College also operates the CAP College for the Deaf (CAP CFD). Academics CHED-RECOGNIZED LADDERIZED PROGRAMS In order to provide students the necessary platforms that will open pathways of opportunities for career and educational progression, the College sought, and was granted by the Commission on Higher Education, recognition of its Ladderized Programs. Under this program, a Certificate in Associate in Arts is awarded to the student after completion of four (4) terms. Should circumstances prevent the student from finishing a bachelor's degree, he will not be left empty-handed, since he has already earned his Associate certificate – a tool he can use when seeking better job opportunities. The Diploma in Bachelor of Arts or Bachelor of Science is awarded after completion of all the terms required by the program – 6 terms for AB Programs or 7 terms for BSBA Programs. CHED-TESDA INTERFACE In addition to the CHED-recognized Ladderized Programs, CAP College was granted recognition by the Technical Education and Skills Development Authority (TESDA) as a pioneering institution in the implementation of the Ladderized Education System under Executive Order No. 358, entitled "To Institutionalize a Ladderized Interface Between Technical-Vocational Education & Training (TVET) and Higher Education (HE)". Under this Executive Order, CAP College has been granted recognition to offer AB Information Technology. With Executive Order No. 358, Higher Education Institutions can now be innovative as they offer programs that answer the needs of the industry. 
They may now embed Tech-Voc programs in their degree programs, and TESDA shall award their students a National Certificate (NC) of Skills Competency in addition to the baccalaureate degree that the students will receive after completion of the requirements of the program. For AB Information Technology, the student qualifies for the following National Certifications: NC II in Computer Hardware Servicing after completing 2 terms and an 84-Hr. Computer Preventive Maintenance Course; and NC IV in Programming after completing 6 terms. The Certificate in Associate in Arts in Information Technology is awarded after completing 4 terms, and the Diploma in AB Information Technology is awarded after completing the 6th term. Instruction Daily classroom attendance is not required at CAP College. The student learns independently and paces himself to be able to meet the requirements of the course by the end of his academic term. However, the student is expected to complete at least one subject every month. The student's basic texts are printed modules which he shall receive upon enrollment. A subject consists of an average of five modules and each module contains two or more lessons. Also included in the modules are study guides, lists of suggested readings, Self-Progress Check Tests and Module Tests. Audio-video CD-ROMs, educational software and the Internet are being made available to supplement the teaching-learning process. One-Week Reviewers and Final Examination Reviewers on selected subjects are also provided to prepare the student for the tests. After completing the lessons and Self-Progress Check Tests in each module, the student shall take the self-administered Module Tests and submit these to CAP College for correction and evaluation. The process shall be repeated for the remaining modules. When all the Module Tests of a subject have been submitted, the student may take the Final Examination for that particular subject. However, he may choose to finish all the Module Tests of all subjects before taking the Final Examinations. Final Examinations are taken in person at CAP College or at its designated Distance Education Learning Centers. All accounts must be paid before the student may be allowed to take these examinations. The student must present his identification card when taking the Final Examinations. Two hours are allotted to complete the Final Examination for each subject. For students residing or working abroad, arrangements will be made for them to take the examinations at the nearest Philippine embassy, consulate office, place of worship, or at another venue acceptable to both the College and the student. A proctor shall be assigned to administer the examinations. The proctor's fee and mailing expenses shall be charged to the students. College for the Deaf The CAP College for the Deaf (CAP CFD) is the first college for the Deaf in Manila and one of the first post-secondary training programs for the Deaf in the Philippines. CAP CFD opened in 1989, giving hope to deaf high school graduates who are looking forward to a college education that will prepare them to become productive members of Philippine society. 
Notable alumni Dingdong Avanzado – singer, actor, politician and TV host Carmi Martin – actress Onemig Bondoc – actor and television host Manilyn Reynes – actress, singer, TV host and commercial model Marvin Agustin – actor, chef and entrepreneur Ana Roces – actress References External links CAP College on the Web - official website of the school College Assurance Plan - official website of CAP Group of Companies Commission on Higher Education - contains the list of colleges recognized by the Philippine government Distance education institutions based in the Philippines Schools for the deaf in the Philippines Special schools in the Philippines Educational institutions established in 1988 Universities and colleges in Makati 1988 establishments in the Philippines
2920561
https://en.wikipedia.org/wiki/Free%20Software%20Directory
Free Software Directory
The Free Software Directory (FSD) is a project of the Free Software Foundation (FSF). It catalogs free software that runs under free operating systems—particularly GNU and Linux. The cataloged projects are often able to run on several other operating systems as well. The project was formerly co-run by UNESCO. Unlike some other directories that focus on free software, Free Software Directory staff verify the licenses of software listed in the directory. Coverage growth and usages The FSD has been used as a source for assessing the share of licenses used by free software, for example finding in September 2002 "1550 entries, of which 1363 (87.9%) used the GPL license, 103 (6.6%) used the LGPL license, 32 (2.0%) used a BSD or BSD-like license, 29 (1.9%) used the Artistic license, 5 (0.3%) used the MIT license". By September 2009, the Directory listed 6,000 packages, a number which had grown to 6,500 by October 2011, when the newly updated directory was launched. All listed packages are "free for any computer user to download, run and share. Each entry is individually checked and tested ... so users know that any program they come across in the directory will be truly free software ... with free documentation and without proprietary software requirements". Several scientific publications review or refer to the directory. It has been remarked that the Directory "only includes software that runs on free operating systems. The FSF/UNESCO Free Software Directory is also a collaborative project, offering a web interface for users to enter and update entries". Among the critical issues of the previous version, it has been pointed out that while "available software is described using a variety of textual metadata, including the components upon which a particular piece of software depends", "unfortunately, those dependencies are only listed by name, and locating and retrieving them is left to the user". On the other hand, the accuracy of the directory's review of licenses is acknowledged. The code review performed by the directory's editorial board makes it suitable for obtaining statistics on subsets of free software packages reliably clustered by license. In September 2011, the Free Software Directory was re-implemented as a wiki, using MediaWiki and the Semantic MediaWiki extension, to allow users to directly add to and modify its contents. Semantic MediaWiki provides the directory with semantic web technologies by adding "advanced search and presentation capabilities, structured to be useful for reading by both humans and data-mining programs". The new edition of the directory has been described as designed to ease and support, with semantics, the discovery and harvesting of information on free software programs. "An extensive and flexible category system, plus over 40,000 keywords and more than 40 different fields of information, enhance both simple and advanced searching". 
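Because the directory is built on an ordinary MediaWiki installation, its contents can in principle be queried programmatically through the standard MediaWiki web API, which is one way the "data-mining programs" mentioned above could consume it. The sketch below illustrates this with a plain full-text search; the endpoint path (assumed here to be https://directory.fsf.org/w/api.php) and the example search term are illustrative assumptions rather than documented specifics of the directory, so both should be verified before use.

```python
import requests

# Assumed endpoint: a standard MediaWiki install usually exposes api.php
# under /w/; verify the actual path for the Free Software Directory.
API_URL = "https://directory.fsf.org/w/api.php"

def search_directory(term: str, limit: int = 10) -> list[str]:
    """Full-text search using the stock MediaWiki 'list=search' API."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": term,
        "srlimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    data = response.json()
    # Each search hit is a dict that includes, among other fields, a 'title'.
    return [hit["title"] for hit in data["query"]["search"]]

if __name__ == "__main__":
    for title in search_directory("text editor"):
        print(title)
```

Richer, structured queries against the Semantic MediaWiki fields would go through that extension's own query interface rather than plain search, but the pattern of issuing HTTP requests to the wiki API is the same.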
A recent snapshot of the taxonomy of the projects reviewed and accepted in the directory is the following: accessibility accounting addressbooks addresses archive audio barcodes barcoding bitcoin cad calculating calendar command-line compressing console copying daemon database desktop-enhancement e-mail ebooks ecommerce editing education email fax faxing file-manager frontend gameplaying gnome-app graphics hobbies html images interface internet-application kde-app library live-communications localization mathematics mixing organizing pim playing printing productivity project-management reading science security software-development specialized spreadsheet sql stock-market storage system-administration telephony text text-creation timekeeping timetracker video web web-authoring window-manager x-window-system xml See also References External links The Free Software Directory Free Software Foundation UNESCO MediaWiki websites Semantic wikis
21291898
https://en.wikipedia.org/wiki/Windows%202.0x
Windows 2.0x
Windows 2.0 is a 16-bit Microsoft Windows GUI-based operating environment that was released on December 9, 1987, as the successor to Windows 1.0. The product family includes Windows 2.0, a base edition for 8086 real mode, and Windows/386 2.0, an enhanced edition for i386 protected mode. On December 31, 2001, Microsoft declared Windows 2.0 obsolete and stopped providing support and updates for the system. Features Windows 2.0 allowed application windows to overlap each other, unlike its predecessor Windows 1.0, which could display only tiled windows. Windows 2.0 also introduced more sophisticated keyboard shortcuts and the terminology of "Minimize" and "Maximize", as opposed to "Iconize" and "Zoom" in Windows 1.0. The basic window setup introduced here would last through Windows 3.1. New features in Windows 2.0 included support for the new capabilities of the i386 CPU in some versions, 256-color VGA graphics, and EMS memory support. It was also the last version of Windows that did not require a hard disk. With the improved speed, reliability and usability, computers now started becoming a part of daily life for some workers. Desktop icons and use of keyboard shortcuts helped to speed up work. The Windows 2.x EGA, VGA, and Tandy drivers notably provided a workaround in Windows 3.0 for users who wanted color graphics on 8086 machines (a feature that version normally did not support). IBM licensed Windows's GUI for OS/2 as Presentation Manager, and the two companies stated that it and Windows 2.0 would be almost identical. Editions Windows 2.0x came in two different variants with different names and CPU support. The first variant simply said "Windows" on the box, with a version number on the back distinguishing it from Windows 1.x. The second was billed on the box as "Windows/386". This distinction continued to Windows 2.1x, where the naming convention changed to Windows/286 and Windows/386 to clarify that they were different versions of the same product. Windows The basic edition only supports 8086 real mode. This edition would be renamed Windows/286 with the release of Windows 2.1x. Despite its name, Windows/286 remained fully operational on an 8088 or 8086 processor, although the high memory area would not be available on an 8086-class processor; however, expanded memory (EMS) could still be used, if present. A few PC vendors shipped Windows/286 with 8086 hardware; an example was IBM's PS/2 Model 25, which had an option to ship with a "DOS 4.00 and Windows kit" for educational markets that included word processing and presentation software useful for students. This resulted in some confusion when purchasers of the system received a box labeled Windows/286 with an 8086-based computer. Windows/386 Windows/386 was available as early as September 1987, pre-dating the release of Windows 2.0 in December 1987. Windows/386 was much more advanced than its 286 sibling. It introduced a protected mode kernel, above which the GUI and applications run as a virtual 8086 mode task. Windows/386 had fully preemptive multitasking and allowed several MS-DOS programs to run in parallel in "virtual 8086" CPU mode, rather than always suspending background applications. (Windows applications could already run in parallel through cooperative multitasking.) With the exception of a few kilobytes of overhead, each DOS application could use any available low memory before Windows was started. Windows/386 ran Windows applications in a single Virtual 8086 box, with EMS emulation.
In contrast, Windows 3.0 in standard or enhanced mode ran Windows applications in 16-bit protected mode segments. Windows/386 also provided EMS emulation, using the memory management features of the i386 to make RAM beyond 640 KB behave like the banked memory previously only supplied by add-in cards and used by popular DOS applications. (By overwriting the WIN200.BIN file with COMMAND.COM, it is possible to use the EMS emulation in DOS without starting the Windows GUI.) There was no support for disk-based virtual memory, so multiple DOS programs had to fit inside the available physical memory; therefore, Microsoft suggested buying additional memory and cards if necessary. Neither of these versions worked with DOS memory managers like CEMM or QEMM or with DOS extenders, which have their own extended memory management and run in protected mode as well. This was remedied in version 3.0, which is compatible with Virtual Control Program Interface (VCPI) in "standard mode" and with DOS Protected Mode Interface (DPMI) in "386 enhanced" mode (all versions of Windows from 3.0 to 98 exploit a loophole in EMM386 to set up protected mode). Windows 3.0 also had the capability of using the DWEMM Direct Write Enhanced Memory Module. This is what enabled the far faster and sleeker graphical user interface, as well as true extended memory support. BYTE in 1989 listed Windows/386 as among the "Distinction" winners of the BYTE Awards, describing it as "serious competition for OS/2" as it "taps into the power of the 80386". Application support The first Windows versions of Microsoft Word and Microsoft Excel ran on Windows 2.0. Third-party developer support for Windows increased substantially with this version (some shipped the Windows Runtime software with their applications, for customers who had not purchased the full version of Windows). However, most developers still maintained DOS versions of their applications, as Windows users were still a distinct minority of their market. Windows 2.0 was still heavily dependent on the DOS system, and it still had not passed the 1-megabyte memory barrier. Stewart Alsop II predicted in January 1988 that "Any transition to a graphical environment on IBM-style machines is bound to be maddeningly slow and driven strictly by market forces", because the GUI had "serious deficiencies" and users had to switch to DOS for many tasks. Several applications shipped with Windows 2.0: CALC.EXE – a calculator CALENDAR.EXE – calendaring software CARDFILE.EXE – a personal information manager CLIPBRD.EXE – software for viewing the contents of the clipboard CLOCK.EXE – a clock CONTROL.EXE – the system utility responsible for configuring Windows 2.0 CVTPAINT.EXE – converted paint files to the 2.x format MSDOS.EXE – a simple file manager NOTEPAD.EXE – a text editor PAINT.EXE – a raster graphics editor that allows users to paint and edit pictures interactively on the computer screen PIFEDIT.EXE – a program information file editor that defines how a DOS program should behave inside Windows REVERSI.EXE – a computer game of reversi SPOOLER.EXE – the print spooler of Windows, a program that manages and maintains a queue of documents to be printed, sending them to the printer as soon as the printer is ready TERMINAL.EXE – a terminal emulator WRITE.EXE – a simple word processor Legal conflict with Apple On March 17, 1988, Apple Inc. filed a lawsuit against Microsoft and Hewlett-Packard, accusing them of violating copyrights Apple held on the Macintosh System Software.
Apple claimed the "look and feel" of the Macintosh operating system, taken as a whole, was protected by copyright and that Windows 2.0 violated this copyright by having the same icons. The judge ruled in favor of Hewlett-Packard and Microsoft on all but 10 of the 189 graphical user interface elements on which Apple sued, and the court found the remaining 10 GUI elements could not be copyrighted. Windows 2.1x The successor to Windows 2.0, called Windows 2.1x, was officially released in the United States and Canada on May 27, 1988. The final entry in the 2.x series, Windows 2.11, was released in March 1989. See also DESQview 386 VM/386 References External links GUIdebook: Windows 2.0 Gallery – A website dedicated to preserving and showcasing Graphical User Interfaces ComputerHope.com: Microsoft Windows history Microsoft article with details about the different versions of Windows 1987 software Products and services discontinued in 2001 2.0x History of Microsoft History of software Products introduced in 1987
2251780
https://en.wikipedia.org/wiki/Silicon%20Beach%20Software
Silicon Beach Software
Silicon Beach Software was an early American developer of software products for the Macintosh personal computer. It was founded in San Diego, California, in 1984 by Charlie Jackson and his wife Hallie. Jackson later co-founded FutureWave Software with Jonathan Gay. FutureWave produced the first version of what is now Adobe Flash. Although Silicon Beach Software began as a publisher of game software, it also published what was called "productivity software" at the time. Silicon Beach's best-known "productivity software" product was SuperPaint, a graphics program which combined features of Apple's MacDraw and MacPaint with several innovations of its own. SuperPaint 2 and Digital Darkroom were the first programs on the Macintosh to offer a plug-in architecture, allowing outside software developers to extend both programs' capabilities. Silicon Beach coined the term "plug-in". Silicon Beach was a pioneer in graphic tools for desktop publishing. Not only was SuperPaint a tool that had advanced graphic editing capabilities for its day, but Digital Darkroom was also a pioneering photo editor. It was grayscale only, not color (like the early Macintosh computers), but had a number of interface innovations, including the Magic Wand tool, which also appeared later in Photoshop. It also had a proprietary option for printing grayscale content on dot matrix printers. Digital Darkroom was used professionally to clean up scanned images for clip art libraries. Another Silicon Beach product was SuperCard which, like SuperPaint, superseded the capabilities of an Apple-branded product (in this case, HyperCard). SuperCard used a superset of the HyperTalk programming language and addressed common complaints about HyperCard by adding native support for color, multiple windows, vector images, menus and other features. Silicon Beach Software produced video games for the Macintosh. The most well known is Dark Castle, released in 1986. It was ported to several other operating systems by other companies. Its sequel, Beyond Dark Castle, was Silicon Beach's last game, because productivity software was much more lucrative. Their 1985 release, Airborne!, was the first Macintosh game to feature digitized sound. Silicon Beach Software is credited with coining the term Silicon Beach to refer to San Diego in the same way that Silicon Valley refers to the Santa Clara Valley and San Jose area. Silicon Beach was acquired by Aldus Corporation in 1990 and Aldus, in turn, by Adobe Systems in 1994. Other products Airborne! (1985) combat game. A demo released before the game was called Banzai!. Enchanted Scepters (1985) Point-and-click adventure game made with the engine that later became World Builder. World Builder (1986) graphical adventure game authoring package. Apache Strike (1987) 3D helicopter game. Beyond Dark Castle (1987) Sequel to Dark Castle. Super 3D (1988) 3D modeling application. Silicon Press (1986) card and label printing software Personal Press (1988) easy-to-use desktop publishing software, later renamed Adobe Home Publisher References External links Coverage of MacWorld Boston 1988 including SuperPaint and Digital Darkroom with founder Charlie Jackson from The Computer Chronicles Video review of Beyond Dark Castle and Apache Strike with Silicon Beach development V.P. Eric Zocher from The Computer Chronicles Silicon Beach Software profile at MobyGames Macintosh software companies Defunct video game companies of the United States Dark Castle
4084715
https://en.wikipedia.org/wiki/Andrew%20Diey
Andrew Diey
Andrew William Langmanis Diey (born 31 October 1973, Islington, London, England) is an English electronic musician, sound designer and record producer. As a solo artist, he is best known as Black Faction, or by his previous moniker Foreign Terrain. Diey first came to prominence as a musician on MASK 300, an offshoot label of Manchester's Skam Records, under his Foreign Terrain moniker; he later moved on to the better-known Black Faction name for releases on various labels. His career as a sound designer is also noteworthy: his works have appeared in many high-profile computer games and UK television broadcasts, and also on BBC radio. Diey is the owner and senior creative director at Radium Audio. He has designed the interior sounds of the Bentley Continental GT car, designed the audio for the Ferrari California launch site, and has worked on many high-profile adverts and digital projects. In 2011, Diey announced his return to music with a new project, Dalston Ponys, a live production and DJ set that contains live interactive prototypes created by Radium Audio. History Diey first became involved in electroacoustic music at the age of 18. He attended the Electronic Music Studio in Stockholm, Sweden, in 1993. He moved back from Sweden in 1995 to study for an HND in Music Technology at City College in Manchester, England. In 1997 he attended Salford University, on the Electro Acoustics BEng undergraduate course. In 1998 he attended IRCAM in Paris as a sound designer, working with IRCAM software. Professional work His career within sound can be split into three periods: game audio development, music for broadcast, and sound for advertising. His works have been released on CD to critical acclaim; he received a five-album contract from his first demo tapes with Soleilmoon of Portland, Oregon; and he has created audio content for over 25 game development titles and produced hundreds of hours of broadcast music and sound, including BAFTA-nominated material. His most recent works have been for the National Geographic HD Channel – Inside the Ultimate, producing the music and sound design using surround sound techniques. Diey currently owns and operates the Emmy Award-winning Radium Audio Ltd, a music and sound design company in London. BBC Sound Designer In 2006 Diey won the BBC New Talent Sound Designer competition, which was open to UK residents, and worked in-house as a BBC sound designer. BBC New Talent Winner 2006 - Interview & Article BBC Sound Design Article - Advice for sound designers 2007 Diey has been working with D&AD, and also developing a series of productions for various interactive companies and brand agencies within sound branding and sonification. 2007 Big Chip Award WINNER: Best Micro Business Big Chip Award. On 24 May, Diey and his team at Radium Audio Ltd won at The Big Chip awards. Judged by a distinguished panel of experts from organisations such as New Media Age, the BBC, Trevor Beattie's new agency and The Chase, the Big Chips are the top awards for ICT and new media outside London, attracting entries from the region's best over the last 9 years.
2007 Best of Manchester Music Award WINNER: Best of Manchester Music 2007 URBIS Best of Manchester. Andrew Diey was presented with the award by Manchester graphic designer Peter Saville at the URBIS centre in Manchester. 2007 Roses Design Awards Nomination 2 Categories - Emerging Designer of 2007 & Best Animation [Sound & Music] Roses 2007 Nominations The prestigious design awards took place in Manchester on 18 October 2007. Creating bespoke audio for manufactured products for Bentley Motors Plc, Sony, and Microsoft. 2008 Diey and Radium Audio Ltd are known for audio work for global brands including Ferrari, Honda, Rolex, Universal, Ford, Nissan, Toyota, Philips, LG, and Warner Brothers – the Harry Potter brand; see Radium Audio Ltd. 2006: Bentley Motors Plc Creation of in-car sound design for the Continental GT car range. 2007: Sonic Interaction Design: Nomination as a UK representative to the Management Committee of COST Action IC0601, an EU-funded European network of 18 delegates from 14 countries researching the exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. tags: Interaction Design, Auditory Display and Sonification, Sound and Music Computing, Sound Modeling, Sound Perception and Cognition. 2007: Diey is currently involved with a research project on product sound and its uses beyond interface and usability. His work is directed towards brand and differentiation for online commerce and experience. Television Aug 2005: My Child Won’t Stop Eating ITV: 1 x 60 Min program of original music and sound design. Transmission: Granada / ITV Apr 2005: I Survived- Series 1 Granada / Discovery Channel / C5: 4 x 30 Min programs of original music and sound design. Transmission: Granada / Discovery Channel / C5 Aug 2004: Dirtbusters Granada / ITV: 8 x 30 Min programs of original music and sound design. Transmission: Granada / ITV Jun - Aug 2004: Building the Ultimate- Series 2 Granada / Discovery Channel / C5: 6 x 30 Min programs of original music and sound design. Transmission: Granada / Discovery Channel / C5 2003–2004: Tonight - With Trevor McDonald Granada / ITV / ITN: Creation of Music / Original sound design / Several programs. Transmission: Granada / ITN Nov 2003: Russell Crowe's -"Greatest Fights" Granada / Channel 4: 1 x 60 Min program of original sound design. Transmission: Granada / Channel 4 / Oct – Dec 2003: Building the Ultimate- Series 1 Granada / Discovery Channel / C5: 8 x 30 Min programs of original music and sound design. Transmission: Granada / Discovery Channel / C5 Sep 2003: "Who Got Marc Bolan’s Missing Millions?" Granada / Channel 4: 1 x 60 Min program of original sound design. Transmission: Granada / Channel 4 Aug – Sep 2003: "The History of British Sculpture [with Lloyd Grossman]" Polar Pictures / Channel 5: 6 x 30 Min programs of original music and sound design. Transmission: Channel 5 Jul 2003: "Behind the Scenes [Hulk, Charlie’s Angels, Terminator 3]" Endemol TV/ Channel 5: 3 x 60 Min program of original sound design. Transmission: Endemol / Channel 5 Jul 2003: Risky Business BBC / BBC Three: 3 x 1 hour programs of original music and sound design. Transmission: BBC Three May 2003: "Secrets of the Dark Ages [Barbarians]" Granada Television / Channel 4: 3 x 1 hour programs of original music and sound design. Transmission: Channel 4 Commercials or adverts 2005: [Online & CD Media] Brother Corp Japan DCP Inkjet products. 3 x 6 Mins Creation of Music / Original sound design / for several programs.
Transmission: Online www.brother.com / March 2005 – Ongoing 2005: [Radio] Imperial War Museum Smooth FM Network. 2 x 2 Mins sec original sound design. Transmission: Smooth FM / Sept 2004 – Jan 2005 2004: [Online & CD Media] Brother Corp Japan Multifunction Inkjet products. 5 x 6 Mins Creation of Music / Original sound design / for several programs. Transmission: Online www.brother.com / Sept 2004 – Ongoing 2003: [Radio] Gleeson Homes Jazz FM Network. 1 x 40 sec original sound design. Transmission: Jazz FM / Dec 2003 2003: [Television] Electrolux "Things are changing". Various TV European Networks. 1 x 3 Min original music & sound design. Transmission: Various TV European Networks / Oct 2003 Radio 2006: In-House Sound Designer - BBC Current position of sound design creating and sound treatment for radio drama 2003: Various SFX - Jazz FM Additional Sound Effects / Sound Design Idents. Transmission: Jazz FM 2003: "The Tuner" by Kev Fegan BBC Radio 4. 1 hour of original music and sound design. Transmission: BBC Radio 4 / Jan 2001 2003: "Fireface" by Marius von Mayenburg BBC Radio 3's season of new drama, The Wire. Soundtrack and sound design. Transmission: BBC Radio 3 / Dec 2001 2003: TT [Isle of Man] Racing SFX BBC Radio 4. Additional Sound Effects - Source material supplied by Alchemy Audio Lab. Transmission: BBC Radio 4 Computer games Sound design 2002: Twin Caliber / Rage Software / Format Xbox / PlayStation 2 2002: Gun Metal / Rage Software / Format Xbox 2002: Automobili Lamborghini / Rage Software / Format Xbox / PlayStation 2 2002: Lucky & Wilde / Eutechnyx Software / Format Xbox / PlayStation 2 [Canned by publisher]. 2003: Street Racing Syndicate / Eutechnyx Software / Format Xbox / PlayStation 2 2003: Juiced / Juice Games / Format Xbox / PlayStation 2 / PC 2004: Crash n Burn / Climax Group LTD / Format Xbox / PlayStation 2 / PC 2005: Test Drive Unlimited / Eden Games / Format Xbox / PlayStation 2 / PC 2005: LA Rush / Midway Games / Format Xbox / PlayStation 2 / PC 2005: Evolution GT / Milestone Games Italy / Format Xbox / PlayStation 2 / PC FMV [In-Game Full Motion Video] 2003: Street Racing Syndicate / Eutechnyx Software / Format Xbox / PlayStation 2 2003: Thief III / Eidos Interactive / Format Xbox / PlayStation 2 / PC 2004: The Punisher / Full Sound Design Post Production 5.1 Surround Sound / THQ / Volition Software 2007: Resistance Fall of Man / Sound Design Post Production / Maverick media / Indigo Games / SCEE Sound design and music for shorts 2004: Hells Corner Yorkshire TV. 1 x 60 Min program of original Sound Design & Music. Transmission: ITV / TBA 2003: Domestic BBC Two / Hurricane Films. 1 x 10 Min program of original Sound Design. Transmission: BBC Two / BBC Three / TBA. Credited as: Music by Alchemy Audio Lab 2003: Interloper Feature Short by Interloper Films, directed by Robert Ford, Manchester. Credited as: Music by Alchemy Audio Lab Other credits and appointments 2005: Manchester Metropolitan University & Jodrell Bank Radio Telescope [Cosmic Composer Project] in collaboration with NESTA [National Endowment for Science, Technology & Arts]. Creation sounds gathered from Space for use in schools with Interactive Whiteboards. 2005: Manchester Metropolitan University [Sound Design 2 Picture series II]. 2006: Bentley Motors Plc Creation of in car Sound Design for Continental GT car range. 
2004: Reflex Communications [Clever Molecule project] Original Music and Sound Design – Winner of "Best Digital Media Project" – Audio Visual Awards 2004: Manchester Metropolitan University [Sound Design 2 Picture series I] 2004: Co-Ordination - International Festival of Electronic Music and Audiovisual Arts Working with Native Instruments, Propellerheads, TC Electronic, Korg, M-Audio, Ableton. Manchester, United Kingdom. [27 April to 8 May 2004] 2004: Showreel - [Quarterly Magazine for Film & TV professionals] 3 Page Article: An insiders guide to Sound Design for Games & Film. Showreel: Vol 1 :Issue: 03 [Jan 2004] 2004: Manchester Metropolitan University- [Sound Design | Sonic Arts Course]. Appointed Lecturer: Sound Design | Creative Audio | Sonic Art [Dec 2003]. 2003: MacUser - [Bi- Monthly Magazine]. 3 Page Audio: Workshops on MetaSynth - A Macintosh Audio Application. MacUser: Vol 19 :Issue: 24 [Nov 2003] - & MacUser CD Examples 2003: The Cornerhouse Manchester Andrew Diey appointed Lecturer on Sound Design for films - teach discuss and build workshops on sound design. 2003: The Game Plan - EU Andrew Diey appointed Lecturer on Sound Design for Computer Games, Part of the Game on Expo 2003. Europe wide. External links Radium Audio Ltd - Radium Audio Ltd Full Discography - Full Discography at Kompaktliste in Germany Polish Record Label Vivo - Vivo Records Poland Andrew Diey article - on TC Electronic's Powercore Diey recording Jump Jet Harriers- on TC Electronic's Powercore BBC Experimental Music Pages BBC Review of Black Faction Creative Match Article - Andrew Diey & Sound Effects Sound Design Article - Andrew Diey - Young Sound Designers Creative Review - Designing a Sonic Experience for Bentley Motors Richard Wand Blog - Article on Sound Branding within Gestural Surfaces English electronic musicians 1973 births Living people People from Islington (district) Alumni of the University of Salford
40629989
https://en.wikipedia.org/wiki/Fire%20OS
Fire OS
Fire OS is a mobile operating system based on the Android Open Source Project and created by Amazon for its Fire tablets, Echo smart speakers and Fire TV devices. It includes proprietary software, a customized user interface primarily centered on content consumption, and heavy ties to content available from Amazon's own storefronts and services. Apps for Fire OS are provided through the Amazon Appstore. History Amazon only began referring to the Android derivative as Fire OS with its third iteration of Fire tablets. Unlike previous Fire models, whose operating system is listed as being "based on" Android, the "Fire OS 3.0" operating system is listed as being "compatible with" Android. Fire OS 5, which is based on Android 5.1 "Lollipop", added an updated interface. The home screen features a traditional application grid and pages for content types as opposed to the previous carousel interface. It also introduced "On Deck", a function which automatically moves content out of offline storage to maintain storage space for new content, the speed-reading tool "Word Runner", and screen color filters. Parental controls were enhanced with a new web browser for FreeTime mode featuring a curated selection of content appropriate for children, as well as "Activity Center" for monitoring usage by children. Fire OS 5 removed support for device encryption; an Amazon spokesperson stated that encryption was an enterprise-oriented feature that was underused. In March 2016, after the removal was publicized and criticized in the wake of the FBI–Apple encryption dispute, Amazon announced that it would be restoring the feature in a future patch. Fire OS 6, which is based on Android 7.1.2 "Nougat", introduced adoptable storage, which allows users to format and use their SD card as internal storage, and Doze/App Standby, which aims to improve battery life by forcing the device to sleep when the user is not actively using it; this adds restrictions on apps that want to do background processing and polling. MediaTek Exploits (2019) In early 2019, exploits were discovered for six Fire tablet models and one Fire TV model that allowed temporary root access, permanent root access, and bootloader unlocking; these exploits were made possible by security vulnerabilities in multiple MediaTek chipsets. Fire OS 7, which is based on Android 9.0 "Pie", was released in 2019 and is available for all 8th-generation and later Fire tablets. Further History In February 2022, Amazon announced that the Docs app would be replaced by document creation functionality in the Files app in May 2022. Also in February 2022, Amazon introduced an improved home editing system. Features Fire OS uses a customized user interface designed to prominently promote content available through Amazon services, such as Amazon Appstore, Prime Video, Amazon Music & Audible, and Kindle Store. Its home screen features a carousel of recently accessed content and apps, with a "favorites shelf" of pinned apps directly below it. Sections are provided for different types of content, such as apps, games, music, audiobooks, and video among others. A search function allows users to search through their local content library or Amazon's stores. Similarly to Android, sliding from the top of the screen exposes quick settings and notifications. Fire OS also provides integration with Goodreads, Facebook, and Twitter. X-Ray is also integrated into its playback functions, allowing users to access supplemental information on what they are currently viewing.
The OS features a user system, along with Kindle FreeTime, a suite of parental controls which allow parents to set time limits for using certain types of content. Another feature is Amazon GameCircle, which is a retired online multiplayer social gaming network released by Amazon. It allowed players to track their achievements and compare their high scores on a leader board. It debuted in July 2012 and was retired September 5, 2018. Amazon's ecosystem Fire OS devices comes with Amazon's software and content ecosystems such as Here WeGo with a clone of Google Maps API 1.0. Amazon cannot use the Android trademarks to market the devices. Apps for Fire OS are provided through the Amazon Appstore. Fire OS devices do not have Google mobile services, including the Google Play Store or proprietary APIs, such as Google Maps or Google Cloud Messaging. Google Play Store can be installed, and third-party apps can still be sideloaded via APK files, although full compatibility is not guaranteed if the app depends on Google services. Members of the Open Handset Alliance (which include the majority of Android OEMs) are contractually forbidden to produce Android devices based on forks of the OS; therefore, Fire tablets are manufactured by Quanta Computer, which is not an OHA member. List of Fire OS versions The releases are categorized by major Fire OS versions based upon a certain Android codebase first and then sorted chronologically. Fire OS 1 – based on Android 2.3 Gingerbread system version = 6.3.1 system version = 6.3.2 – longer movie rentals, Amazon cloud synchronization system version = 6.3.4 – latest version for Kindle Fire (1st Generation) (2011) Fire OS 2 – based on Android 4.0 Ice Cream Sandwich system version = 7.5.1 – latest version for Kindle Fire HD (2nd Generation) (7" 2012) system version = 8.5.1 – latest version for Kindle Fire HD 8.9" (2nd Generation) (2012) system version = 10.5.1 – latest version for Kindle Fire (2nd Generation) (2012) Fire OS 2.4 – based on Android 4.0.3(?) Fire OS 3 Mojito – based on Android 4.1 Jelly Bean 3.1 3.2.8 – rollback point for Kindle Fire HDX (2013) 3.5.0 – introduces support for Fire Phone; Android 4.2.2 codebase 3.5.1 – Fire Phone maintenance version Fire OS 4 Sangria – based on Android 4.4 KitKat 4.1.1 4.5.5.1 4.5.5.2 4.5.5.3 – latest version for some tablets released in 2013, Kindle Fire HDX (3rd Generation), Kindle Fire HDX 8.9" (3rd Generation), Kindle Fire HD (3rd Generation) 4.5.5.5 – latest version for some tablets released in 2013 (e.g. some Kindle Fire tablets of 3rd Generation) 4.6.6.0 – Fire Phone 4.6.6.1 – latest version for the Fire Phone 4.7.8.4 – Last version for the fire phone (2019) 4.8.2.9 – Last version for the fire phone (2019) Fire OS 5 Bellini – based on Android 5.1.1 Lollipop 5.0 5.0.5.1 – introduction of Fire TV 5.0.1 5.1.1 5.1.2 5.1.2.1 5.1.4 5.2.1.0 – Fire TV devices 5.2.1.1 5.2.1.2 5.2.4.0 5.2.6.0 5.2.6.1 5.2.6.2 5.2.8.4 5.3.1.0 5.3.1.1 – August 2016 5.3.2.0 – November 2016 5.3.2.1 – December 2016 5.3.3.0 – March 2017 5.3.6.4 – version for Fire HD 8 (6th Generation) 5.3.6.8 5.3.7.0 5.3.7.1 5.3.7.2 – for Fire HD 8 & Fire HD 10 (7th Generation) 5.4.0.0 – June 2017 5.4.0.1 – August 2017 5.5.0.0 – November 2017: Only for Fire HD 10 (2017) with hands-free Alexa 5.6.0.0 – November 2017 5.6.0.1 – January 2018 5.6.1.0 – March 2018: version for tablets released in 2014 (e.g. 
some Fire tablets of 4th Generation) 5.6.2.0 – July 2018: Hands-Free Alexa For Fire 7 & HD 8 (2017) only 5.6.2.3 – April 2018: Latest version for first and second generation Fire TV devices 5.6.3.0 – November 2018: for Fire 7 (5th to 7th Generation); Due to a mistake, this version was accidentally released as 5.3.6.4 on some Fire tablets instead of 5.6.3.0, but includes the same update features. 5.6.3.8 – April 2019 5.6.4.0 – May 2019, September 2019: for Fire HD 8 5.6.6.0 – May 2020 5.6.7.0 – August 2020 5.6.8.0 – November 2020: Latest version for Fire (5th Generation), Fire HD 6 (4th Generation), Fire HD 7 (4th Generation), Fire HD 8 (5th and 6th Generation), Fire HDX 8.9 (4th Generation), and Fire HD 10 (5th Generation) 5.6.9.0 – December 2020: Latest version for Fire (7th Generation), Fire HD 8 (7th Generation), and Fire HD 10 (7th Generation) 5.8.6.8 – July 2019 5.8.7.9 – August 2019 5.7.8.2 – September 2019 Fire OS 6 – based on Android 7.1.2 Nougat 6.2.1.0 – October 2017, released on third generation Fire TV 6.2.1.2 – December 2017 6.2.1.3 – May 2018 6.3.0.1 – November 2018 6.3.1.2 – July 2019: version for Fire 7 (9th Generation) 6.3.1.3 (information needed) 6.3.1.4 (information needed) 6.3.1.5 – September 2019: last version of FireOS 6 for Fire HD 8 (8th generation) 6.5.3.4 – September 2019: Last version for Fire 7 (7th generation) 6.5.3.5 – November 2019 Fire OS 7 – based on Android 9.0 Pie 7.3.1.0 – October 2019: First version for Fire HD 10 (9th Generation) 7.3.1.1 – October 2019: Second version for Fire HD 10 (9th Generation) 7.3.1.2 – February 2020: Third version for Fire HD 10 (9th Generation) 7.3.1.3 – April 2020: Fourth version for Fire HD 10 (9th Generation) 7.3.1.4 – June 2020: Fifth version for Fire HD 10 (9th Generation) 7.3.1.5 – August 2020: First version of FireOS 7 for Fire HD 8 (8th Generation) 7.3.1.6 – October 2020 7.3.1.7 – November 2020 7.3.1.8 – February 2021 7.3.1.9 – May 2021 7.3.2.1 – September 2021 7.3.2.2 – November 2021: Latest version for 8th Generation, 9th Generation, 10th Generation and 11th Generation devices List of Fire OS devices Fire Tablets Fire TV Fire Phone Amazon Echo Amazon Echo Show See also Nokia X software platform, a similar fork by Nokia Tizen, a Linux-based OS by Samsung Electronics with an optional Android runtime Sailfish OS, a Linux-based mobile OS by Jolla which includes an Android runtime BlackBerry 10, a QNX-based mobile OS by BlackBerry which includes an Android runtime and comes with the Amazon Appstore preloaded Comparison of mobile operating systems References External links Amazon (company) Android (operating system) Android forks ARM operating systems Mobile operating systems Tablet operating systems
25190769
https://en.wikipedia.org/wiki/Open-source-software%20movement
Open-source-software movement
The open-source-software movement is a movement that supports the use of open-source licenses for some or all software, a part of the broader notion of open collaboration. The open-source movement was started to spread the concept/idea of open-source software. Programmers who support the open-source-movement philosophy contribute to the open-source community by voluntarily writing and exchanging programming code for software development. The term "open source" requires that no one can discriminate against a group in not sharing the edited code or hinder others from editing their already-edited work. This approach to software development allows anyone to obtain and modify open-source code. These modifications are distributed back to the developers within the open-source community of people who are working with the software. In this way, the identities of all individuals participating in code modification are disclosed and the transformation of the code is documented over time. This method makes it difficult to establish ownership of a particular bit of code but is in keeping with the open-source-movement philosophy. These goals promote the production of high-quality programs as well as working cooperatively with other similarly-minded people to improve open-source technology. Brief history The label "open source" was created and adopted by a group of people in the free-software movement at a strategy session held at Palo Alto, California, in reaction to Netscape's January 1998 announcement of a source-code release for Navigator. One of the reasons behind using the term was that "the advantage of using the term open source is that the business world usually tries to keep free technologies from being installed." Those people who adopted the term used the opportunity before the release of Navigator's source code to free themselves of the ideological and confrontational connotations of the term "free software". Later in February 1998, Bruce Perens and Eric S. Raymond founded an organization called Open Source Initiative (OSI) "as an educational, advocacy, and stewardship organization at a cusp moment in the history of that culture." Evolution In the beginning, a difference between hardware and software did not exist. The user and programmer of a computer were one and the same. When the first commercial electronic computer was introduced by IBM in 1952, the machine was hard to maintain and expensive. Putting the price of the machine aside, it was the software that caused the problem when owning one of these computers. Then in 1952, a collaboration of all the owners of the computer got together and created a set of tools. The collaboration of people were in a group called PACT (The Project for the Advancement of Coding techniques). After passing this hurdle, in 1956, the Eisenhower administration decided to put restrictions on the types of sales AT&T could make. This did not stop the inventors from developing new ideas of how to bring the computer to the mass population. The next step was making the computer more affordable which slowly developed through different companies. Then they had to develop software that would host multiple users. MIT computation center developed one of the first systems, CTSS (Compatible Time-Sharing System). This laid the foundation for many more systems, and what we now call the open-source software movement. The open-source movement is branched from the free-software movement which began in the late 80s with the launching of the GNU project by Richard Stallman. 
Stallman is regarded within the open-source community as sharing a key role in the conceptualization of freely-shared source code for software development. The term "free software" in the free software movement is meant to imply freedom of software exchange and modification. The term does not refer to any monetary freedom. Both the free-software movement and the open-source movement share this view of free exchange of programming code, and this is often why both of the movements are sometimes referenced in literature as part of the FOSS or "Free and Open Software" or FLOSS "Free/Libre Open-Source" communities. These movements share fundamental differences in the view on open software. The main, factionalizing difference between the groups is the relationship between open-source and proprietary software. Often, makers of proprietary software, such as Microsoft, may make efforts to support open-source software to remain competitive. Members of the open-source community are willing to coexist with the makers of proprietary software and feel that the issue of whether software is open source is a matter of practicality. In contrast, members of the free-software community maintain the vision that all software is a part of freedom of speech and that proprietary software is unethical and unjust. The free-software movement openly champions this belief through talks that denounce proprietary software. As a whole, the community refuses to support proprietary software. Further there are external motivations for these developers. One motivation is that, when a programmer fixes a bug or makes a program it benefits others in an open-source environment. Another motivation is that a programmer can work on multiple projects that they find interesting and enjoyable. Programming in the open-source world can also lead to commercial job offers or entrance into the venture capital community. These are just a few reasons why open-source programmers continue to create and advance software. While cognizant of the fact that both the free-software movement and the open-source movement share similarities in practical recommendations regarding open source, the free-software movement fervently continues to distinguish themselves from the open-source movement entirely. The free-software movement maintains that it has fundamentally different attitudes towards the relationship between open-source and proprietary software. The free-software community does not view the open-source community as their target grievance, however. Their target grievance is proprietary software itself. Legal issues The open-source movement has faced a number of legal challenges. Companies that manage open-source products have some difficulty securing their trademarks. For example, the scope of "implied license" conjecture remains unclear and can compromise an enterprise's ability to patent productions made with open-source software. Another example is the case of companies offering add-ons for purchase; licensees who make additions to the open-source code that are similar to those for purchase may have immunity from patent suits. In the court case "Jacobsen v. Katzer", the plaintiff sued the defendant for failing to put the required attribution notices in his modified version of the software, thereby violating license. The defendant claimed Artistic License in not adhering to the conditions of the software's use, but the wording of the attribution notice decided that this was not the case. 
"Jacobsen v Katzer" established open-source software's equality to proprietary software in the eyes of the law. In a court case accusing Microsoft of being a monopoly, Linux and open-source software was introduced in court to prove that Microsoft had valid competitors and was grouped in with Apple. There are resources available for those involved open-source projects in need of legal advice. The Software Freedom Law Center features a primer on open-source legal issues. International Free and Open Source Software Law Review offers peer-reviewed information for lawyers on free-software issues. Formalization The Open Source Initiative (OSI) was instrumental in the formalization of the open-source movement. The OSI was founded by Eric Raymond and Bruce Perens in February 1998 with the purpose of providing general education and advocacy of the open-source label through the creation of the Open Source Definition that was based on the Debian Free Software Guidelines. The OSI has become one of the main supporters and advocators of the open-source movement. In February 1998, the open-source movement was adopted, formalized, and spearheaded by the Open Source Initiative (OSI), an organization formed to market software "as something more amenable to commercial business use" The OSI applied to register "Open Source" with the US Patent and Trademark Office, but was denied due to the term being generic and/or descriptive. Consequently, the OSI does not own the trademark "Open Source" in a national or international sense, although it does assert common-law trademark rights in the term. The main tool they adopted for this was The Open Source Definition. The open-source label was conceived at a strategy session that was held on February 3, 1998 in Palo Alto, California and on April 8 of the same year, the attendees of Tim O’Reilly's Free Software Summit voted to promote the use of the term "open source". Overall, the software developments that have come out of the open-source movement have not been unique to the computer-science field, but they have been successful in developing alternatives to propriety software. Members of the open-source community improve upon code and write programs that can rival much of the propriety software that is already available. The rhetorical discourse used in open-source movements is now being broadened to include a larger group of non-expert users as well as advocacy organizations. Several organized groups such as the Creative Commons and global development agencies have also adopted the open-source concepts according to their own aims and for their own purposes. The factors affecting the open-source movement's legal formalization are primarily based on recent political discussion over copyright, appropriation, and intellectual property. Social structure of open source contribution teams Historically, researchers have characterized open source contributors as a centralized, onion-shaped group. The center of the onion consists of the core contributors who drive the project forward through large amounts of code and software design choices. The second-most layer are contributors who respond to pull requests and bug reports. The third-most layer out are contributors who mainly submit bug reports. The farthest out layer are those who watch the repository and users of the software that's generated. 
This model has been used in research to understand the lifecycle of open source software, understand contributors to open source software projects, how tools such as can help contributors at the various levels of involvement in the project, and further understand how the distributed nature of open source software may affect the productivity of developers. Some researchers have disagreed with this model. Crowston et al.'s work has found that some teams are much less centralized and follow a more distributed workflow pattern. The authors report that there's a weak correlation between project size and centralization, with smaller projects being more centralized and larger projects showing less centralization. However, the authors only looked at bug reporting and fixing, so it remains unclear whether this pattern is only associated with bug finding and fixing or if centralization does become more distributed with size for every aspect of the open source paradigm. An understanding of a team's centralization versus distributed nature is important as it may inform tool design and aid new developers in understanding a team's dynamic. One concern with open source development is the high turnover rate of developers, even among core contributors (those at the center of the "onion"). In order to continue an open source project, new developers must continually join but must also have the necessary skill-set to contribute quality code to the project. Through a study of GitHub contribution on open source projects, Middleton et al. found that the largest predictor of contributors becoming full-fledged members of an open source team (moving to the "core" of the "onion") was whether they submitted and commented on pull requests. The authors then suggest that GitHub, as a tool, can aid in this process by supporting "checkbox" features on a team's open source project that urge contributors to take part in these activities. Motivations of programmers With the growth and attention on the open-source movement, the reasons and motivations of programmers for creating code for free has been under investigation. In a paper from the 15th Annual Congress of the European Economic Association on the open-source movement, the incentives of programmers on an individual level as well as on a company or network level were analyzed. What is essentially the intellectual gift giving of talented programmers challenges the "self-interested-economic-agent paradigm", and has made both the public and economists search for an understanding of what the benefits are for programmers. Altruism: The argument for altruism is limited as an explanation because though some exists, the programmers do not focus their kindness on more charitable causes. If the generosity of working for free was a viable motivation for such a prevalent movement, it is curious why such a trend has not been seen in industries such as biotechnology that would have a much bigger impact on the public good. Community sharing and improvement: The online community is an environment that promotes continual improvements, modifications, and contributions to each other's work. A programmer can easily benefit from open-source software because by making it public, other testers and subprograms can remove bugs, tailor code to other purposes, and find problems. This kind of peer-editing feature of open-source software promotes better programs and a higher standard of code. 
Recognition: Though a project may not be associated with a specific individual, the contributors are often recognized and marked on a project's server or awarded social reputation. This allows for programmers to receive public recognition for their skills, promoting career opportunities and exposure. In fact, the founders of Sun Microsystems and Netscape began as open-source programmers. Ego: "If they are somehow assigned to a trivial problem and that is their only possible task, they may spend six months coming up with a bewildering architecture...merely to show their friends and colleagues what a tough nut they are trying to crack." Ego-gratification has been cited as a relevant motivation of programmers because of their competitive community. An OSS (open-source software) community has no clear distinction between developers and users, because all users are potential developers. There is a large community of programmers trying to essentially outshine or impress their colleagues. They enjoy having other programmers admire their works and accomplishments, contributing to why OSS projects have a recruiting advantage for unknown talent than a closed-source company. Creative expression: Personal satisfaction also comes from the act of writing software as an equivalent to creative self-expression – it is almost equivalent to creating a work of art. The rediscovery of creativity, which has been lost through the mass production of commercial software products can be a relevant motivation. Gender diversity of programmers The vast majority of programmers in open-source communities are male. In a study for the European Union on free and open-source software communities, researchers found that only 1.5% of all contributors are female. Although women are generally underrepresented in computing, the percentage of women in tech professions is actually much higher, close to 25%. This discrepancy suggests that female programmers are overall less likely than male programmers to participate in open-source projects. Some research and interviews with members of open-source projects have described a male-dominated culture within open-source communities that can be unwelcoming or hostile towards females. There are initiatives such as Outreachy that aim to support more women and other underrepresented gender identities to participate in open-source software. However, within the discussion forums of open-source projects the topic of gender diversity can be highly controversial and even inflammatory. A central vision in open-source software is that because the software is built and maintained on the merit of individual code contributions, open-source communities should act as a meritocracy. In a meritocracy, the importance of an individual in the community depends on the quality of their individual contributions and not demographic factors such as age, race, religion, or gender. Thus proposing changes to the community based on gender, for example, to make the community more inviting towards females, go against the ideal of a meritocracy by targeting certain programmers by gender and not based on their skill alone. There is evidence that gender does impact a programmer's perceived merit in the community. A 2016 study identified the gender of over one million programmers on GitHub, by linking the programmer's GitHub account to their other social media accounts. 
Between male and female programmers, the researchers found that female programmers were actually more likely to have their pull requests accepted into the project than male programmers, however only when the female had a gender-neutral profile. When females had profiles with a name or image that identified them as female, they were less likely than male programmers to have their pull requests accepted. Another study in 2015 found that of open-source projects on GitHub, gender diversity was a significant positive predictor of a team's productivity, meaning that open-source teams with a more even mix of different genders tended to be more highly productive. Many projects have adopted the Contributor Covenant code of conduct in an attempt to address concerns of harassment of minority developers. Anyone found breaking the code of conduct can be disciplined and ultimately removed from the project. In order to avoid offense to minorities many software projects have started to mandate the use of inclusive language and terminology. Evidence of open-source adoption Libraries are using open-source software to develop information as well as library services. The purpose of open source is to provide a software that is cheaper, reliable and has better quality. The one feature that makes this software so sought after is that it is free. Libraries in particular benefit from this movement because of the resources it provides. They also promote the same ideas of learning and understanding new information through the resources of other people. Open source allows a sense of community. It is an invitation for anyone to provide information about various topics. The open-source tools even allow libraries to create web-based catalogs. According to the IT source there are various library programs that benefit from this. Government agencies and infrastructure software — Government Agencies are utilizing open-source infrastructure software, like the Linux operating system and the Apache Web-server into software, to manage information. In 2005, a new government lobby was launched under the name National Center for Open Source Policy and Research (NCOSPR) "a non-profit organization promoting the use of open source software solutions within government IT enterprises." Open-source movement in the military — Open-source movement has potential to help in the military. The open-source software allows anyone to make changes that will improve it. This is a form of invitation for people to put their minds together to grow a software in a cost efficient manner. The reason the military is so interested is because it is possible that this software can increase speed and flexibility. Although there are security setbacks to this idea due to the fact that anyone has access to change the software, the advantages can outweigh the disadvantages. The fact that the open-source programs can be modified quickly is crucial. A support group was formed to test these theories. The Military Open Source Software Working Group was organized in 2009 and held over 120 military members. Their purpose was to bring together software developers and contractors from the military to discover new ideas for reuse and collaboration. Overall, open-source software in the military is an intriguing idea that has potential drawbacks but they are not enough to offset the advantages. Open source in education — Colleges and organizations use software predominantly online to educate their students. 
Open-source technology is being adopted by many institutions because it can save these institutions from paying companies to provide them with these administrative software systems. One of the first major colleges to adopt an open-source system was Colorado State University in 2009 with many others following after that. Colorado State Universities system was produced by the Kuali Foundation who has become a major player in open-source administrative systems. The Kuali Foundation defines itself as a group of organizations that aims to "build and sustain open-source software for higher education, by higher education." There are many other examples of open-source instruments being used in education other than the Kuali Foundation as well. "For educators, The Open Source Movement allowed access to software that could be used in teaching students how to apply the theories they were learning". With open networks and software, teachers are able to share lessons, lectures, and other course materials within a community. OpenTechComm is a program that is dedicated to "open access, open use, and open edits- text book or pedagogical resource that teachers of technical and professional communication courses at every level can rely on to craft free offerings to their students." As stated earlier, access to programs like this would be much more cost efficient for educational departments. Open source in healthcare — Created in June 2009 by the nonprofit eHealthNigeria, the open-source software OpenMRS is used to document health care in Nigeria. The use of this software began in Kaduna, Nigeria to serve the purpose of public health. OpenMRS manages features such as alerting health care workers when patients show warning signs for conditions and records births and deaths daily, among other features. The success of this software is caused by its ease of use for those first being introduced to the technology, compared to more complex proprietary healthcare software available in first world countries. This software is community-developed and can be used freely by anyone, characteristic of open-source applications. So far, OpenMRS is being used in Rwanda, Mozambique, Haiti, India, China, and the Philippines. The impact of open source in healthcare is also observed by Apelon Inc, the "leading provider of terminology and data interoperability solutions". Recently, its Distributed Terminology System (Open DTS) began supporting the open-source MySQL database system. This essentially allows for open-source software to be used in healthcare, lessening the dependence on expensive proprietary healthcare software. Due to open-source software, the healthcare industry has available a free open-source solution to implement healthcare standards. Not only does open source benefit healthcare economically, but the lesser dependence on proprietary software allows for easier integration of various systems, regardless of the developer. Companies IBM IBM has been a leading proponent of the Open Source Initiative, and began supporting Linux in 1998. Microsoft Before summer of 2008, Microsoft has generally been known as an enemy of the open-source community. The company's anti-open-source sentiment was enforced by former CEO Steve Ballmer, who referred to Linux, a widely used open-source software, as a "cancer that attaches itself ... to everything it touches." Microsoft also threatened Linux that they would charge royalties for violating 235 of their patents. 
In 2004, Microsoft lost a European Union court case, lost the appeal in 2007, and lost a further appeal in 2012, having been found to have abused its dominant position. Specifically, it had withheld interoperability information from the open-source Samba (software) project, which runs on many platforms and aims at "removing barriers to interoperability". In 2008, however, Sam Ramji, then head of open-source software strategy at Microsoft, began working closely with Bill Gates to develop a pro-open-source attitude within the software industry as well as within Microsoft itself. Before leaving the company in 2009, Ramji built up Microsoft's familiarity and involvement with open source, which is evident in Microsoft's contributions of open-source code to Microsoft Azure among other projects; such contributions would previously have been unimaginable from Microsoft. Microsoft's change in attitude toward open source and its efforts to build a stronger open-source community are evidence of the growing adoption and adaptation of open source. See also Community source Digital rights Diversity in open-source software List of free and open-source software packages List of open-source hardware projects Mass collaboration Open-design movement Open-source model Open-source appropriate technology Open-source hardware Open-source governance Sharing economy P2P economic system Peer production The Virtual Revolution References Further reading The Online Revolution, ssy.org.uk/2012/01/the-online-revolution/ (archived at https://web.archive.org/web/20130718231856/http://ssy.org.uk/2012/01/the-online-revolution/)
8551067
https://en.wikipedia.org/wiki/Lavastorm%20Analytics
Lavastorm Analytics
Lavastorm is a global analytics software company based in Massachusetts. The company's products are most often used by business analysts looking to take on more responsibility for data preparation and to build advanced analytics, or by IT groups looking for more agile ways to provision governed data to business analysts. History The company was founded as JLM Technologies in 1993 by Justin and LeAnn Lindsey with a group of engineers from the Massachusetts Institute of Technology. Early locations in Massachusetts included Cambridge, Allston, and Waltham. The name was changed to Lavastorm in May 1999. The company's initial focus was developing high-performance Internet systems and applications, especially web sites. High-profile successes included Monster.com, the employment website; FamilySearch, the free genealogy website sponsored by The Church of Jesus Christ of Latter-day Saints; and EdgarWatch, the first Web-based system delivering SEC filing documents in real time. V. Miller Newton was the CEO of Monster.com when it hired Lavastorm to redesign its web site and infrastructure; in 1999, he moved over to become CEO of Lavastorm, a role he held until 2003. Lindsey stayed on as Chief Solutions Officer. In September 1999 and June 2000, Lavastorm raised US$55 million in venture capital funding from partnerships including Hummer Winblad Venture Partners, Oak Investment Partners, Lehman Brothers, Reuters Venture Capital, and Intel. In 18 months, the company expanded from 20 employees to over 200, opening a Silicon Valley office in San Jose, California, and acquiring PixelDance, a web design company in Watertown, Massachusetts. In the latter half of 2000 and 2001, as the dot-com bubble burst, Lavastorm reinvented itself. It laid off employees, and split off its Internet engineering services in San Jose, California, selling them to management. The Massachusetts operation, now only 20 employees, focused on making software for telecommunications revenue assurance after doing a project with Verizon Communications. In 2001 Lavastorm introduced the Revenue Assurance and Intercarrier Cost Management products. Drew Rockwell, a former executive at Verizon, was hired in 2002, and Newton left in 2003. Lindsey went to work for Hewlett-Packard, and later became the Chief Technology Officer for the Federal Bureau of Investigation and United States Department of Justice. In July 2005, Lavastorm was bought by Martin Dawes Systems, a United Kingdom-based company specializing in billing and customer relationship management (CRM) software for the communications industry. Combined company annual revenues were expected to be US$35 million. In February 2006, Lavastorm merged with Visual Wireless, a Sweden-based revenue assurance and fraud detection software company. The combined customer list included BellSouth, Comcast, TeliaSonera, Telstra and Vodafone. Drew Rockwell, Lavastorm CEO, continued as head of the merged company. Lavastorm kept its name, but also became the Martin Dawes Systems revenue assurance and fraud management division. On December 22, 2011, Lavastorm was de-merged from Martin Dawes Systems and re-launched as a data analytics company. In March 2018, Lavastorm was acquired by Infogix, Inc., and the product offering was rebranded as 'Data3Sixty'. Lavastorm Engineering The Silicon Valley spin-off called itself Lavastorm Engineering, and was one of the first companies producing mobile games. 
Paul Abbassi was CEO and CTO, Jason Loia was Director of Wireless Entertainment, and Albert So was the Chief Mobile Code Monkey (programmer). It produced over 30 games for mobile phones, using BREW and Java ME, mostly based on licenses from other companies, such as the movies Van Helsing and The Incredibles, and Capcom's classic video game Mega Man. The company dissolved in 2005, with many developers moving on to found Punch Entertainment, Inc. References External links "Lavastorm Technologies, Inc." company information from Hoover's Software companies based in Massachusetts Companies based in Boston Software companies established in 1993 Software companies of the United States 1993 establishments in Massachusetts 1993 establishments in the United States Companies established in 1993 Business intelligence companies Business intelligence Business analysis Big data companies Data visualization software
1161828
https://en.wikipedia.org/wiki/Nokia%209000%20Communicator
Nokia 9000 Communicator
The Nokia 9000 Communicator was the first product in Nokia's Communicator series, announced at CeBIT 1996 and introduced into the market on 15 August 1996. The phone was large and heavy, but powerful for its time. It is powered by an Intel 24 MHz i386 CPU and has 8 MB of memory, which is divided between applications (4 MB), program memory (2 MB) and user data (2 MB). The operating system is PEN/GEOS 3.0. The Communicator is one of the earliest smartphones on the market, after the IBM Simon in 1994 and the HP OmniGo 700LX, a DOS-based palmtop PC with an integrated cradle for the Nokia 2110 cellular phone, announced in late 1995 and shipped in March 1996. The Communicator was highly advanced, able to send and receive e-mail and faxes via its 9.6 kbit/s GSM modem, and it also had a web browser and business programs. It has a clamshell design that opens to reveal a monochrome LCD with 640 × 200 resolution and a full QWERTY keyboard, similar to a Psion PDA. It was priced at £1,000 in the UK upon launch. Then-CEO of Nokia, Jorma Ollila, said in 2012 regarding the device: "We were five years ahead." 9110 The Nokia 9110 Communicator is the updated model of the Nokia 9000 Communicator in the Communicator series. Its biggest change from the 9000 is that it weighs much less. Specifications Operating system: GEOS (running on top of ROM-DOS) on the PDA side Main applications: Fax, short messages, email, Wireless imaging: digital camera connectivity, Smart messaging, TextWeb, Web browser, Serial Terminal, Telnet, Contacts, Notes, Calendar, Calculator, world time clock, Composer. Display: 640 × 200 pixels Size: 158 mm × 56 mm × 27 mm Weight: 253 g Processor: Embedded AMD Elan SC450 486 processor at 33 MHz Memory: 8 MB total, 4 MB Operating System and applications, 2 MB program execution, 2 MB user data storage, MMC card. Successors The product line was succeeded in 2000 by the Nokia 9210 Communicator, which introduced a wide TFT colour internal screen, a 32-bit ARM9-based RISC CPU at 52 MHz, 16 MB of internal memory and enhanced web capabilities, and, most importantly, saw the operating system change to Symbian. The 9210i, launched in 2002, increased the internal memory to 40 MB and added video streaming and Flash 5 support to the web browser. The 9xxx Communicators introduced features which later evolved into those found in smartphones. Awards The Nokia 9000 Communicator received several awards including: GSM World Award (for innovation) at GSM World Conference 1997 Best Technological Advance by Mobile News UK Best New Product 1997 by Business Week magazine In popular culture The Nokia 9000 was used by Val Kilmer as Simon Templar in the 1997 remake of The Saint, and by Anthony Hopkins and Chris Rock in the action comedy Bad Company. The phone is also mentioned in Bret Easton Ellis' book Glamorama. References External links More info 9000 Products introduced in 1996 Mobile phones with an integrated hardware keyboard Mobile phones with infrared transmitter Flip phones
25709826
https://en.wikipedia.org/wiki/SitNGo%20Wizard
SitNGo Wizard
SitNGo Wizard (sometimes referred to as SNG Wizard) is a poker tool that aids online poker players in determining their optimal betting actions during the late stages of sit-and-go poker tournaments. Details The following PokerStars hand history is an example of the kind of game record the software works with: PokerStars Game #37838600446: Tournament #229947010, $6.00+$0.50 USD Hold'em No Limit - Level VII (100/200) - 2010/01/08 10:55:44 ET Table '229947010 1' 9-max Seat #7 is the button Seat 2: Serge Claws (7400 in chips) Seat 4: ElT007 (1345 in chips) Seat 7: Jonesbravo (8518 in chips) Seat 8: MR. DAO (6297 in chips) Seat 9: SteveAHigh (3440 in chips) Serge Claws: posts the ante 25 ElT007: posts the ante 25 Jonesbravo: posts the ante 25 MR. DAO: posts the ante 25 SteveAHigh: posts the ante 25 MR. DAO: posts small blind 100 SteveAHigh: posts big blind 200 *** HOLE CARDS *** Dealt to ElT007 [9d 6d] Serge Claws: folds ElT007: folds Jonesbravo: folds MR. DAO: calls 100 SteveAHigh: checks *** FLOP *** [2d 4d Ad] MR. DAO: bets 200 SteveAHigh: folds Uncalled bet (200) returned to MR. DAO MR. DAO collected 525 from pot *** SUMMARY *** Total pot 525 | Rake 0 Board [2d 4d Ad] Seat 2: Serge Claws folded before Flop (didn't bet) Seat 4: ElT007 folded before Flop (didn't bet) Seat 7: Jonesbravo (button) folded before Flop (didn't bet) Seat 8: MR. DAO (small blind) collected (525) Seat 9: SteveAHigh (big blind) folded on the Flop The software enables players to load their hand histories so that they can get computerized feedback regarding their choices. It accepts both manual and downloaded entry of tournament situations for analysis. The software is not intended to be used in-game, and many of its features become inoperable while an online poker client is active; for example, PokerStars lists it among its prohibited third-party tools. The software is based on the Independent Chip Model (ICM) and is in the class of software described as Automated Independent Chip Model (AICM). The program also uses a Future Game Simulation model. In addition to the user's hole cards and table position, the software uses the number of opponents, stack sizes, and opponent calling ranges to calculate the optimal action. The software is capable of producing graphs to show the implications of varying an opponent's hand range. The game view feature contains a summary of the analysis and recommendation. The software includes a quiz mode that enables users to practice making push-or-fold decisions. The quiz mode serves as a type of poker flash card generator by creating random games for users to practice decision making. The quiz feature is customizable with parameters for difficulty level, number of players, table position, and several other considerations. Critical review The software is a substitute for having a poker coach in the sense that it tells users what to do before and after games and reviews their performance to help them understand their mistakes. It is designed to help a user become better at making the right in-game decisions, which should improve the user's ability to compete in the current landscape of players who use software to improve their play, and thus improve the user's profitability. Some of the technical features are considered likely to be off-putting to some users. Pokersoftware.com's review considered this to be the most powerful poker AICM tool. 
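To make the Independent Chip Model mentioned above concrete, the following Python sketch computes ICM equities by giving each player a probability of finishing first in proportion to their stack and recursing over the remaining places. It is an illustration only, not SitNGo Wizard's actual code, and the stack sizes and payout percentages in the example are made-up values.

def icm_equity(stacks, payouts):
    # Return each player's tournament equity under the Independent Chip Model.
    # stacks: chip counts per player; payouts: prize fractions for 1st, 2nd, ...
    def equity(stacks, payouts, idx):
        if not payouts:
            return 0.0
        total = sum(stacks)
        value = 0.0
        for winner in range(len(stacks)):
            p_first = stacks[winner] / total
            if winner == idx:
                value += p_first * payouts[0]          # player idx takes this place
            else:
                rest = stacks[:winner] + stacks[winner + 1:]
                new_idx = idx if idx < winner else idx - 1
                value += p_first * equity(rest, payouts[1:], new_idx)
        return value
    return [equity(list(stacks), list(payouts), i) for i in range(len(stacks))]

# Hypothetical three-player example with a 50/30/20 prize split.
print(icm_equity([5000, 3000, 1000], [0.5, 0.3, 0.2]))

A push-or-fold decision of the kind the quiz mode drills can then be approximated by comparing the ICM equity of folding with the equity of pushing, averaged over the assumed calling range of the opponents.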
The program is subject to some of the pitfalls of the ICM method, but its Future Game Simulation (FGS) feature attempts to compensate for the ICM method's weakness when stacks are short. The key advantage of the program is as an objective instructor of counterintuitive optimal play, which, if learned, gives a user an advantage over untrained opponents. The graphics are not considered impressive, and there are additional cosmetic issues, such as its interface and navigation options, that weigh against the program's functionality. Notes External links Official site Mathematical software Simulation software Poker tools
52902461
https://en.wikipedia.org/wiki/Katie%20Johnson%20%28footballer%29
Katie Johnson (footballer)
Katlyn Alicia Johnson Carreón (born 14 September 1994) is an American-born Mexican footballer who plays as a forward for National Women's Soccer League club San Diego Wave FC and the Mexico women's national team. Early life and education Born and raised in Monrovia, California, a suburb of Los Angeles, Johnson is the daughter of an American father, Dennis Johnson, and a Mexican mother, Esther Carreón. Her sister, Isabelle, also played soccer for USC. Johnson attended Flintridge Sacred Heart Academy in La Cañada, California, and was a high school All-American soccer player, scoring 57 goals during her high school career. USC Trojans, 2012–2016 Johnson played forward for the USC Trojans women's soccer team in the 2012, 2013, 2014, and 2016 seasons. She was injured and did not play in the 2015 season. Over her four seasons she appeared in 83 games, scoring 24 goals with 6 assists. She was named the Most Outstanding Player on Offense in the 2016 College Cup, scoring the only goal in the semi-final and two goals in the final to lead USC to its second national championship in women's soccer. Club career Seattle Reign, 2017 On 12 January 2017, Johnson was selected by Seattle Reign FC as the 16th overall selection in the 2017 NWSL College Draft. She made her debut for the club in a match against the Houston Dash on April 22, 2017, scoring her first goal to help the Reign win 5–1. Mostly coming off the bench as a substitute, she finished the season with four goals and two assists. Sky Blue, 2018 In January 2018, Johnson was traded to Sky Blue FC. She was named Player of the Week for Week 21 of the 2018 NWSL season after scoring 2 goals in Sky Blue's 2–2 draw against the Utah Royals. Chicago Red Stars, 2019–2021 In January 2019, the Chicago Red Stars announced they had acquired Johnson from Sky Blue FC in exchange for the sixth overall pick and their highest second-round pick in the 2020 NWSL College Draft. Johnson made her first appearance for the Red Stars as a substitute for Sam Kerr in the 46th minute of a 2–1 loss to the Portland Thorns in the 2019 Thorns Spring Invitational preseason tournament. San Diego Wave FC, 2021 In December 2021, the San Diego Wave Futbol Club announced it had acquired the rights to the Mexican international Johnson, fellow Southern California native Makenzy Doniak and Kelsey Turnbow in a trade with the Chicago Red Stars. In exchange, the Red Stars received roster protection in the 2022 NWSL Expansion Draft plus allocation money. International career Through birth and descent, Johnson was eligible to play for either the United States or Mexico national teams, ultimately choosing to represent the latter at senior level. She made her debut on 9 December 2015 in a 0–3 loss against Canada at that year's International Women's Football Tournament of Natal. Shortly after, Johnson appeared in two matches and scored one goal for the Mexico national team in the 2016 CONCACAF Women's Olympic Qualifying Championship; Mexico did not qualify for the Olympics. She scored the lone Mexican goal in Mexico's 4–1 friendly loss to the United States on 5 April 2018. Johnson scored three goals at the 2018 Central American and Caribbean Games, helping Mexico win the gold medal. 
International goals Scores and results list Mexico's goal tally first See also List of Mexico women's international footballers References External links Seattle Reign FC player profile USC Trojans player profile 1994 births Living people Citizens of Mexico through descent Mexican women's footballers Women's association football forwards Mexico women's international footballers Mexican people of American descent People from Monrovia, California Sportspeople from Los Angeles County, California Soccer players from California American women's soccer players USC Trojans women's soccer players OL Reign draft picks OL Reign players NJ/NY Gotham FC players Chicago Red Stars players San Diego Wave FC players National Women's Soccer League players American sportspeople of Mexican descent
20332991
https://en.wikipedia.org/wiki/Jeff%20Byers
Jeff Byers
Jeff Byers (born September 7, 1985) is a former American football center who played in the National Football League. He played his college football for the University of Southern California. High school career Byers attended Loveland High School, where he participated in basketball and track and played competitive football. In 2003, he played both offense and defense: that season, as a defensive lineman, he had 203 tackles, 56 tackles for loss, 10 sacks, 14 forced fumbles and 3 fumble recoveries (with a touchdown). On offense, as a center, he recorded 34 pancake blocks in one game and never allowed a sack in his career. Loveland won the Class 4A state championship in 2003. He was named the 2004 Gatorade Player of the Year as the nation's top high school football player. College career Although Byers arrived at USC as a center, the Trojans already had then-sophomore Ryan Kalil, who kept the position for three seasons; as a result, Byers moved to left guard. As a freshman, Byers started 4 games and played in all 13 during the 2004 season, as the Trojans went on to win the National Championship. In the spring before the 2005 season, Byers had hip surgery and missed the entire season under a medical redshirt. He started the 2006 season as a reserve in the opening game against Arkansas, but suffered a back sprain the following week that required surgery, and he missed the rest of the season. Byers recovered in time for the 2007 season, in which he started all 13 games: 12 at left offensive guard and one at center, against Washington State University. He missed most of training camp the following spring after contracting Rocky Mountain spotted fever. He recovered and was selected as a team captain for the 2008 season. After starting all 12 regular season games, Byers was selected to the 2008 All-Pac-10 Second Team by conference coaches. Due to the health issues that caused him to miss the 2005 and 2006 seasons, Byers petitioned the NCAA for a "clock-extension waiver", and in December 2008 the NCAA granted him an additional season of eligibility. Byers received his bachelor's degree in business administration in the summer of 2007 and went on to study in the Master of Business Administration program at the USC Marshall School of Business. During the 2008 season, USC head coach Pete Carroll had Byers lecture the entire team on the subprime mortgage crisis. He made the 2007 Pac-10 All-Academic second team. In 2009, Byers was listed at No. 1 on Rivals.com's preseason interior lineman power ranking. With starting center Kristofer O'Dowd injured, Byers was moved to starting center for the Trojans' 2009 season opener against San Jose State. Professional career 2010 NFL Draft Undrafted in the 2010 NFL Draft, Byers was signed by his former college coach, Pete Carroll, to the Seattle Seahawks on April 30, 2010, but was cut before the start of the season. Byers was then added to the Denver Broncos practice squad on September 6, 2010. He was activated by the Broncos in December 2010, but released the following summer. Following his release by the Broncos, Byers was added to the Carolina Panthers' practice squad on September 5, 2011, and was promoted to the Panthers' active roster on December 17, 2011. Byers announced his retirement on March 11, 2014. 
References External links USC Athletic Department Biography: Jeff Byers 1985 births Living people American football offensive linemen Carolina Panthers players Denver Broncos players People from Loveland, Colorado Players of American football from Colorado Seattle Seahawks players USC Trojans football players
19496019
https://en.wikipedia.org/wiki/Computer%20shogi
Computer shogi
Computer shogi is a field of artificial intelligence concerned with the creation of computer programs which can play shogi. The research and development of shogi software has been carried out mainly by freelance programmers, university research groups and private companies. By 2017, the strongest programs were outperforming the strongest human players. Game complexity Shogi has the distinctive feature of reusing captured pieces, and therefore has a higher branching factor than other chess variants. The computer has more positions to examine because each piece in hand can be dropped on many squares. This gives shogi the highest number of legal positions and the highest number of possible games of all the popular chess variants. These higher numbers mean it is harder for computers to reach the highest levels of play. The number of legal positions and the number of possible games are two measures of shogi's game complexity. The complexity of Go can be found at Go and mathematics; more information on the complexity of chess can be found at Shannon number. Components The primary components of a computer shogi program are the opening book, the search algorithm and the endgame. The "opening book" helps put the program in a good position and saves time. Shogi professionals, however, do not always follow an opening sequence as in chess, but make different moves to create a good formation of pieces. The "search algorithm" looks ahead more deeply through sequences of moves and allows the program to better evaluate a move. The search is harder in shogi than in chess because of the larger number of possible moves. A program will stop searching when it reaches a stable position; the problem is that many positions are unstable because of the drop move (a minimal sketch of this idea appears below). Finally, the "endgame" starts when the king is attacked and ends when the game is won. In chess endgames there are fewer pieces, which allows perfect play from endgame databases; however, because pieces can be dropped back into play in shogi, there are no endgame databases. A tsumeshogi solver is used to quickly find mating moves. Computers versus humans In the 1980s, due to the immaturity of the technology in such fields as programming, CPUs and memory, computer shogi programs took a long time to think and often made moves for which there was no apparent justification. These programs played at the level of a kyu-ranked amateur. In the first decade of the 21st century, computer shogi took large steps forward in software and hardware technology. In 2007, top shogi player Yoshiharu Habu estimated the strength of the 2006 world computer shogi champion Bonanza. Writing in the evening edition of the Nihon Keizai Shimbun on 26 March 2007 about the match between Bonanza and then-Ryūō champion Akira Watanabe, Habu rated Bonanza's play at the level of a 2-dan shogi apprentice (shōreikai). Computers are best suited to brute-force calculation, and far outperform humans at the task of finding ways of checkmating from a given position, which involves many fewer possibilities. In games with time limits of 10 seconds from the first move, computers are becoming a tough challenge even for professional shogi players. The past steady progress of computer shogi is a guide for the future: in 1996 Habu predicted a computer would beat him in 2015. Akira Watanabe gave an interview to the newspaper Asahi Shimbun in 2012 in which he estimated that the computer played at the 4-dan professional level, and said it sometimes found moves for him. 
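The sketch referred to in the Components section above illustrates, in Python, the basic idea of a depth-limited search with a quiescence extension so that the search does not cut off in an unstable position. It is illustrative only and is not taken from any particular engine; the move-generation and evaluation hooks are stand-ins that a real shogi engine would supply.

def search(pos, depth, moves, tactical_moves, apply_move, evaluate):
    # Depth-limited negamax: when the nominal depth runs out, fall through to
    # a quiescence search instead of trusting the static evaluation directly.
    if depth == 0:
        return quiescence(pos, tactical_moves, apply_move, evaluate)
    legal = moves(pos)
    if not legal:
        return evaluate(pos)
    best = float("-inf")
    for m in legal:
        score = -search(apply_move(pos, m), depth - 1,
                        moves, tactical_moves, apply_move, evaluate)
        best = max(best, score)
    return best

def quiescence(pos, tactical_moves, apply_move, evaluate):
    # "Stand pat" on the static evaluation, but keep extending unstable lines
    # (captures, checks and dangerous drops) so the search horizon does not
    # cut off a position in the middle of an exchange.
    best = evaluate(pos)
    for m in tactical_moves(pos):
        score = -quiescence(apply_move(pos, m), tactical_moves, apply_move, evaluate)
        best = max(best, score)
    return best

# Trivial stand-in hooks so the sketch runs: no legal moves, constant evaluation.
print(search(None, 3, lambda p: [], lambda p: [], lambda p, m: p, lambda p: 0))

Real engines add alpha-beta pruning, transposition tables and far richer evaluation on top of this skeleton, but the drop rule is why the quiescence stage matters more in shogi than in chess.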
On 23 October 2005, at the 3rd International Shogi Forum, the Japan Shogi Association permitted Toshiyuki Moriuchi, the 2005 Meijin, to play the computer shogi program YSS. Moriuchi won the game, playing 30 seconds per move and giving a bishop handicap. In 2012, a retired professional publicly lost a match to a computer for the first time, and in 2013 active professionals did so as well. Bonanza versus Watanabe (2007) The Japan Shogi Association (JSA) gave reigning Ryuo champion Watanabe permission to compete against the reigning World Computer Shogi Champion, Bonanza, on 21 March 2007. Daiwa Securities sponsored the match. Kunihito Hoki wrote Bonanza. The computer was an 8-core 2.66 GHz Intel Xeon with 8 gigabytes of memory and a 160-gigabyte hard drive. The game was played with 2 hours each and 1 minute byo-yomi per move after that. Those conditions favored Watanabe, because longer time limits mean fewer mistakes from time pressure, and longer playing time also lets human players make long-term plans beyond the computer's calculating horizon. The two players were not at the same playing level: Watanabe was the 2006 Ryuo champion, while Bonanza was at the level of a 2-dan shoreikai player. Bonanza was a little stronger than before, due to program improvements and a faster computer, while Watanabe had prepared for a weaker Bonanza, as he had studied old Bonanza game records. Bonanza moved first and played a Fourth File Rook with Bear-in-the-hole castle, as Watanabe expected. Watanabe thought some of Bonanza's moves were inferior; however, he analyzed these moves deeply, thinking that perhaps the computer saw something he did not. Watanabe commented after the game that he could have lost if Bonanza had played defensive moves before entering the endgame, but the computer chose to attack immediately instead of taking its time (and using its impressive endgame strength), which cost it the match. Bonanza resigned after move 112. After Bonanza's loss, Watanabe commented on computers in his blog: "I thought they still had quite a way to go, but now we have to recognize that they've reached the point where they are getting to be a match for professionals." Watanabe further clarified his position on computers playing shogi in the Yomiuri Shimbun on 27 June 2008, when he said, "I think I'll be able to defeat shogi software for the next 10 years". Another indication that Bonanza was far below the level of a professional such as Watanabe came two months after the match, at the May 2007 World Computer Shogi Championship: Bonanza lost to the 2007 World Computer Shogi Champion, YSS, and YSS then lost to the amateur Yukio Kato in a 15-minute game. Annual CSA tournament exhibition games (2003–2009) The winners of CSA tournaments played exhibition games with strong players. These exhibition games started in 2003, and in each succeeding year the human competition was stronger, to match the stronger programs. Yukio Kato was the Asahi Amateur Meijin champion; Toru Shimizugami was the Amateur Meijin champion. Eiki Ito, the creator of Bonkras, said in 2011 that top shogi programs like Bonkras were at the level of lower- to middle-ranked professional players. Akara versus Shimizu (2010) The computer program Akara defeated the women's Osho champion Ichiyo Shimizu. Akara contained four computer engines (Gekisashi, GPS Shogi, Bonanza and YSS) and ran on a network of 169 computers. The four engines voted on the best move, and Akara selected the move with the most votes; in the event of a tie, it selected Gekisashi's move. 
Researchers at the University of Tokyo and the University of Electro-Communications developed Akara. Shimizu moved first and resigned after 86 moves, after 6 hours and 3 minutes. Shimizu said she was trying to play her best, as if she were facing a human player. She played at the University of Tokyo on 11 October 2010. The allotted thinking time per player was 3 hours, with 60 seconds byoyomi, and 750 fans attended the event. This was the third time since 2005 that the Japan Shogi Association had granted permission for a professional to play a computer, and the first computer victory against a female professional. Akara aggressively pursued Shimizu from the start of the game, playing a ranging rook strategy and offering an exchange of bishops. Shimizu made a questionable move partway through the game, and Akara went on to win. Ryuo champion Akira Watanabe criticized Shimizu's game: on 19 November 2010, the Daily Yomiuri quoted Watanabe as saying, "Ms. Shimizu had plenty of chances to win". Computers Bonanza and Akara beat amateurs Kosaku and Shinoda (2011) On 24 July 2011, there was a two-game amateur versus computer match in which two computer shogi programs beat a team of two amateurs. One amateur, Mr. Kosaku, was a 3-dan shoreikai player; the other, Mr. Shinoda, was the 1999 Amateur Ryuo. The allotted time for the amateurs was a main time of 1 hour and then 3 minutes per move; the allotted time for the computers was a main time of 25 minutes and then 10 seconds per move. Bonkras versus Yonenaga (2011–2012) On 21 December 2011, the computer program Bonkras decisively defeated the retired 68-year-old Kunio Yonenaga, the 1993 Meijin. They played 85 moves in 1 hour, 3 minutes and 39 seconds on Shogi Club 24. Main time was 15 minutes, with an additional 60 seconds per move. Yonenaga was gote (white) and played 2. K-6b, a move intended to confuse the computer by going outside Bonkras's joseki (opening book). On 14 January 2012, Bonkras again defeated Yonenaga. This was the first Denou-sen match. The game had 113 moves; time allowed was 3 hours and then 1 minute per move. Bonkras moved first and used a ranging rook opening. Yonenaga made the same second move, K-6b, as in the previous game he lost. Bonkras ran on a Fujitsu Primergy BX400 with 6 blade servers, searching 18,000,000 moves per second. Yonenaga used 2 hours 33 minutes; Bonkras used 1 hour 55 minutes. Bonkras evaluated its January 2012 game with Yonenaga. Denou-sen (2013) Denou-sen is a human-versus-machine battle, and this was the second Denou-sen match. Niconico sponsored five games in which five professional shogi players played five computers, the winners of the previous World Computer Shogi Championship. Each player started with 4 hours; after using the 4 hours, a player had to complete each move in 60 seconds. Niconico broadcast the games live with commentary. Miura versus GPS Shogi Hiroyuki Miura said before his game that he would play with "all his heart and soul". Miura decided to use trusted opening theory instead of an anti-computer strategy. The computer played book moves and the two sides castled symmetrically to defend their kings. The computer attacked quickly and Miura counterattacked with a drop move. More than 8 hours later, Miura resigned. After the game, Miura said that he should not have prepared for the game the way he did, and that he would have prepared with a genuine sense of urgency had he known how strong the computer was. He expressed disappointment and said he had yet to figure out where he went wrong. 
The evaluation of the game by GPS is on the GPS Shogi web site. Funae versus Tsutsukana (revenge match) On 31 December 2013, Funae and Tsutsukana played a second game. Tsutsukana was the same version that beat Funae on 6 April 2013, and the computer had a single six-core Intel processor. Funae won. Denou-sen 3 (2014) In 2013, the Japan Shogi Association announced that five professional shogi players would play five computers from 15 March to 12 April 2014. On 7 October 2013, the Japan Shogi Association picked the five players. The professional shogi players played the winners of a preliminary computer tournament, which was held 2–4 November 2013. Computer restrictions Each shogi program ran on a single Intel processor with 6 cores; no multiple-processor systems were allowed. No changes were allowed to the shogi programs after the preliminary computer tournament, and the professional shogi players trained with the programs after that tournament. Each player started with 5 hours at 10 am; after the 5 hours, the player had to complete each move in 1 minute. There was a one-hour lunch break at 12:00 and a half-hour dinner break at 5 pm. Niconico broadcast the games live with commentary. The Japanese auto parts maker Denso developed a robotic arm to move the pieces for the computer. Yashiki versus Ponanza Ōshō and Kiō champion Akira Watanabe wrote in his blog that "a human cannot think of some of Ponanza's moves such as 60.L*16 and 88.S*79. I am not sure they were the best moves or not right now, but I feel like I'm watching something incredible." Kisei, Ōi and Ōza champion Yoshiharu Habu told the Asahi Shimbun newspaper, "I felt the machines were extraordinarily strong when I saw their games this time." Denou-sen 3.1: Sugai versus Shueso (revenge match) On Saturday 19 July 2014, Tatsuya Sugai once again got the chance to play against Shueso in what was billed as the "Shogi Denou-sen Revenge Match". Sugai had been beaten by Shueso four months earlier in game one of Denou-sen 3, so this was seen as his chance to gain revenge for that loss. The game was sponsored by both the Japan Shogi Association and the telecommunications and media company Dwango and was held at the Tokyo Shogi Kaikan (the Japan Shogi Association's head office). Although the playing site was closed to the public, the game was streamed live via Niconico Live, with commentary provided by various shogi professionals and women's professionals. Shueso's moves were made by Denso's robotic arm. The initial time control for each player was eight hours, followed by a 1-minute byoyomi. In addition, four 1-hour breaks were scheduled throughout the playing session to allow both sides time to eat and rest. The game lasted through the night and into the next day, finally finishing almost 20 hours after it started, when Sugai resigned after Shueso's 144th move. Programmer tools Shogidokoro Shogidokoro (将棋所) is a Windows graphical user interface (GUI) that calls a program to play shogi and displays the moves on a board. Shogidokoro was created in 2007. It uses the Universal Shogi Interface (USI), an open communication protocol that shogi programs use to communicate with a user interface. USI was designed by the Norwegian computer chess programmer Tord Romstad in 2007, who based it on the Universal Chess Interface (UCI), designed by the computer chess programmer Stefan Meyer-Kahlen in 2000. 
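As an illustration of how the USI protocol described above works in practice, the following Python sketch shows the skeleton of an engine's side of the start-up handshake with a GUI such as Shogidokoro. The command names follow the published USI specification, but the engine itself is only a stub: the engine name is invented, and it replies with a resignation instead of running a real search.

import sys

def main():
    # A GUI launches the engine as a subprocess and exchanges plain-text
    # commands over standard input and output.
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "usi":
            print("id name SketchEngine")      # hypothetical engine name
            print("id author Example")
            print("usiok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            pass                               # a real engine would parse the position here
        elif cmd.startswith("go"):
            print("bestmove resign")           # placeholder reply instead of a real search
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    main()

Because the same small command set is spoken by every USI-compatible engine, a GUI such as Shogidokoro can drive any of the engines listed below without engine-specific code.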
Shogidokoro can automatically run a tournament between two programs. This helps programmers to write shogi programs faster, because they can skip writing the user interface, and it is also useful for testing changes to a program. Shogidokoro can be used to play shogi by adding a shogi engine to it. Some engines that will run under Shogidokoro are the following: Apery aperypaq (Apery SDT5 + Qhapaq SDT5) BlunderXX Bonanza elmo eloqhappa (elmo WCSC27 + Qhapaq WCSC27) Gikou (技巧) GPS Shogi Laramie Lesserkai Lightning Ponanza Quartet Qhapaq relmo (elmo WCSC27 + rezero8), rezero Silent Majority Spear SSP Tanuki (ナイツ・オブ・タヌキ WCSC27, 平成将棋合戦ぽんぽこ SDT5) TJshogi Ukamuse (浮かむ瀬 – the 2016 release of Apery) YaneuraOu (やねうら王) Yomita (読み太) The interface can also use tsumeshogi solver-only engines like SeoTsume (脊尾詰). The software's menus have both Japanese and English language options available. XBoard/WinBoard XBoard/WinBoard is another GUI that supports shogi and other chess variants, including western chess and xiangqi. Shogi support was added to WinBoard in 2007 by H.G. Muller. WinBoard uses its own protocol (the Chess Engine Communication Protocol) to communicate with engines, but can connect to USI engines through the UCI2WB adapter. Engines that natively support the WinBoard protocol are Shokidoki, TJshogi, GNU Shogi and Bonanza. Unlike Shogidokoro, WinBoard is free/libre and open source, and is also available for the X Window System as XBoard (for Linux and Mac systems). A number of shogi variants, such as Chu Shogi and Dai Shogi, are playable against AI using a forked version of WinBoard. Included engines are Shokidoki, which can play the smaller variants with drops (e.g. minishogi), and HaChu, a large-variant engine designed for playing Chu Shogi which has improved in strength over time. Shogi Browser Q 将棋ぶらうざQ (Shogi Browser Q) is a free cross-platform (Java) GUI that can run USI engines and compete on Floodgate. Since v3.7 both Japanese and English languages are available. BCMShogi BCMShogi is an English-language graphical user interface for the USI protocol and the WinBoard shogi protocol. It is no longer developed and is currently unavailable from the author's website. Floodgate Floodgate is a computer shogi server where computers compete and receive ratings. Programs running under Shogidokoro can connect to Floodgate. The GPS team created Floodgate, which started operating continuously in 2008. The most active players have played 4,000 games; from 2008 to 2010, 167 players played 28,000 games on Floodgate. Humans are welcome to play on Floodgate. The time limit is 15 minutes per player, sudden death. From 2011 to 2018, the rating of Floodgate's number-one program increased by 1184 points, an average of 169 points per year. World Computer Shogi Championship The annual computer-versus-computer world shogi championship is organized by the Computer Shogi Association (CSA) of Japan. The computers play automated games through a server, and each program has 25 minutes to complete a game. The first championship was held in 1990 with six programs; by 2001, it had grown to 55 programs. The championship is broadcast on the Internet. At the 19th annual CSA tournament, four programs (GPS Shogi, Otsuki Shogi, Monju and KCC Shogi) that had never won a CSA tournament defeated three of the previous year's strongest programs (Bonanza, Gekisashi and YSS). The top three finishers in the 2010 CSA tournament were Gekisashi, Shueso and GPS Shogi. In 2011, Bonkras won the CSA tournament with five wins out of seven games. 
Bonkras ran on a computer with three processors containing 16 cores and six gigabytes of memory. Bonanza took second place on a computer with 17 processors containing 132 cores and 300 gigabytes of memory. Shueso took third place, the 2010 CSA winner Gekisashi fourth, and Ponanza fifth. GPS Shogi took sixth place on a computer with 263 processors containing 832 cores and 1486 gigabytes of memory. In 2012, GPS Shogi searched 280,000,000 moves per second with an average search depth of 22.2 moves ahead. Hiroshi Yamashita, the author of YSS, maintains a list of all shogi programs that have played in the World Computer Shogi Championship, by year and finishing rank. Video game systems Some commercial game software which plays shogi includes Habu Meijin no Omoshiro Shōgi for the Super Famicom, Clubhouse Games for the Nintendo DS and Shotest Shogi for the Xbox. Restrictions On 18 September 2005, a Japan Shogi Association professional 5-dan played shogi against a computer. The game was played at the 29th Hokkoku Osho-Cup Shogi Tournament in Komatsu, Japan. The Matsue National College of Technology developed the computer program Tacos. Tacos played first and chose a static rook line in the opening. The professional, Hashimoto, followed the opening line while trading bishops with Tacos. Tacos developed well and held some advantage through the opening and middle game, even up to move 80, and many amateur players expected Tacos to win. However, Hashimoto defended, Tacos played strange moves, and Tacos lost. On 14 October 2005, the Japan Shogi Association banned professional shogi players from competing against a computer. The Japan Shogi Association said the rule was to preserve the dignity of its professionals, and to make the most of computer shogi as a potential business opportunity. The ban prevented the rating of computers relative to professional players. From 2008 to 2012, the Japan Shogi Association (with Kunio Yonenaga as president) did not permit any games between a professional and a computer. Milestones 2005: at the Amateur Ryuo tournament, the program Gekisashi defeated Eiji Ogawa in a 40-minute game in the first knockout round. 2005: the program Gekisashi defeated amateur 6-dan Masato Shinoda in a 40-minute exhibition game. 2007: the highest rating for a computer on Shogi Club 24 was 2744, for YSS. 2008: May, the computer program Tanase Shogi beat Asahi Amateur Meijin title holder Yukio Kato; 75 moves were played in a 15-minute exhibition game. 2008: May, the computer program Gekisashi beat Amateur Meijin Toru Shimizugami; 100 moves were played in a 15-minute exhibition game. 2008: November, Gekisashi beat Amateur Meijin Shimizugami in a 1-hour game with 1-minute byoyomi. 2010: October, the first time a computer beat a shogi champion; Akara beat the women's Osho champion Shimizu in 6 hours and 3 minutes. 2011: May, the highest rated player on Shogi Club 24 was the computer program Ponanza, rated 3211. 2011: December, the highest rated player on Shogi Club 24 was the computer program Bonkras, rated 3364 after 2116 games. 2012: January, Bonkras defeated the 1993 Meijin Yonenaga; they played 113 moves with a main time of 3 hours and then 1 minute per move. 2013: 20 April, GPS Shogi defeated Hiroyuki Miura, ranked 15th; the game was 102 moves with a main time of 4 hours then 1 minute per move. 2013: 12 May, the highest rated player on Shogi Club 24 was the computer program Ponanza, rated 3453. 2014: 12 April, Ponanza defeated Yashiki Nobuyuki, ranked 12th; the game was 130 moves with a main time of 5 hours then 1 minute per move. 
2016: 10 April, Ponanza defeated Takayuki Yamasaki, 8-dan; the game was 85 moves, and Yamasaki used 7 hours 9 minutes. 2017: 20 May, Ponanza defeated Meijin Amahiko Satō in 2 games. 2017: Google DeepMind's AlphaZero convincingly defeated the 2017 World Computer Shogi Champion program elmo. See also List of shogi software Shogi variant Computer chess Chess engine Chess opening book (computers) References External links Computer versus Human Shogi Events in Japanese コンピュータ将棋 まとめサイト: How to start computer Shogi (Japanese Chess) Current ratings for development versions of shogi engines 将棋フリーソフト: Installation instruction shogi engine (v.2019 May) Instructions to set up and play with shogi engines How to install the YaneuraOu engine with third party evaluation files/opening books and the Gikou2 engine Nederlandse Shogi Bond: How to analyze your games using a shogi engine Spear, a shogi engine Game artificial intelligence
18785439
https://en.wikipedia.org/wiki/Soundscape%20Digital%20Technology
Soundscape Digital Technology
Soundscape Digital Technology developed Windows-based digital audio workstations for multi-channel studio recording, editing and mastering. Soundscape SSHDR1 Soundscape was formed in the UK in early 1992, when Chris Wright, the head designer and technical manager for Cheetah Marketing Ltd., the Belgian designer Johan Bonnaerens, Cheetah, and Johan's employer Sydec NV agreed a plan to jointly design, manufacture and market a modular 4-track hard-disk-based digital audio workstation (DAW). The SSHDR1 DAW became, if not the first, certainly one of the first products of this kind available and was showcased as an 8-track system at the NAMM and Musikmesse trade shows in 1993. Cheetah's parent company, Cannon Street Investments, was struggling during the UK recession and closed the company in March 1993, splitting off the computer peripherals division (which principally manufactured joysticks, including Bart Simpson, Batman and Alien licensed designs) to another company in the group. Chris Wright, along with sales manager Nick Owen, bought the assets of the Cheetah music products division, forming Soundscape Digital Technology Ltd.; they immediately took on two of the ex-Cheetah employees (Marcus Case as production manager and Kirstie Davies as operations manager) and started to market and manufacture the Soundscape SSHDR1, shipping the first batch of 100 units in August 1993. Like Chris, who had started designing music products (synthesizers, effects, samplers, keyboards, drum machines) in his spare time while working as a senior electronics designer in telecoms, Johan was an avid rock guitar player and music fan, and he had started the design at home. A long-experienced audio designer himself, Chris contributed some of the key elements of the DSP (digital signal processing) code, such as how to efficiently implement real-time fade curves, digital compressors and chase-locking to timecode, and his experience of EMC shielding and testing techniques enabled rapid EMC approval to be gained. He later concentrated on developing the specifications for the Soundscape products as they moved into the demanding high-end markets in broadcast and film sound. Johan concentrated mainly on the Windows software, and another engineer took over the DSP code. The system rapidly gained market success, shipping over 700 systems in the first year; it garnered excellent reviews throughout the music and recording press in Europe, Australia and the US, and featured on the front covers of most major magazines. The system was renowned for its bulletproof stability, something that was the holy grail of computer-based recording on PCs. This was due to its split design, with separate Motorola 56000 DSP-powered hardware controlled by Windows editing software. The hardware took the strain, placing a very light demand on the PC, so that MIDI sequencers such as eMagic Logic, Steinberg Cubase, Cakewalk and others could be used simultaneously. The boast was that even if the PC crashed, the system would continue recording, and this was demonstrated regularly. The result was that while most other computer-based recording and editing systems were studio-based, Soundscape could also be used for live recording and could be relied upon for recording 100-piece orchestras with no risk. Integration of the SSHDR1 hardware within eMagic Logic Audio and Cakewalk was developed by both companies using the Soundscape API. 
Design elements The modular nature and expandability to 16 units connected to one PC was also somewhat unusual. Initially launched as a 4-track, 16-bit, 48 kHz system using inexpensive IDE drives (the first units shipped with 2 × 120 MB drives), advances in the efficiency of the DSP code extended this first to 8 tracks with 24-bit recording and then, with the addition of a second DSP board, to 12 tracks, also adding the world's first configurable DSP-based digital mixer. Huge systems could be configured: just 8 units coupled together formed a 96-track system with sample-accurate synchronisation that could be controlled from one editing screen. The unit also had removable drive trays fitted, as it had become cost-efficient to simply put drives on the shelf, since they cost far less per hour of audio than master tape. Soundscape also produced a range of modular audio interface units that connected to the Soundscape SSHDR1 unit via TDIF. Soundscape took a bold decision to offer software updates free of charge to its users, a decision that generated user loyalty of a level previously unknown for computer-based audio products. Added to that, the quality of the product releases remained incredibly high and bugs were virtually non-existent. Soundscape R.Ed, Mixtreme and Mixpander cards In 1997 the Soundscape R.Ed system was released, which offered 32 tracks per unit at up to 24-bit/96 kHz and had two removable and two fixed drive bays. IDE disks, which at first had been ridiculed by many as non-professional, had nevertheless come to dominate the PC market and were approximately half the price of SCSI. The system could now contain a massive amount of inexpensive storage and was as reliable as ever. The limitation of the system compared to the market leader, Pro Tools, had always been the amount of DSP power available for mixing and effects, but in 2000 this was removed with the launch of the Mixpander card, which added nine of the latest, more powerful Motorola 563xx DSPs to the system, connected via a fast bus, so that finally a vast amount of real-time DSP processing power was available. From 1993 to 2000 approximately 10,000 Soundscape systems were shipped, and they were used in many professional applications as well as in home studios. Several successful Hollywood-produced TV shows, such as Mad About You and Frasier, were almost completely edited using Soundscape; systems were in use in large numbers throughout the CBC in Canada and at other broadcasters in many countries, and large multitrack systems were installed in recording studios. Soundscape had introduced an entire recording generation to digital recording and editing, many of whom had previously never even used a computer. The system was very simple to use but contained powerful editing tools and real-time plug-in effects, and it had wide-ranging support throughout the industry, with Soundscape-format plug-ins developed by many top companies such as TC Electronics, Dolby, Drawmer, CEDAR Audio Ltd and Synchro Arts. There were also some 30 to 40 companies developing or including Soundscape hardware in their products, from radio automation companies such as RCS, D.A.V.I.D and Dalet Digital Media Systems to video NLE manufacturers such as DPS and D-Vision (later Discreet) and many others who used Soundscape Mixtreme cards and Soundscape iBox audio interfaces. 
The Mixtreme card, first shipped in 1998, was Soundscape's first PCI card and utilised the DSP mixer developed for the Soundscape SSHDR1, so that along with 16 channels of I/O it could also support the full range of Soundscape-format real-time DSP effects plug-ins available. This was a unique card and the first of its type. Over the next few years many thousands of cards were shipped, and it gained wide recognition as a very flexible and future-proof audio solution. Demise In 1997 Sydec had started to run into hard times: following a management buyout from its parent company Niko (a Belgian manufacturer of electrical products such as light fittings), the managing director had become ill, the company had a dispute with its former owners, and the result was that 50% of the expected income disappeared virtually overnight. The Soundscape side of Sydec's business, which by now had risen to approximately 10 people, half of the company, was still doing well, but without income from the other half Sydec badly needed extra revenue. Chris Wright started to develop ideas to port the DSP core of the Soundscape R.Ed as a stand-alone recorder engine and began to discuss this with his contacts at Tascam in Japan. A plan was formed to provide a 24-track recorder plug-in board for Tascam digital mixers, but in the end Tascam did not sign the contract, as it had received a better offer from one of its existing third-party developers (the product never appeared). Chris Wright then presented the same idea to Mackie (which at the time was a $300 million NASDAQ-listed corporation) and an agreement was made to produce a stand-alone 24-track recorder, which eventually became the Mackie SDR2496. Mackie held off on signing the contract, as its investigations into Sydec's health had shown that the company was vulnerable, and eventually made an offer to buy the shares of Sydec, which was accepted. Mackie announced to the world's music industry that it had bought Soundscape at the NAMM show in 2001, which was not correct, and in doing so infringed copyright in images and logos owned by Soundscape Digital Technology Ltd. Soundscape's distribution network and customers became extremely nervous and business stalled, just as the long-awaited Mixpander was being launched. Soundscape disputed Mackie's use of its intellectual property and a legal action ensued, ending in the High Court in London. An agreement was struck in May 2003 whereby Soundscape could resume its business without interference from Mackie, but following five months with no sales, a large legal bill and the slow summer months ahead, Soundscape, which had been in a healthy position at the end of 2000, found itself in difficulties and decided to close its doors in September 2001. Chris Wright joined Teac and Nick Owen started a video dealership based in Cardiff, Wales. Sales halted completely, as the Soundscape distribution network suddenly had no access to the product, and the deep knowledge and energy of the Soundscape team that had driven the product to success had disappeared. Far from being the saviour, Mackie was unable to handle the product, and for a year there was very little activity and almost no sales. In 2002 the Soundscape R.Ed was rebadged as the Mackie Soundscape 32 and re-launched, but the product was by then based on a design conceived over 10 years earlier; the hardware design for the Soundscape R.Ed had originally been started in 1995. 
Times had moved on, and more powerful or native-processing products (using the CPU of the PC), such as Nuendo, Pyramix and Pro Tools LE, had become available, and these were much less expensive. Since 2001, Pyramix in particular had begun to fill the void vacated by Soundscape. Mackie was also haemorrhaging cash in many areas and in 2003 suddenly closed Sydec's doors. Having picked themselves up off the floor, Sydec's managing director, together with Johan Bonnaerens and three others, reformed as Sydec Audio Engineering and made a deal with Mackie to sell off the stock of Mackie-built units. The loyal Soundscape user base was relieved, as it had become very disillusioned with Mackie, but it was difficult to make headway with such a small team. The company continued without great success until 2006, when it was purchased by Solid State Logic. The company continues to develop and release new software. The hardware department now focuses more on audio acquisition and format converters (such as its iBox range). As of 2010, the Soundscape 32 system and iBox range were still available. One problem is that IDE disk drives have largely been replaced by SATA, and the Soundscape 32 units cannot support them. The current focus is to utilise hard disks connected to the PC together with a Mixpander card, providing a way for the software to operate without relying on the external units. The latest range has focused on MADI connections, but this is a relatively niche area. History 1992 Cheetah Marketing agrees deal with Sydec NV 1993 Soundscape Digital Technology Ltd. formed after Cheetah's closure 1993 Soundscape SSHDR1 launched 1994 Over 700 Soundscape SSHDR1 systems sold 1995 Soundscape iBox range of audio interfaces launched 1997 Soundscape R.Ed launched 1998 Soundscape Mixtreme PCI card launched 2000 Soundscape Mixpander DSP card launched 2001 Sydec NV bought by Mackie 2001 Soundscape Digital Technology in legal dispute with Mackie; the company closes its doors in September 2003 Mackie closes Sydec in April 2003 Sydec reopens in August as Sydec Audio Engineering NV 2006 Sydec bought by SSL 2012 Most of Sydec's developers leave the company External links Sydec Audio Engineering NV Manufacturers of professional audio equipment Digital audio workstation software Audio equipment manufacturers of the United Kingdom
19512683
https://en.wikipedia.org/wiki/Walkers%20%28snack%20foods%29
Walkers (snack foods)
Walkers is a British snack food manufacturer mainly operating in the UK and Ireland. The company is best known for manufacturing potato crisps and other (non-potato-based) snack foods. In 2013, it held 56% of the British crisp market. Walkers was founded in 1948 in Leicester, England, by Henry Walker. In 1989, Walkers was acquired by Lay's owner, Frito-Lay, a division of PepsiCo. The Walkers factory in Leicester produces over 11 million bags of crisps per day, using about 800 tons of potatoes. According to the BBC television programme Inside the Factory, production of a bag of crisps takes approximately 35 minutes from the moment the raw potatoes are delivered to the factory to the point at which the finished product leaves the dispatch bay for delivery to customers. The company produces a variety of flavours for its crisps. The three main varieties are Cheese and Onion (introduced in 1954), Salt and Vinegar (introduced in 1967) and Ready Salted. Other varieties include Worcester Sauce, Roast Chicken, Prawn Cocktail, Smoky Bacon, Tomato Ketchup, and Pickled Onion. The Leicester-born former England international footballer Gary Lineker has been the face of the brand since 1995, featuring in most of its popular commercials and successful advertising campaigns. For the 2011 Comic Relief, four celebrities each represented one of four new flavours. The Walkers brand (under PepsiCo) sponsors the UEFA Champions League for the UK and Ireland markets. In 2019, Walkers reunited with the Spice Girls, with the 1990s girl band featuring in a campaign. Since 2008, Walkers has run its "Do Us a Flavour" campaign, challenging the British public to think up unique flavours for its crisps. Six flavours were chosen from among the entries and released as special editions. Consumers could vote on their favourite, and the winner would become a permanent flavour. In 2018, Walkers launched six new flavours to celebrate the brand's seventieth birthday, with each flavour representing a different decade. History In the 1880s, Henry Walker moved from Mansfield to Leicester (43 miles south) to take over an established butcher's shop in the high street. Meat rationing in the UK after World War II saw the factory output drop dramatically, and so in 1948 the company started looking at alternative products. Potato crisps were becoming increasingly popular with the public; this led managing director R.E. Gerrard to shift the company focus and begin hand-slicing and frying potatoes. Prior to the 1950s, crisps were sold without flavour: Smith's of London sold plain potato crisps which came with a small blue sachet of salt that could be sprinkled over them. The first crisps manufactured by Walkers in 1948 were sprinkled with salt and sold for threepence a bag. After Archer Martin and Richard Synge (while working in Leeds) received a Nobel Prize for the invention of partition chromatography in 1952, food scientists began to develop flavours via a gas chromatograph, a device that allowed scientists to understand the chemical compounds behind complex flavours such as cheese. In 1954, the first flavoured crisps were invented by Joe "Spud" Murphy (owner of the Irish company Tayto), who developed a technique to add cheese and onion seasoning during production. Later that year, Walkers introduced Cheese and Onion (inspired by the Ploughman's lunch), and Salt and Vinegar was launched in 1967 (inspired by the nation's love of fish and chips). 
Prawn Cocktail flavour was introduced in the 1970s (inspired by the popular 1970s starter), as was Roast Chicken (inspired by the nation's roast dinner). In 1989, the company was acquired by PepsiCo, which placed operations under its Frito-Lay unit. The Walkers logo, featuring a red ribbon around a yellow sun, is noticeably similar to Lay's. It derives from the Walkers logo used in 1990. The company is still a significant presence in Leicester. Gary Lineker, the Leicester-born former footballer, is now the face of the company. In 2000, Lineker's Walkers commercials were ranked ninth in Channel 4's UK-wide poll of the "100 Greatest Adverts". The official website states that an estimated "11 million people will eat a Walkers product every day". The company employs over 4,000 people in 15 locations around the UK. In June 1999, PepsiCo transferred ownership of its Walkers brands out of Britain and into a Swiss subsidiary, Frito-Lay Trading GmbH. Subsequently, according to The Guardian, the UK tax authorities managed to claw back less than a third of what they might have received had an unchanged structure continued producing the same sort of level of UK profits and tax as Walkers Snack Foods had in 1998. In September 2001, Walkers ran a "Moneybags" promotion in which £20, £10 and £5 notes were placed in special winning bags. This was very popular; however, two workers at a crisp factory were sacked after stealing cash prizes from bags on the production line. In February 2006, Walkers changed its brand label and typeface. It also announced it would reduce the saturated fat in its crisps by 70%. It started frying its crisps in "SunSeed" oil, claiming the oil is higher in monounsaturated fat content than the standard sunflower oil which it had used previously, and established its own sunflower farms in Ukraine and Spain to be able to produce sufficient quantities of the oil. Walkers updated its packaging style in June 2007, moving to a brand identity reminiscent of the logo used from 1998 to 2006. Many of Walkers' brands were formerly branded under the Smiths Crisps name. This comes from the time when Walkers, Smiths and Tudor Crisps were the three main brands of Nabisco's UK snack division, with Tudor being marketed mainly in the north of England and Smiths in the south. After the takeover by PepsiCo, the Tudor name was dropped, and the Smiths brand has become secondary to Walkers. The only products retaining the Smiths brand are Salt & Vinegar and Ready Salted Chipsticks, Frazzles and the "Savoury Selection", which includes Bacon Flavour Fries, Scampi Flavour Fries and Cheese Flavoured Moments. To promote the freshness of its products, Walkers began to package them in foil bags from 1993, then from 1996 began filling them with nitrogen instead of air. In 1997, Walkers became the brand name of Quavers and Monster Munch snacks. In January 1999, Walkers launched Max, a brand with a range of crisps, and then a new-look Quavers in March 1999. In April 2000, another of the Max flavours, called Red Hot Max, was launched, followed by Naked Max in June 2000. In February 2000, a new-look Cheetos was relaunched, serving as the only cheesy snack in the UK. In July 2000, Quavers were relaunched, with a new picture on the multipack. In March 2001, Walkers bought Squares, a range of snacks from Smiths. In November 2001, more Max flavours were introduced; they included chargrilled steak and chip shop curry. In May 2002, Walkers launched Sensations.
Sensations flavours include Thai Sweet Chilli, Roast Chicken & Thyme, and Balsamic Vinegar & Caramelised Onion. Walkers introduced a streaky bacon Quavers flavour, alongside salt & vinegar and prawn cocktail, in August 2002. In January 2003, Smiths brands Salt 'n' Shake, Scampi Fries and Bacon Fries were relaunched under the Walkers identity. In January 2003, Walkers bought Wotsits from Golden Wonder; Wotsits had replaced Cheetos in December 2002. In April 2004, Walkers launched a Flamin' Hot version of Wotsits, which replaced BBQ beef, and then Wotsits Twisted, a range of cheese puffs, in July 2004. In September 2007, Walkers launched Sunbites, a healthier range of lower-fat crisps made using whole grains. In July 2008, Walkers launched its "Do Us a Flavour" campaign, challenging the public to think up unique flavours for its crisps. In January 2009, six flavours were chosen from among the entries and released as special editions, available until May 2009. During this period, consumers could vote on their favourite, and the winner would become a permanent flavour. The winner was Builder's Breakfast by Emma Rushin from Belper in Derbyshire. This flavour was discontinued a year later, in May 2010, in order for Walkers to focus on the upcoming 'Flavour Cup'. In summer 2009, Walkers launched its premium "Red Sky" brand of "all natural" potato crisps and snacks. It was stated that Red Sky products were made from 100% natural ingredients, and that the makers "work in partnership with Cool Earth", a charity that protects endangered rainforest; Walkers made charitable donations proportionate to the number of purchases of Red Sky snacks. Walkers discontinued the range in 2014 following poor sales. In April 2010, the company launched a promotional campaign entitled the Walkers Flavour Cup to find the world's favourite flavour. The flavour with the most fans at the end of the competition would be declared the winner. Walkers encouraged people to engage on social media and to upload photos and videos to its website proving their 'Superfan' status. The best fan from each of the 15 flavours won £10,000. In the end, English roast beef & Yorkshire pudding won the Flavour Cup. For the 2011 Comic Relief, four celebrities (Jimmy Carr, Stephen Fry, Al Murray and Frank Skinner) each represented one of four new flavours. In early 2013, Walkers revised its packaging, with a new design and typeface. Slogans such as 'Distinctively Salt & Vinegar' and 'Classically Ready Salted' were added to the front of packs. The previous packaging design had only existed for 12 months. Along with this packaging design came news that the company would begin using real meat products in its Smoky Bacon and Roast Chicken flavoured crisps. This prompted opposition from vegetarians, vegans, Muslims and Jews, who were now unable to eat these flavours. In 2014, Lineker launched a new "Do Us a Flavour" Walkers competition which encouraged people to submit new flavours of crisps, with the best six being sold later in the year before a public vote to decide the winner. The winning entrant would receive £1m. The public had to pick one of Walkers' ingredients as a base – Somerset Cheddar, Devonshire chicken, Norfolk pork, Dorset sour cream, Vale of Evesham tomatoes and Aberdeen Angus beef – then choose their own unique flavour.
In 2015, Walkers launched the "Bring me Back" campaign, reintroducing Barbecue, Cheese and Chive, Beef and Onion, Lamb and Mint and Toasted Cheese flavours for a limited time. People voted on the Walkers website or used hashtags to decide which flavour would be reintroduced permanently (Beef and Onion was chosen). The Marmite flavour was also brought back permanently to coincide with the promotion. On 10 April 2016, Walkers launched the Spell and Go promotion, again fronted by Gary Lineker. This competition caused some controversy, as customers complained that it was impossible to win. The fairness of the competition was discussed on You and Yours, the consumer show on BBC Radio 4. Over 100 entrants complained to the Advertising Standards Authority, which, after completing an investigation, decided that elements were misleading, and the competition was banned. In 2018, Walkers came under pressure from campaigners to change its packaging due to its contribution to litter and plastic pollution. As part of the protest, a marine biology student wore a crisp packet dress to her graduation. She claimed the dress was inspired by litter she had seen on a beach. In September 2018, the Royal Mail appealed to customers to stop posting empty crisp packets to Walkers, which campaigners had asked people to do in order to "flood Walkers social media with pictures of us popping them in the post". Royal Mail was obliged by law to deliver the bags to Walkers' freepost address, but without envelopes they could not go through machines and had to be sorted by hand, causing delays. Product lines Core crisps Walkers' most common flavours of regular crisp are Ready Salted (sold in a red packet), Salt & Vinegar (green), Cheese & Onion (blue), Smoky Bacon (maroon) and Prawn Cocktail (pink). Other flavours are sold in other coloured packets, such as Beef & Onion (brown), Marmite (black), and Worcester sauce (purple). The unusual colours of Walkers packaging, with Salt & Vinegar in green (other brands use blue) and Cheese & Onion in blue (other brands use green), have been debated in the UK, with some believing Walkers switched the colours; however, Walkers have stated that they have always had that colour scheme. Actress Thandiwe Newton joked, 'Salt 'n Vinegar is BLUE, and Cheese and Onion is GREEN. Ok? Enough with this foolishness.' Some flavours were available only for a short time, either because they tied in with special promotions or because they failed to meet sales expectations. Walkers' "Great British Dinner" range included baked ham & mustard and chicken tikka. A series of "mystery flavours" were launched in 2012, and later revealed to be sour cream & spring onion, Lincolnshire sausage & brown sauce, and Birmingham chicken balti. In 2016, Walkers produced limited edition 'Winners - Salt and Victory' crisps to commemorate its home-town football team, Leicester City, winning the Premier League for the first time. Earlier that season, Walkers had given Leicester fans in attendance at a match versus Chelsea bags of "Vardy salted" crisps, bearing the image of the Foxes' striker.
Other lines Other Walkers products are: Baked crisp range Cheese Heads Crinkles Deep Ridged (now Max Double Crunch) Extra Crunchy 150g bags, launched in 2010 Lights (low fat crisps, formerly Lites) Market Deli crisps, pitta chips and tortilla chips Max Pops Potato Heads (discontinued in 2008), Salt 'n' Shake Sensations - a premium range of crisps, poppadums and nuts Squares Stars Sunbites - wholegrain crispy snacks Hoops and Crosses Chipsticks Doritos French Fries Frazzles Mix-Ups Quavers Monster Munch Sundog Savoury Popcorn Snaps Twisted Wotsits including the Wafflers variant In January 2019, Walkers unveiled new packaging for its main range, celebrating its British heritage through its design. The new packaging features the Walkers logo in the middle of each packet rather than centre-top, alongside a new series of illustrations which are laid out in the shape of a Union Jack flag, and feature icons and landmarks such as London's Big Ben and red telephone boxes, and Liverpool's Liver Building. Litter According to the environmental charity Keep Britain Tidy, Walkers crisp packets, along with Cadbury chocolate wrappers and Coca-Cola cans, were the three brands most commonly found as rubbish in UK streets in 2013. In December 2018, Walkers launched a recycling scheme for crisp packets after it was targeted by protests on the issue. Three months after its launch, more than half a million empty packets had been recycled. However, as UK consumers eat 6 billion packets of crisps per year, with Walkers producing 11 million packets per day, the campaign organisation 38 Degrees noted this represents only a small fraction of the number of packets made and sold annually. See also List of brand name snack foods References External links Snack food manufacturers of the United Kingdom Frito-Lay brands Brand name snack foods British brands Manufacturing companies based in Leicester Products introduced in 1948 Brand name potato chips and crisps 1989 mergers and acquisitions
228791
https://en.wikipedia.org/wiki/Climm
Climm
climm (previously mICQ) is a free CLI-based instant messaging client that runs on a wide variety of platforms, including AmigaOS, BeOS, Windows (using either Cygwin or MinGW), OS X, NetBSD/OpenBSD/FreeBSD, Linux, Solaris, HP-UX, and AIX. Functionality climm has many of the features the official ICQ client has, and more: It supports SSL-encrypted direct connections compatible with licq and SIM. It supports OTR encrypted messages. It is internationalized; German, English, and other translations are available, and it supports sending and receiving acknowledged and non-acknowledged Unicode-encoded messages (it even understands UTF-8 messages for message types for which the ICQ protocol does not normally use UTF-8). It is capable of running several UINs at the same time and is very configurable (e.g. different colors for incoming messages from different contacts or for different accounts). Due to its command-line interface, it has good usability for blind users through text-to-speech interfaces or Braille devices. climm also supports basic functionality of the XMPP protocol. History climm was originally developed as mICQ by Matt D. Smith as public domain software. Starting with mICQ 0.4.8 it was licensed under the GPLv2; not much of the original public-domain code has remained since then. All later additions were made by Rüdiger Kuhlmann, in particular the support for the ICQ v8 protocol. mICQ was renamed climm ("Command Line Interface Multi Messenger") with the change to version 0.6. climm was later relicensed to include the OpenSSL exception. See also Comparison of instant messaging clients References Notes Andreas Kneib (Feb 2004) Der direkte Draht. ICQ in der Kommandozeile (Direct Line. ICQ in the command line), LinuxUser Further reading Jonathan Corbet (February 18, 2003) The trojaning of mICQ, lwn.net External links ICQ protocol page Instant messaging clients for Linux MacOS instant messaging clients Windows instant messaging clients Amiga instant messaging clients Free instant messaging clients Cross-platform software Free software programmed in C Public-domain software with source code
3503014
https://en.wikipedia.org/wiki/Anonymous%20pipe
Anonymous pipe
In computer science, an anonymous pipe is a simplex FIFO communication channel that may be used for one-way interprocess communication (IPC). An implementation is often integrated into the operating system's file IO subsystem. Typically a parent program opens anonymous pipes, and creates a new process that inherits the other ends of the pipes, or creates several new processes and arranges them in a pipeline. Full-duplex (two-way) communication normally requires two anonymous pipes. Pipelines are supported in most popular operating systems, from Unix and DOS onwards, and are created using the "|" character in many shells. Unix Pipelines are an important part of many traditional Unix applications and support for them is well integrated into most Unix-like operating systems. Pipes are created using the pipe system call, which creates a new pipe and returns a pair of file descriptors referring to the read and write ends of the pipe. Many traditional Unix programs are designed as filters to work with pipes. Microsoft Windows Like many other device IO and IPC facilities in the Windows API, anonymous pipes are created and configured with API functions that are specific to the IO facility. In this case CreatePipe is used to create an anonymous pipe with separate handles for the read and write ends of the pipe. Read and write IO operations on the pipe are performed with the standard IO facility API functions ReadFile and WriteFile. On Microsoft Windows, reads and writes to anonymous pipes are always blocking. In other words, a read from an empty pipe will cause the calling thread to wait until at least one byte becomes available or an end-of-file is received as a result of the write handle of the pipe being closed. Likewise, a write to a full pipe will cause the calling thread to wait until space becomes available to store the data being written. Reads may return with fewer than the number of bytes requested (also called a short read). New processes can inherit handles to anonymous pipes in the creation process. See also Named pipe Anonymous named pipe Pipeline (Unix) References Hart, Johnson M. Windows System Programming, Third Edition. Addison-Wesley, 2005. Notes Inter-process communication
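The Unix usage described above can be made concrete with a short sketch. The following is a minimal illustration rather than anything from a particular manual page: it creates an anonymous pipe with pipe(), forks a child that writes a message into the write end, and has the parent read it back, closing the unused ends as is conventional.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];                     /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                 /* child: writer */
        close(fds[0]);              /* close unused read end */
        const char msg[] = "hello through the pipe\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(EXIT_SUCCESS);
    }

    /* parent: reader */
    close(fds[1]);                  /* close unused write end */
    char buf[64];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* read() returns 0 at end-of-file */
    close(fds[0]);
    wait(NULL);
    return EXIT_SUCCESS;
}

A Windows equivalent would use CreatePipe and ReadFile/WriteFile as described above, with inheritable handles passed to the child process at creation time.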
2281397
https://en.wikipedia.org/wiki/Shell%20%28computing%29
Shell (computing)
In computing, a shell is a computer program which exposes an operating system's services to a human user or other programs. In general, operating system shells use either a command-line interface (CLI) or graphical user interface (GUI), depending on a computer's role and particular operation. It is named a shell because it is the outermost layer around the operating system. Command-line shells require the user to be familiar with commands and their calling syntax, and to understand concepts about the shell-specific scripting language (for example, bash). Graphical shells place a low burden on beginning computer users, and are characterized as being easy to use. Since they also come with certain disadvantages, most GUI-enabled operating systems also provide CLI shells. Overview Operating systems provide various services to their users, including file management, process management (running and terminating applications), batch processing, and operating system monitoring and configuration. Most operating system shells are not direct interfaces to the underlying kernel, even if a shell communicates with the user via peripheral devices attached to the computer directly. Shells are actually special applications that use the kernel API in just the same way as it is used by other application programs. A shell manages the user–system interaction by prompting users for input, interpreting their input, and then handling output from the underlying operating system (much like a read–eval–print loop, REPL). Since the operating system shell is actually an application, it may easily be replaced with another similar application, for most operating systems. In addition to shells running on local systems, there are different ways to make remote systems available to local users; such approaches are usually referred to as remote access or remote administration. Initially available on multi-user mainframes, which provided text-based UIs for each active user simultaneously by means of a text terminal connected to the mainframe via serial line or modem, remote access has extended to Unix-like systems and Microsoft Windows. On Unix-like systems, Secure Shell protocol is usually used for text-based shells, while SSH tunneling can be used for X Window System–based graphical user interfaces (GUIs). On Microsoft Windows, Remote Desktop Protocol can be used to provide GUI remote access, and since Windows Vista, PowerShell Remote can be used for text-based remote access via WMI, RPC, and WS-Management. Most operating system shells fall into one of two categories: command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). Other possibilities, although not so common, include voice user interface and various implementations of a text-based user interface (TUI) that are not CLI. The relative merits of CLI- and GUI-based shells are often debated. Command-line shells A command-line interface (CLI) is an operating system shell that uses alphanumeric characters typed on a keyboard to provide instructions and data to the operating system, interactively.
For example, a teletypewriter can send codes representing keystrokes to a command interpreter program running on the computer; the command interpreter parses the sequence of keystrokes and responds with an error message if it cannot recognize the sequence of characters, or it may carry out some other program action such as loading an application program, listing files, logging in a user, and many others. Operating systems such as UNIX have a large variety of shell programs with different commands, syntax and capabilities, with the POSIX shell being a baseline. Some operating systems had only a single style of command interface; commodity operating systems such as MS-DOS came with a standard command interface (COMMAND.COM) but third-party interfaces were also often available, providing additional features or functions such as menuing or remote program execution. Application programs may also implement a command-line interface. For example, in Unix-like systems, the telnet program has a number of commands for controlling a link to a remote computer system. Since the commands to the program are made of the same keystrokes as the data being sent to a remote computer, some means of distinguishing the two is required. One approach is to define an escape sequence, a special local keystroke that is never passed on but is always interpreted by the local system; alternatively, the program can become modal, switching between interpreting commands from the keyboard and passing keystrokes on as data to be processed. A feature of many command-line shells is the ability to save sequences of commands for re-use. A data file can contain sequences of commands which the CLI can be made to follow as if typed in by a user. Special features in the CLI may apply when it is carrying out these stored instructions. Such batch files (script files) can be used repeatedly to automate routine operations such as initializing a set of programs when a system is restarted. Batch mode use of shells usually involves structures, conditionals, variables, and other elements of programming languages; some have the bare essentials needed for such a purpose, while others are very sophisticated programming languages in and of themselves. Conversely, some programming languages can be used interactively from an operating system shell or in a purpose-built program. The command-line shell may offer features such as command-line completion, where the interpreter expands commands based on a few characters input by the user. A command-line interpreter may offer a history function, so that the user can recall earlier commands issued to the system and repeat them, possibly with some editing. Since all commands to the operating system had to be typed by the user, short command names and compact systems for representing program options were common. Short names were sometimes hard for a user to recall, and early systems lacked the storage resources to provide a detailed on-line user instruction guide. Graphical shells A graphical user interface (GUI) provides means for manipulating programs graphically, by allowing for operations such as opening, closing, moving and resizing windows, as well as switching focus between windows. Graphical shells may be included with desktop environments or come separately, even as a set of loosely coupled utilities.
Most graphical user interfaces develop the metaphor of an "electronic desktop", where data files are represented as if they were paper documents on a desk, and application programs similarly have graphical representations instead of being invoked by command names. Unix-like systems Graphical shells typically build on top of a windowing system. In the case of X Window System or Wayland, the shell consists of an X window manager or a Wayland compositor, respectively, as well as of one or multiple programs providing the functionality to start installed applications, to manage open windows and virtual desktops, and often to support a widget engine. In the case of macOS, Quartz Compositor acts as the windowing system, and the shell consists of the Finder, the Dock, SystemUIServer, and Mission Control. Microsoft Windows Modern versions of the Microsoft Windows operating system use the Windows shell as their shell. Windows Shell provides the desktop environment, start menu, and task bar, as well as a graphical user interface for accessing the file management functions of the operating system. Older versions also include Program Manager, which was the shell for the 3.x series of Microsoft Windows, and which in fact shipped with later versions of Windows of both the 95 and NT types at least through Windows XP. The interfaces of Windows versions 1 and 2 were markedly different. Desktop applications are also considered shells, as long as they use a third-party engine. Likewise, many individuals and developers dissatisfied with the interface of Windows Explorer have developed software that either alters the functioning and appearance of the shell or replaces it entirely. WindowBlinds by Stardock is a good example of the former sort of application. LiteStep and Emerge Desktop are good examples of the latter. Interoperability programmes and purpose-designed software let Windows users use equivalents of many of the various Unix-based GUIs discussed above, as well as of the Macintosh. An equivalent of the OS/2 Presentation Manager for version 3.0 can run some OS/2 programmes under some conditions using the OS/2 environmental subsystem in versions of Windows NT. Other uses "Shell" is also used loosely to describe application software that is "built around" a particular component, such as web browsers and email clients, in analogy to the shells found in nature. Indeed, the (command-line) shell encapsulates the operating system kernel. These are also sometimes referred to as "wrappers". In expert systems, a shell is a piece of software that is an "empty" expert system without the knowledge base for any particular application. See also Comparison of command shells Human–computer interaction Internet Explorer shell Shell account Shell builtin Superuser Unix shell Window manager provides a rudimentary process management interface Read-eval-print loop also called language shell, a CLI for an interpreted programming language References Desktop environments
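As a rough illustration of the prompt, interpret and execute loop described in the command-line shells section above, here is a deliberately minimal sketch of a command interpreter in C. It is a toy written for this article only, not any real shell: it supports simple commands with whitespace-separated arguments and has no pipes, quoting, redirection or built-ins.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define MAX_ARGS 32

int main(void) {
    char line[1024];

    for (;;) {
        fputs("toysh> ", stdout);            /* prompt */
        if (!fgets(line, sizeof line, stdin))
            break;                           /* end-of-file: leave the loop */

        /* split the line into whitespace-separated tokens */
        char *argv[MAX_ARGS];
        int argc = 0;
        for (char *tok = strtok(line, " \t\n");
             tok && argc < MAX_ARGS - 1;
             tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;                        /* empty line: prompt again */

        pid_t pid = fork();
        if (pid == 0) {                      /* child: run the command */
            execvp(argv[0], argv);
            perror(argv[0]);                 /* reached only if exec fails */
            _exit(127);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);           /* parent: wait for completion */
        } else {
            perror("fork");
        }
    }
    return 0;
}

A real shell adds the features discussed above, such as history, completion, scripting constructs, redirection and pipelines, on top of essentially this loop.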
12967110
https://en.wikipedia.org/wiki/Notre%20Dame%20Fighting%20Irish%20football%20under%20Tyrone%20Willingham
Notre Dame Fighting Irish football under Tyrone Willingham
The Notre Dame Fighting Irish were led by Tyrone Willingham and represented the University of Notre Dame in NCAA Division I college football from 2002 to 2004. The team was an independent and played their home games in Notre Dame Stadium. Throughout the three seasons, the Irish were 21–16 (21–15 before Willingham was fired) and were invited to two bowl games, both of which they lost. After the 2001 season, fifth-year head coach Bob Davie was fired. His immediate replacement, George O'Leary, was forced to resign under some controversy for discrepancies on his résumé, and Willingham was chosen to replace him. Willingham made immediate changes to the program and won his first eight games. Although his team floundered at the end of the season and lost their bowl game, he led the team to 10 wins and was named "Coach of the Year" by two different publications. His second year began with the signing of a top-5 recruiting class to replace a number of players who graduated. Although the team began the season with a win, they lost their next two games, and freshman quarterback Brady Quinn became the starter. Quinn led the Irish to four more wins that season, and the team finished with a 5–7 record. Willingham's third season started with a loss, but three straight wins brought the team back into national prominence. The team went on to win six games, but their fifth loss of the season, a blowout to the University of Southern California (USC) Trojans, was Willingham's final game at Notre Dame. Although the Irish were invited to a bowl game at the end of the season, Willingham was fired. The eventual hiring of Charlie Weis as Willingham's replacement was called a good move, but Willingham's firing remained a controversial subject for years following his tenure. Before Willingham In the 2001 season, the Fighting Irish, led by fifth-year head coach Bob Davie, had a record of five wins and six losses. A day after the season ended, athletic director Kevin White announced to the media that Davie would not be retained as head coach of Notre Dame. A week after the firing of Davie, George O'Leary, seven-year head coach of Georgia Tech, was hired by Notre Dame for the head coaching position. Despite being a controversial choice criticized by some in the media, who also criticized Notre Dame for making a premature decision, O'Leary was happy to accept what was called his dream job. Five days after being hired, however, O'Leary resigned from the position. It was later revealed that O'Leary had lied on his résumé about receiving a varsity letter and a master's degree while in school. While O'Leary was criticized for lying, some in the media said that his resignation gave Notre Dame a chance to make a better decision. Two weeks after O'Leary resigned, Notre Dame signed Tyrone Willingham, the seventh-year coach of Stanford, to a six-year contract. Willingham became the school's first African American head coach in any sport. He immediately made changes to the Irish program, including changing the long-used Irish offense from an option attack to a West Coast type. He also made his first-year Fighting Irish team only the second in Notre Dame history to pick captains on a game-by-game basis. 2002 season Season overview The 2002 season became known as a "Return to Glory" for the Irish. This phrase appeared on a student shirt that created a "Sea of Green" in the Irish stands. It was picked up by many in the media and was used on the front cover of Sports Illustrated.
Despite not scoring an offensive touchdown in their first two games, the Irish won both, and in the process made Willingham the 24th Notre Dame head coach to win his opener in his first season. The team went on to win its next six games, including wins over Willingham's alma mater, Michigan State, and Stanford, his former team. The team was led throughout the season by quarterback Carlyle Holiday, former quarterback and wide receiver Arnaz Battle, and, on defense, Shane Walton. Running back Ryan Grant, who replaced Julius Jones after Jones was ruled out for academic reasons, also played an important role. During the Michigan State game, however, Holiday was injured and replaced by backup Pat Dillingham. Dillingham led the Irish to a comeback win on a screen pass to Battle in that game, and he continued the winning streak until Holiday returned for the Florida State game. In that game, Holiday threw a 65-yard touchdown pass to Battle on his first play, which helped the Irish win the game. The first Irish loss of the season came against the Boston College Eagles, mirroring the 1993 season when Notre Dame narrowly lost a chance to participate in the national championship game due to a loss to Boston College. Willingham, wanting the team to be a part of the "Sea of Green" in the stands, decided that the team should wear green for the game. In 1985, the last time the Irish wore green at home, they came out after halftime against USC and won the game 37–3. The ploy, however, did not work this time: an injured Holiday was replaced by Dillingham, and the Eagles defense returned an interception that sealed the loss for the Irish. The Fighting Irish won their next two games, including their 39th straight victory over Navy and a 42–0 blowout victory over struggling Rutgers. This gave Notre Dame a legitimate shot at a Bowl Championship Series (BCS) bowl game if they could win against perennial rival USC. The Irish were ranked higher than the Trojans, but USC quarterback Carson Palmer, who cited the game as the reason he went on to win the Heisman Trophy, threw for 425 yards in the Trojans' 31-point win. The Irish won 10 games but were not invited to a BCS bowl game, and they accepted a bid to play North Carolina State in the Gator Bowl instead. With both an offense and defense that outmatched the Irish, the Wolfpack won the game 28–6, giving the Irish their sixth consecutive bowl loss. Despite the loss, the Irish ended the season ranked in both the Associated Press (AP) and Coaches Polls. After the season, several Irish players were honored with post-season awards. Battle was named by one foundation as their sportsman of the year, while Walton was named as a Consensus All-American. Finally, Willingham was honored with two Coach of the Year awards, was named by Sporting News as "Sportsman of the Year", and was the only coach listed by Sporting News as one of their "Most Powerful People in Sports".
These new recruits included future stars Victor Abiamiri, Chinedum Ndukwe, Brady Quinn, Jeff Samardzija, and Tom Zbikowski. The Irish began their season ranked 19th and facing the hardest schedule in the nation. They opened against the Washington State Cougars, playing the team for the first time in the history of the program. The Irish came back from being down by 19 points to win in overtime, but Carlyle Holiday struggled as quarterback. In the next game against rival Michigan, the Wolverines avenged their 2002 loss by beating the Irish by a score of 38–0, the first shutout in the series in 100 years and the largest margin of victory ever between the two teams. After another loss to Michigan State, many Irish fans were calling for Holiday to be replaced by freshman Brady Quinn, who had seen his first collegiate action in the fourth quarter of the Michigan rout. Holiday was replaced as starter for the next game against Purdue. In Quinn's first start, the Irish were bolstered by his 297 passing yards on 59 attempts. However, Purdue's defense intercepted four of Quinn's passes and sacked him five times en route to a 23–10 Boilermaker victory. Quinn remained the starter and, after Willingham acknowledged that the running game needed to take more of a role, got his first win in the next game, against Pittsburgh. He was helped by Julius Jones' school-record 262 rushing yards. Notre Dame lost their next three games, including Willingham's second straight 31-point loss to USC, a last-minute loss to Boston College, and their first home shutout since 1978, to Florida State. The Irish players began to call the season disappointing, as the team needed to win their last four games to make a bowl game. They looked to have a chance of becoming bowl eligible, as their next three games were a last-minute win over Navy that extended the winning streak in that series to 40 games, a win on senior day over the Brigham Young University (BYU) Cougars, and a win over Stanford in which the Irish offense finally came together. Notre Dame lost their final game to Syracuse, however. With a 5–7 record, the Irish finished with the twelfth losing season in the history of the Notre Dame football program.
Some in the media began comparing Willingham to some of Notre Dame's legendary coaches and said the team would win seven or eight games in the season and be back in national championship contention by 2005. With renewed expectations, the Irish hoped to continue their streak and beat 15th-ranked Purdue, who had not won at Notre Dame in 30 years. The Boilermakers' quarterback, Kyle Orton, outplayed the Irish defense, handing them a 25-point loss to end the rally. The Irish got back on track and beat Stanford, making Notre Dame the second school to reach 800 wins. They also beat Navy for the 41st straight time, which moved Notre Dame into the rankings for the first time since their 2003 loss to Michigan. The Irish did not stay ranked for long, as Boston College once again beat the Irish on a late score. The Irish had three games left and needed one win to become bowl eligible; their next game was against the 9th-ranked Tennessee Volunteers in Knoxville. The Irish defense stepped up and, after knocking out quarterback Erik Ainge on a sack, returned an interception for a touchdown to upset the Volunteers and become bowl eligible. Once again ranked, the Irish returned home for their final home game against Pittsburgh. The Irish lost on a late score, allowing five passing touchdowns by an opponent at home for the first time ever. Visiting USC for the final regular season game, the Irish again lost to the Trojans by 31 points. The Irish accepted a bowl bid to play in the Insight Bowl. In a highly criticized move, Willingham was fired two days later. Defensive coordinator Kent Baer led the Irish, hoping to "win one for Ty." However, the Oregon State Beavers, led by four touchdown passes from Derek Anderson, beat the Irish in Notre Dame's seventh consecutive bowl loss. The Irish ended 2004 with a 6–6 record and in need of a coach. Aftermath of the Willingham firing In firing Willingham, the Notre Dame athletic department cited a relatively poor record of 21–15, a weak recruiting class, and three losses, each by 31 points, to rival USC. However, the Irish also hoped to entice Urban Meyer, the head coach of the Utah Utes, to lead Notre Dame. Meyer had just led the Utes to an undefeated season, and he had a clause in his contract that stated he could leave Utah without a penalty to coach for the Irish. When Meyer instead took the head coaching position at Florida, the Irish were ridiculed in the media with claims that the Notre Dame coaching position was no longer as prestigious as it had been in the past. After over a week without a coach, the Irish hired New England Patriots' offensive coordinator Charlie Weis as head coach. Weis was an alumnus of Notre Dame, and he became the first alumnus to coach the team since 1963. At least one sports writer stated that Weis was a choice that made sense for the program. Willingham, meanwhile, accepted a position as head coach of the University of Washington Huskies football team. See also Notre Dame Fighting Irish football under Bob Davie Notre Dame football yearly totals References 2000s in Indiana
765588
https://en.wikipedia.org/wiki/ANTLR
ANTLR
In computer-based language recognition, ANTLR (pronounced antler), or ANother Tool for Language Recognition, is a parser generator that uses LL(*) for parsing. ANTLR is the successor to the Purdue Compiler Construction Tool Set (PCCTS), first developed in 1989, and is under active development. Its maintainer is Professor Terence Parr of the University of San Francisco. Usage ANTLR takes as input a grammar that specifies a language and generates as output source code for a recognizer of that language. While Version 3 supported generating code in the programming languages Ada95, ActionScript, C, C#, Java, JavaScript, Objective-C, Perl, Python, Ruby, and Standard ML, Version 4 at present targets C#, C++, Dart, Java, JavaScript, Go, PHP, Python (2 and 3), and Swift. A language is specified using a context-free grammar expressed using Extended Backus–Naur Form (EBNF). ANTLR can generate lexers, parsers, tree parsers, and combined lexer-parsers. Parsers can automatically generate parse trees or abstract syntax trees, which can be further processed with tree parsers. ANTLR provides a single consistent notation for specifying lexers, parsers, and tree parsers. By default, ANTLR reads a grammar and generates a recognizer for the language defined by the grammar (i.e., a program that reads an input stream and generates an error if the input stream does not conform to the syntax specified by the grammar). If there are no syntax errors, the default action is to simply exit without printing any message. In order to do something useful with the language, actions can be attached to grammar elements in the grammar. These actions are written in the programming language in which the recognizer is being generated. When the recognizer is being generated, the actions are embedded in the source code of the recognizer at the appropriate points. Actions can be used to build and check symbol tables and to emit instructions in a target language, in the case of a compiler. Other than lexers and parsers, ANTLR can be used to generate tree parsers. These are recognizers that process abstract syntax trees, which can be automatically generated by parsers. These tree parsers are unique to ANTLR and help in processing abstract syntax trees. Licensing ANTLR 3 and ANTLR 4 are free software, published under a three-clause BSD License. Prior versions were released as public domain software. Documentation, derived from Parr's book The Definitive ANTLR 4 Reference, is included with the BSD-licensed ANTLR 4 source. Various plugins have been developed for the Eclipse development environment to support the ANTLR grammar, including ANTLR Studio, a proprietary product, as well as the "ANTLR 2" and "ANTLR 3" plugins for Eclipse hosted on SourceForge. ANTLR 4 ANTLR 4 deals with direct left recursion correctly, but not with left recursion in general, i.e., a grammar rule x that refers to a rule y that refers back to x (indirect left recursion). Development As reported on the tools page of the ANTLR project, plug-ins that enable features like syntax highlighting, syntax error checking and code completion are freely available for the most common IDEs (IntelliJ IDEA, NetBeans, Eclipse, Visual Studio and Visual Studio Code). Projects Software built using ANTLR includes: Groovy. Jython. Hibernate OpenJDK Compiler Grammar project experimental version of the javac compiler based upon a grammar written in ANTLR. Apex, Salesforce.com's programming language. The expression evaluator in Numbers, Apple's spreadsheet. Twitter's search query language. Weblogic server. Apache Cassandra. Processing.
JabRef. Presto (SQL query engine) MySQL Workbench Over 200 grammars implemented in ANTLR 4 are available on GitHub. They range from grammars for a URL to grammars for entire languages like C, Java and Go. Example In the following example, a parser in ANTLR describes sums of expressions of the form "1 + 2 + 3":

// Common options, for example, the target language
options
{
 language = "CSharp";
}
// Followed by the parser
class SumParser extends Parser;
options
{
 k = 1; // Parser Lookahead: 1 Token
}
// Definition of an expression
statement: INTEGER (PLUS^ INTEGER)*;

// Here is the Lexer
class SumLexer extends Lexer;
options
{
 k = 1; // Lexer Lookahead: 1 character
}
PLUS: '+';
DIGIT: ('0'..'9');
INTEGER: (DIGIT)+;

The following listing demonstrates the call of the parser in a program:

TextReader reader;
// (...) Fill TextReader with characters
SumLexer lexer = new SumLexer(reader);
SumParser parser = new SumParser(lexer);
parser.statement();

See also Coco/R DMS Software Reengineering Toolkit JavaCC Modular Syntax Definition Formalism Parboiled (Java) Parsing expression grammar SableCC References Bibliography Further reading External links ANTLR (mega) Tutorial Why Use ANTLR? ANTLR Studio 1992 software Free compilers and interpreters Parser generators Software using the BSD license Public-domain software
23453203
https://en.wikipedia.org/wiki/Unix%20filesystem
Unix filesystem
In Unix and operating systems inspired by it, the file system is considered a central component of the operating system. It was also one of the first parts of the system to be designed and implemented by Ken Thompson in the first experimental version of Unix, dated 1969. As in other operating systems, the filesystem provides information storage and retrieval, and one of several forms of interprocess communication, in that the many small programs that traditionally form a Unix system can store information in files so that other programs can read them, although pipes complemented it in this role starting with the Third Edition. Also, the filesystem provides access to other resources through so-called device files that are entry points to terminals, printers, and mice. The rest of this article uses Unix as a generic name to refer to both the original Unix operating system and its many workalikes. Principles The filesystem appears as one rooted tree of directories. Instead of addressing separate volumes such as disk partitions, removable media, and network shares as separate trees (as done in DOS and Windows: each drive has a drive letter that denotes the root of its file system tree), such volumes can be mounted on a directory, causing the volume's file system tree to appear as that directory in the larger tree. The root of the entire tree is denoted /. In the original Bell Labs Unix, a two-disk setup was customary, where the first disk contained startup programs, while the second contained users' files and programs. This second disk was mounted at the empty directory named usr on the first disk, causing the two disks to appear as one filesystem, with the second disk’s contents viewable at /usr. Unix directories do not contain files. Instead, they contain the names of files paired with references to so-called inodes, which in turn contain both the file and its metadata (owner, permissions, time of last access, etc., but no name). Multiple names in the file system may refer to the same file, a feature termed a hard link. The mathematical traits of hard links make the file system a limited type of directed acyclic graph, although the directories still form a tree, as they typically may not be hard-linked. (As originally envisioned in 1969, the Unix file system would in fact be used as a general graph with hard links to directories providing navigation, instead of path names.) File types The original Unix file system supported three types of files: ordinary files, directories, and "special files", also termed device files. The Berkeley Software Distribution (BSD) and System V each added a file type to be used for interprocess communication: BSD added sockets, while System V added FIFO files. BSD also added symbolic links (often termed "symlinks") to the range of file types, which are files that refer to other files, and complement hard links. Symlinks were modeled after a similar feature in Multics, and differ from hard links in that they may span filesystems and that their existence is independent of the target object. Other Unix systems may support additional types of files. Conventional directory layout Certain conventions exist for locating some kinds of files, such as programs, system configuration files, and users' home directories. These were first documented in the hier(7) man page since Version 7 Unix; subsequent versions, derivatives and clones typically have a similar man page. The details of the directory layout have varied over time. 
Although the file system layout is not part of the Single UNIX Specification, several attempts exist to standardize (parts of) it, such as the System V Application Binary Interface, the Intel Binary Compatibility Standard, the Common Operating System Environment, and Linux Foundation's Filesystem Hierarchy Standard (FHS). Here is a generalized overview of common locations of files on a Unix operating system: See also Btrfs ext2 ext3 ext4 Filesystem Hierarchy Standard HAMMER JFS (file system) Unix File System Veritas File System ZFS References Unix file system technology File system management
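The relationship between file names and inodes described above can be observed directly. The sketch below is illustrative only, using file names invented for the example: it creates a file, gives it a second name with link(), and uses stat() to show that both names report the same inode number and a link count of two.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

static void show(const char *path) {
    struct stat st;
    if (stat(path, &st) == -1) {
        perror(path);
        exit(EXIT_FAILURE);
    }
    printf("%-12s inode=%lu links=%lu\n",
           path, (unsigned long)st.st_ino, (unsigned long)st.st_nlink);
}

int main(void) {
    /* create an empty file (hypothetical example path) */
    int fd = open("example.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }
    close(fd);

    /* give the same inode a second directory entry (a hard link) */
    if (link("example.txt", "alias.txt") == -1) {
        perror("link");
        return EXIT_FAILURE;
    }

    show("example.txt");   /* both names print the same inode number */
    show("alias.txt");     /* and a link count of 2 */

    unlink("alias.txt");   /* removing one name leaves the file intact */
    unlink("example.txt"); /* the inode is freed once the last name goes */
    return EXIT_SUCCESS;
}

Because the inode carries the metadata and the directory entry carries only the name, either name can be removed first; the file's data survives until its last name (and last open file descriptor) is gone.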
15590171
https://en.wikipedia.org/wiki/9912%20Donizetti
9912 Donizetti
9912 Donizetti is a stony Rafita asteroid from the central regions of the asteroid belt, approximately 7 kilometers in diameter. It was discovered during the third Palomar–Leiden Trojan survey in 1977, and named after Italian composer Gaetano Donizetti. Discovery Donizetti was discovered on 16 October 1977, by the Dutch astronomers Ingrid and Cornelis van Houten, on photographic plates taken by Dutch–American astronomer Tom Gehrels at Palomar Observatory in California, United States. Trojan survey The survey designation "T-3" stands for the third and last Palomar–Leiden Trojan survey, named after the fruitful collaboration of the Palomar and Leiden observatories in the 1960s and 1970s. Gehrels used Palomar's Samuel Oschin telescope (also known as the 48-inch Schmidt Telescope), and shipped the photographic plates to Ingrid and Cornelis van Houten at Leiden Observatory where astrometry was carried out. The trio are credited with the discovery of several thousand asteroids. Orbit and classification It orbits the Sun in the central main-belt at a distance of 2.2–2.9 AU once every 4 years and 1 month (1,499 days). Its orbit has an eccentricity of 0.15 and an inclination of 7° with respect to the ecliptic. The body's observation arc begins at the discovering Palomar Observatory on 7 October 1977, just 9 days prior to its official discovery observation. Rafita family Donizetti is a stony member of the Rafita family, which is located in the central main-belt just beyond the 3:1 mean-motion orbital resonance with Jupiter. The family consists of more than a thousand members, the largest being 1658 Innes and 1587 Kahrstedt, approximately 14 and 15 kilometers in diameter, respectively. The family's namesake, 1644 Rafita, is considered an interloper to the family itself. Physical characteristics Donizetti has been characterized as a stony S-type asteroid by the Pan-STARRS photometric survey. Rotation period In October 2010, a rotational lightcurve of Donizetti was obtained from photometric observations in the R-band at the Palomar Transient Factory (PTF) in California. Lightcurve analysis gave a rotation period of 6.228 hours with a brightness variation of 0.19 magnitude. In December 2011, PTF obtained a second lightcurve, also in the R-band, that gave a concurring period of 6.230 hours and a higher amplitude of 0.32 magnitude. Diameter and albedo According to the surveys carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Donizetti measures 6.922 kilometers in diameter and its surface has an albedo of 0.255. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for stony asteroids of 0.20 and calculates a diameter of 6.54 kilometers based on an absolute magnitude of 13.29. Naming This minor planet was named for the Italian composer of symphonies, church and chamber music and operas, Gaetano Donizetti (1797–1848). The approved naming citation was published by the Minor Planet Center on 2 April 1999. References External links Asteroid Lightcurve Database (LCDB), query form Dictionary of Minor Planet Names, Google books Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels Minor planets named for people Named minor planets Gaetano Donizetti
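The Collaborative Asteroid Lightcurve Link figure quoted above follows from the standard relation between an asteroid's diameter D (in km), geometric albedo p and absolute magnitude H: D = (1329 / sqrt(p)) * 10^(-H/5). The short sketch below simply evaluates this formula with the values given in the article; it is an illustration written for this article, not part of any cited source, and should be compiled with the math library (e.g. -lm).

#include <stdio.h>
#include <math.h>

/* Diameter (km) from geometric albedo p and absolute magnitude H:
   D = 1329 / sqrt(p) * 10^(-H/5) */
static double diameter_km(double albedo, double abs_mag) {
    return 1329.0 / sqrt(albedo) * pow(10.0, -abs_mag / 5.0);
}

int main(void) {
    /* values quoted in the article for 9912 Donizetti (CALL assumptions) */
    double p = 0.20;     /* assumed standard albedo for stony asteroids */
    double H = 13.29;    /* absolute magnitude */
    printf("D = %.2f km\n", diameter_km(p, H));  /* prints roughly 6.53 km */
    return 0;
}

Plugging in p = 0.20 and H = 13.29 gives about 6.5 km, matching the 6.54-kilometer figure in the text.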
11199437
https://en.wikipedia.org/wiki/Jeffrey%20Hunker
Jeffrey Hunker
Jeffrey Hunker (January 20, 1957 – May 31, 2013) was an American cyber security consultant and writer. Biography Hunker received his bachelor's degree from Harvard College and his Ph.D. from Harvard Business School. He joined the Boston Consulting Group before becoming an advisor in the Department of Commerce and the founding director of the Critical Infrastructure Assurance Office (later subsumed by the Department of Homeland Security National Protection and Programs Directorate). This led him to serve on the National Security Council as the Senior Director for Critical Infrastructure. Hunker was also a Vice President at Kidder, Peabody & Co., dean of the Heinz College at Carnegie Mellon, and a member of the Council on Foreign Relations. He was credited with coining the term cyberinfrastructure and worked closely with Richard A. Clarke on cyberterrorism issues. Hunker's research was primarily concerned with homeland and information security. Hunker was also the Carnegie Mellon representative to the Institute for Information Infrastructure Protection. In 2008 Hunker was charged three times with driving under the influence, followed by another incident on Thanksgiving 2009. In May 2010 Hunker pleaded guilty to these four drunken driving charges and was sentenced to 3 to 6 months in jail. He was paroled at sentencing and was sentenced to 24 months of probation. This sentence was terminated early on June 2, 2011. In 2010 his book Creeping Failure: How We Broke the Internet and What We Can Do to Fix It was published by McClelland and Stewart, a division of Random House. Creeping Failure is a Scientific American magazine Recommended Book. In 2011 a second edition was released. Also in 2010 he was co-editor of Insider Threats in Cyber Security, and his article (co-authored with Christian Probst) The Risk of Risk Analysis and its Relation to the Economics of Insider Threats appears in The Economics of Information Security and Privacy. Until 2013, he was a Visiting Scholar in the Computer Science Department at the University of California, Davis, and was also consulting with a major philanthropic foundation in Pittsburgh. His most recent books were, as co-editor and contributor, Insider Threats in Cyber Security (Springer, 2010), and Cybersecurity: Shared Risks, Shared Responsibilities (Carolina Academic Press, 2012). Hunker died on May 31, 2013. References Boston Consulting Group people Carnegie Mellon University faculty Cyberinfrastructure Harvard Business School alumni Writers about computer security United States National Security Council staffers 1957 births 2013 deaths Harvard College alumni
19673
https://en.wikipedia.org/wiki/MP3
MP3
MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is a coding format for digital audio developed largely by the Fraunhofer Society in Germany, with support from other digital scientists in the United States and elsewhere. Originally defined as the third audio format of the MPEG-1 standard, it was retained and further extended — defining additional bit-rates and support for more audio channels — as the third audio format of the subsequent MPEG-2 standard. A third version, known as MPEG 2.5 — extended to better support lower bit rates — is commonly implemented, but is not a recognized standard. MP3 (or mp3) as a file format commonly designates files containing an elementary stream of MPEG-1 Audio or MPEG-2 Audio encoded data, without other complexities of the MP3 standard. With regard to audio compression (the aspect of the standard most apparent to end-users, and for which it is best known), MP3 uses lossy data-compression to encode data using inexact approximations and the partial discarding of data. This allows a large reduction in file sizes when compared to uncompressed audio. The combination of small size and acceptable fidelity led to a boom in the distribution of music over the Internet in the mid- to late-1990s, with MP3 serving as an enabling technology at a time when bandwidth and storage were still at a premium. The MP3 format soon became associated with controversies surrounding copyright infringement, music piracy, and the file ripping/sharing services MP3.com and Napster, among others. With the advent of portable media players, a product category also including smartphones, MP3 support remains near-universal. MP3 compression works by reducing (or approximating) the accuracy of certain components of sound that are considered (by psychoacoustic analysis) to be beyond the hearing capabilities of most humans. This method is commonly referred to as perceptual coding or as psychoacoustic modeling. The remaining audio information is then recorded in a space-efficient manner, using MDCT and FFT algorithms. Compared to CD-quality digital audio, MP3 compression can commonly achieve a 75 to 95% reduction in size. For example, an MP3 encoded at a constant bitrate of 128 kbit/s would result in a file approximately 9% of the size of the original CD audio. In the early 2000s, compact disc players increasingly adopted support for playback of MP3 files on data CDs. The Moving Picture Experts Group (MPEG) designed MP3 as part of its MPEG-1, and later MPEG-2, standards. MPEG-1 Audio (MPEG-1 Part 3), which included MPEG-1 Audio Layer I, II and III, was approved as a committee draft for an ISO/IEC standard in 1991, finalised in 1992, and published in 1993 as ISO/IEC 11172-3:1993. An MPEG-2 Audio (MPEG-2 Part 3) extension with lower sample- and bit-rates was published in 1995 as ISO/IEC 13818-3:1995. It requires only minimal modifications to existing MPEG-1 decoders (recognition of the MPEG-2 bit in the header and addition of the new lower sample and bit rates). History Background The MP3 lossy audio-data compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking. In 1894, the American physicist Alfred M. Mayer reported that a tone could be rendered inaudible by another tone of lower frequency. In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. 
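The figures given above for 128 kbit/s encoding can be checked with simple arithmetic: CD audio is 44,100 samples per second, 16 bits per sample, in two channels, which is about 1,411 kbit/s, so a 128 kbit/s MP3 keeps roughly 9% of the original data rate. The sketch below, which assumes a four-minute track purely for illustration, works the numbers through.

#include <stdio.h>

int main(void) {
    /* CD audio parameters */
    const double sample_rate = 44100.0;   /* samples per second */
    const double bits_per_sample = 16.0;
    const double channels = 2.0;
    const double cd_kbps = sample_rate * bits_per_sample * channels / 1000.0;  /* about 1411.2 */

    const double mp3_kbps = 128.0;        /* constant-bitrate MP3 */
    const double seconds = 4 * 60;        /* hypothetical four-minute track */

    printf("CD rate: %.1f kbit/s, MP3 keeps %.1f%% of the data rate\n",
           cd_kbps, 100.0 * mp3_kbps / cd_kbps);
    printf("Four minutes: CD %.1f MB vs MP3 %.1f MB\n",
           cd_kbps * seconds / 8.0 / 1000.0,    /* kbit -> megabytes (10^6 bytes) */
           mp3_kbps * seconds / 8.0 / 1000.0);
    return 0;
}

For the assumed four-minute track this works out to roughly 42 MB of CD audio against roughly 3.8 MB of MP3 data, consistent with the approximately 9% figure quoted above.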
Between 1967 and 1974, Eberhard Zwicker did work in the areas of tuning and masking of critical frequency-bands, which in turn built on the fundamental research in the area from Harvey Fletcher and his collaborators at Bell Labs. Perceptual coding was first used for speech coding compression with linear predictive coding (LPC), which has origins in the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. In 1978, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs proposed an LPC speech codec, called adaptive predictive coding, that used a psychoacoustic coding-algorithm exploiting the masking properties of the human ear. Further optimisation by Schroeder and Atal with J.L. Hall was later reported in a 1979 paper. That same year, a psychoacoustic masking codec was also proposed by M. A. Krasner, who published and produced hardware for speech (not usable as music bit-compression), but the publication of his results in a relatively obscure Lincoln Laboratory Technical Report did not immediately influence the mainstream of psychoacoustic codec-development. The discrete cosine transform (DCT), a type of transform coding for lossy compression, proposed by Nasir Ahmed in 1972, was developed by Ahmed with T. Natarajan and K. R. Rao in 1973; they published their results in 1974. This led to the development of the modified discrete cosine transform (MDCT), proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MDCT later became a core part of the MP3 algorithm. Ernst Terhardt et al. constructed an algorithm describing auditory masking with high accuracy in 1982. This work added to a variety of reports from authors dating back to Fletcher, and to the work that initially determined critical ratios and critical bandwidths. In 1985, Atal and Schroeder presented code-excited linear prediction (CELP), an LPC-based perceptual speech-coding algorithm with auditory masking that achieved a significant data compression ratio for its time. IEEE's refereed Journal on Selected Areas in Communications reported on a wide variety of (mostly perceptual) audio compression algorithms in 1988. The "Voice Coding for Communications" edition published in February 1988 reported on a wide range of established, working audio bit compression technologies, some of them using auditory masking as part of their fundamental design, and several showing real-time hardware implementations. Development The genesis of the MP3 technology is fully described in a paper from Professor Hans Musmann, who chaired the ISO MPEG Audio group for several years. In December 1988, MPEG called for an audio coding standard. In June 1989, 14 audio coding algorithms were submitted. Because of certain similarities between these coding proposals, they were clustered into four development groups. The first group was ASPEC, by Fraunhofer Gesellschaft, AT&T, France Telecom, Deutsche and Thomson-Brandt. The second group was MUSICAM, by Matsushita, CCETT, ITT and Philips. The third group was ATAC, by Fujitsu, JVC, NEC and Sony. And the fourth group was SB-ADPCM, by NTT and BTRL. The immediate predecessors of MP3 were "Optimum Coding in the Frequency Domain" (OCF), and Perceptual Transform Coding (PXFM). 
These two codecs, along with block-switching contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG and which won the quality competition, but which was mistakenly rejected as too complex to implement. The first practical implementation of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use) was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips. Another predecessor of the MP3 format and technology is to be found in the perceptual codec MUSICAM, based on an integer-arithmetic 32-sub-band filter bank driven by a psychoacoustic model. It was primarily designed for Digital Audio Broadcasting (digital radio) and digital TV, and its basic principles were disclosed to the scientific community by CCETT (France) and IRT (Germany) in Atlanta during an IEEE-ICASSP conference in 1991, after they had worked on MUSICAM with Matsushita and Philips since 1989. This codec, incorporated into a broadcasting system using COFDM modulation, was demonstrated on air and in the field with Radio Canada and CRC Canada during the NAB show (Las Vegas) in 1991. The implementation of the audio part of this broadcasting system was based on a two-chip encoder (one for the subband transform, one for the psychoacoustic model designed by the team of G. Stoll (IRT, Germany), later known as psychoacoustic model I) and a real-time decoder using one Motorola 56001 DSP chip running integer-arithmetic software designed by Y. F. Dehery's team (CCETT, France). The simplicity of the corresponding decoder, together with the high audio quality of this codec (which used, for the first time, a 48 kHz sampling frequency and a 20-bit-per-sample input format, the highest available sampling standard in 1991, compatible with the AES/EBU professional digital input studio standard), were the main reasons for later adopting the characteristics of MUSICAM as the basic features of an advanced digital music compression codec. During the development of the MUSICAM encoding software, Stoll and Dehery's team made thorough use of a set of high-quality audio assessment material selected by a group of audio professionals from the European Broadcasting Union and later used as a reference for the assessment of music compression codecs. The subband coding technique was found to be efficient, not only for the perceptual coding of high-quality sound materials but especially for the encoding of critical percussive sound materials (drums, triangle, ...), due to the specific temporal masking effect of the MUSICAM sub-band filterbank (this advantage being a specific feature of short transform coding techniques). As a doctoral student at Germany's University of Erlangen-Nuremberg, Karlheinz Brandenburg began working on digital music compression in the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989. MP3 is directly descended from OCF and PXFM, representing the outcome of the collaboration of Brandenburg — working as a postdoctoral researcher at AT&T-Bell Labs with James D. Johnston ("JJ") of AT&T-Bell Labs — with the Fraunhofer Institute for Integrated Circuits, Erlangen (where he worked with Bernhard Grill and four other researchers – "The Original Six"), with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders. In 1990, Brandenburg became an assistant professor at Erlangen-Nuremberg. 
While there, he continued to work on music compression with scientists at the Fraunhofer Society's Heinrich Hertz Institute. In 1993, he joined the staff of Fraunhofer HHI. The song "Tom's Diner" by Suzanne Vega was the first song used by Karlheinz Brandenburg to develop the MP3 format. Brandenburg adopted the song for testing purposes, listening to it again and again each time he refined the scheme, making sure it did not adversely affect the subtlety of Vega's voice. Accordingly, he dubbed Vega the "Mother of MP3". Standardization In 1991, there were two available proposals that were assessed for an MPEG audio standard: MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) and ASPEC (Adaptive Spectral Perceptual Entropy Coding). The MUSICAM technique, proposed by Philips (Netherlands), CCETT (France), the Institute for Broadcast Technology (Germany), and Matsushita (Japan), was chosen due to its simplicity and error robustness, as well as for its high level of computational efficiency. The MUSICAM format, based on sub-band coding, became the basis for the MPEG Audio compression format, incorporating, for example, its frame structure, header format, sample rates, etc. While much of MUSICAM's technology and ideas were incorporated into the definition of MPEG Audio Layer I and Layer II, only the filter bank and the data structure based on 1152-sample framing (file format and byte-oriented stream) of MUSICAM remained in the Layer III (MP3) format, as part of the computationally inefficient hybrid filter bank. Under the chairmanship of Professor Musmann of the Leibniz University Hannover, the editing of the standard was delegated to Leon van de Kerkhof (Netherlands), Gerhard Stoll (Germany), and Yves-François Dehery (France), who worked on Layer I and Layer II. ASPEC was the joint proposal of AT&T Bell Laboratories, Thomson Consumer Electronics, Fraunhofer Society and CNET. It provided the highest coding efficiency. A working group consisting of van de Kerkhof, Stoll, Leonardo Chiariglione (CSELT VP for Media), Yves-François Dehery, Karlheinz Brandenburg (Germany) and James D. Johnston (United States) took ideas from ASPEC, integrated the filter bank from Layer II, added some of their own ideas such as the joint stereo coding of MUSICAM and created the MP3 format, which was designed to achieve the same quality at 128 kbit/s as MP2 at 192 kbit/s. The algorithms for MPEG-1 Audio Layer I, II and III were approved in 1991 and finalized in 1992 as part of MPEG-1, the first standard suite by MPEG, which resulted in the international standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio or MPEG-1 Part 3), published in 1993. Files or data streams conforming to this standard must handle sample rates of 48 kHz, 44.1 kHz and 32 kHz, and continue to be supported by current MP3 players and decoders. Thus the first generation of MP3 defined interpretations of MP3 frame data structures and size layouts. Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2, more formally known as international standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Part 3 or backwards compatible MPEG-2 Audio or MPEG-2 Audio BC), originally published in 1995. MPEG-2 Part 3 (ISO/IEC 13818-3) defined 42 additional bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half of those originally defined in MPEG-1 Audio. 
This reduction in sampling rate serves to cut the available frequency fidelity in half while likewise cutting the bitrate by 50%. MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel. An MP3 coded with MPEG-2 reproduces half the audio bandwidth of an MPEG-1 encoding, which is appropriate for piano and singing. A third generation of "MP3" style data streams (files) extended the MPEG-2 ideas and implementation, but was named MPEG-2.5 audio, since MPEG-3 already had a different meaning. This extension was developed at Fraunhofer IIS, the registered patent holders of MP3, by reducing the frame sync field in the MP3 header from 12 to 11 bits. As in the transition from MPEG-1 to MPEG-2, MPEG-2.5 adds additional sampling rates exactly half of those available using MPEG-2. It thus widens the scope of MP3 to include human speech and other applications yet requires only 25% of the bandwidth (frequency reproduction) possible using MPEG-1 sampling rates. While not an ISO recognized standard, MPEG-2.5 is widely supported by both inexpensive Chinese and brand-name digital audio players as well as computer-software-based MP3 encoders (LAME), decoders (FFmpeg) and players (MPC), which add these additional MP3 frame types. Each generation of MP3 thus supports 3 sampling rates exactly half those of the previous generation, for a total of 9 varieties of MP3 format files. The sampling rates of MPEG-1, 2 and 2.5 are compared later in the article. MPEG-2.5 is supported by LAME (since 2000), Media Player Classic (MPC), iTunes, and FFmpeg. MPEG-2.5 was not developed by MPEG (see above) and was never approved as an international standard. MPEG-2.5 is thus an unofficial or proprietary extension to the MP3 format. It is nonetheless ubiquitous and especially advantageous for low-bit-rate human speech applications. The ISO standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio) defined three formats: the MPEG-1 Audio Layer I, Layer II and Layer III. The ISO standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Audio) defined an extended version of MPEG-1 Audio: MPEG-2 Audio Layer I, Layer II and Layer III. MPEG-2 Audio (MPEG-2 Part 3) should not be confused with MPEG-2 AAC (MPEG-2 Part 7 – ISO/IEC 13818-7). Compression efficiency of encoders is typically defined by the bit rate, because compression ratio depends on the bit depth and sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the Compact Disc (CD) parameters as references (44.1 kHz, 2 channels at 16 bits per channel or 2×16 bit), or sometimes the Digital Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which demonstrates the problem with use of the term compression ratio for lossy encoders. Karlheinz Brandenburg used a CD recording of Suzanne Vega's song "Tom's Diner" to assess and refine the MP3 compression algorithm. This song was chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections in the compression format during playback. This particular track has an interesting property in that the two channels are almost, but not completely, the same, leading to a case where Binaural Masking Level Depression causes spatial unmasking of noise artifacts unless the encoder properly recognizes the situation and applies corrections similar to those detailed in the MPEG-2 AAC psychoacoustic model. 
Some more critical audio excerpts (glockenspiel, triangle, accordion, etc.) were taken from the EBU V3/SQAM reference compact disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats. LAME is generally regarded as the most advanced MP3 encoder. LAME includes variable bit rate (VBR) encoding, which uses a quality parameter rather than a bit rate goal. Later versions (2008 and later) support an n.nnn quality goal which automatically selects MPEG-2 or MPEG-2.5 sampling rates as appropriate for human speech recordings, which need only 5512 Hz of bandwidth. Going public A reference simulation software implementation, written in the C language and later known as ISO 11172-5, was developed (in 1991–1996) by the members of the ISO MPEG Audio committee in order to produce bit-compliant MPEG Audio files (Layer 1, Layer 2, Layer 3). It was approved as a committee draft of an ISO/IEC technical report in March 1994 and printed as document CD 11172-5 in April 1994. It was approved as a draft technical report (DTR/DIS) in November 1994, finalized in 1996 and published as international standard ISO/IEC TR 11172-5:1998 in 1998. The reference software in C language was later published as a freely available ISO standard. Working in non-real time on a number of operating systems, it was able to demonstrate the first real-time hardware decoding (DSP-based) of compressed audio. Some other real-time implementations of MPEG Audio encoders and decoders were available for the purpose of digital broadcasting (DAB radio, DVB television) towards consumer receivers and set-top boxes. On 7 July 1994, the Fraunhofer Society released the first software MP3 encoder, called l3enc. The filename extension .mp3 was chosen by the Fraunhofer team on 14 July 1995 (previously, the files had been named .bit). With the first real-time software MP3 player WinPlay3 (released 9 September 1995), many people were able to encode and play back MP3 files on their PCs. Because of the relatively small hard drives of the era (≈500–1000 MB), lossy compression was essential to store multiple albums' worth of music on a home computer as full recordings (as opposed to MIDI notation, or tracker files which combined notation with short recordings of instruments playing single notes). As sound scholar Jonathan Sterne notes, "An Australian hacker acquired l3enc using a stolen credit card. The hacker then reverse-engineered the software, wrote a new user interface, and redistributed it for free, naming it "thank you Fraunhofer"". Fraunhofer example implementation A hacker named SoloH discovered the source code of the "dist10" MPEG reference implementation shortly after the release on the servers of the University of Erlangen. He developed a higher-quality version and spread it on the internet. This code started the widespread CD ripping and digital music distribution as MP3 over the internet. Internet distribution In the second half of the 1990s, MP3 files began to spread on the Internet, often via underground pirated song networks. The first known experiment in Internet distribution was organized in the early 1990s by the Internet Underground Music Archive, better known by the acronym IUMA. After some experiments using uncompressed audio files, this archive started to deliver compressed MPEG Audio files over the low-speed Internet of the time, using the MP2 (Layer II) format and, later, MP3 files once the standard was fully completed. 
The popularity of MP3s began to rise rapidly with the advent of Nullsoft's audio player Winamp, released in 1997. In 1998, the first portable solid-state digital audio player, the MPMan, developed by SaeHan Information Systems of Seoul, South Korea, was released; the Rio PMP300 went on sale later that year, despite legal suppression efforts by the RIAA. In November 1997, the website mp3.com was offering thousands of MP3s created by independent artists for free. The small size of MP3 files enabled widespread peer-to-peer file sharing of music ripped from CDs, which would have previously been nearly impossible. The first large peer-to-peer filesharing network, Napster, was launched in 1999. The ease of creating and sharing MP3s resulted in widespread copyright infringement. Major record companies argued that this free sharing of music reduced sales, and called it "music piracy". They reacted by pursuing lawsuits against Napster, which was eventually shut down and later sold, and against individual users who engaged in file sharing. Unauthorized MP3 file sharing continues on next-generation peer-to-peer networks. Some authorized services, such as Beatport, Bleep, Juno Records, eMusic, Zune Marketplace, Walmart.com, Rhapsody, the recording-industry-approved reincarnation of Napster, and Amazon.com sell unrestricted music in the MP3 format. Design File structure An MP3 file is made up of MP3 frames, which consist of a header and a data block. This sequence of frames is called an elementary stream. Due to the "bit reservoir", frames are not independent items and cannot usually be extracted on arbitrary frame boundaries. The MP3 data blocks contain the (compressed) audio information in terms of frequencies and amplitudes. The MP3 header consists of a sync word, which is used to identify the beginning of a valid frame. This is followed by a bit indicating that this is the MPEG standard and two bits that indicate that layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3 file. ISO/IEC 11172-3 defines the range of values for each section of the header along with the specification of the header; a simplified parsing sketch is given below. Most MP3 files today contain ID3 metadata, which precedes or follows the MP3 frames. The data stream can contain an optional checksum. Joint stereo is done only on a frame-to-frame basis. Encoding and decoding The MP3 encoding algorithm is generally split into four parts. Part 1 divides the audio signal into smaller pieces, called frames, and a modified discrete cosine transform (MDCT) filter is then performed on the output. Part 2 passes the sample into a 1024-point fast Fourier transform (FFT), then the psychoacoustic model is applied and another MDCT filter is performed on the output. Part 3 quantizes and encodes each sample, a step known as noise allocation, which adjusts itself in order to meet the bit rate and sound masking requirements. Part 4 formats the bitstream, called an audio frame, which is made up of 4 parts: the header, error check, audio data, and ancillary data. The MPEG-1 standard does not include a precise specification for an MP3 encoder, but does provide example psychoacoustic models, rate loop, and the like in the non-normative part of the original standard. MPEG-2 doubles the number of sampling rates which are supported and MPEG-2.5 adds 3 more. When this was written, the suggested implementations were quite dated. 
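To make the frame header layout described in the File structure section above more concrete, the following is a minimal sketch in C of decoding the 32-bit header of a single frame. It is only an illustration under simplifying assumptions: it handles the common MPEG-1 Layer III case and omits the MPEG-2/2.5 tables, the CRC check and the bit reservoir; the field positions and lookup tables follow the widely documented header layout of ISO/IEC 11172-3.

```c
#include <stdint.h>

/* Bit rates (kbit/s) for MPEG-1 Layer III, indexed by the 4-bit bitrate field.
   Index 0 means "free format"; index 15 is invalid. */
static const int mpeg1_layer3_bitrate[16] = {
    0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, -1
};

/* Sampling rates (Hz) for MPEG-1, indexed by the 2-bit sampling-rate field. */
static const int mpeg1_samplerate[4] = { 44100, 48000, 32000, -1 };

/* Decode the four header bytes of one MP3 frame (MPEG-1 Layer III only).
   Returns 0 on success, -1 if this is not a valid MPEG-1 Layer III header. */
int parse_mp3_header(const uint8_t h[4],
                     int *bitrate_kbps, int *samplerate_hz, int *channels)
{
    uint32_t hdr = ((uint32_t)h[0] << 24) | ((uint32_t)h[1] << 16) |
                   ((uint32_t)h[2] << 8)  |  (uint32_t)h[3];

    if ((hdr & 0xFFE00000u) != 0xFFE00000u) return -1; /* 11-bit sync word        */
    if (((hdr >> 19) & 0x3) != 0x3) return -1;         /* version: 11 = MPEG-1    */
    if (((hdr >> 17) & 0x3) != 0x1) return -1;         /* layer:   01 = Layer III */

    int bitrate_index    = (hdr >> 12) & 0xF;
    int samplerate_index = (hdr >> 10) & 0x3;
    int channel_mode     = (hdr >>  6) & 0x3;          /* 3 = mono, else 2 channels */

    if (bitrate_index == 0 || bitrate_index == 15 || samplerate_index == 3)
        return -1;

    *bitrate_kbps  = mpeg1_layer3_bitrate[bitrate_index];
    *samplerate_hz = mpeg1_samplerate[samplerate_index];
    *channels      = (channel_mode == 3) ? 1 : 2;
    return 0;
}
```

Under the same assumptions, a player can step from one frame to the next by computing the frame length as 144 × bitrate ÷ sample rate bytes, plus one byte when the padding bit (bit 9 of the header) is set.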
Implementers of the standard were supposed to devise their own algorithms suitable for removing parts of the information from the audio input. As a result, many different MP3 encoders became available, each producing files of differing quality. Comparisons were widely available, so it was easy for a prospective user of an encoder to research the best choice. Some encoders that were proficient at encoding at higher bit rates (such as LAME) were not necessarily as good at lower bit rates. Over time, LAME evolved on the SourceForge website until it became the de facto CBR MP3 encoder. Later an ABR mode was added. Work progressed on true variable bit rate using a quality goal between 0 and 10. Eventually, fractional settings (such as -V 9.600) could generate excellent-quality, low-bit-rate voice encoding at only 41 kbit/s using the MPEG-2.5 extensions. During encoding, 576 time-domain samples are taken and are transformed to 576 frequency-domain samples. If there is a transient, 192 samples are taken instead of 576. This is done to limit the temporal spread of quantization noise accompanying the transient (see psychoacoustics). Frequency resolution is limited by the small long block window size, which decreases coding efficiency. Time resolution can be too low for highly transient signals and may cause smearing of percussive sounds. Due to the tree structure of the filter bank, pre-echo problems are made worse, as the combined impulse response of the two filter banks does not, and cannot, provide an optimum solution in time/frequency resolution. Additionally, the combining of the two filter banks' outputs creates aliasing problems that must be handled partially by the "aliasing compensation" stage; however, that creates excess energy to be coded in the frequency domain, thereby decreasing coding efficiency. Decoding, on the other hand, is carefully defined in the standard. Most decoders are "bitstream compliant", which means that the decompressed output that they produce from a given MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically in the ISO/IEC standard document (ISO/IEC 11172-3). Therefore, comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Over time this concern has become less of an issue as CPU speeds transitioned from MHz to GHz. Encoder/decoder overall delay is not defined, which means there is no official provision for gapless playback. However, some encoders such as LAME can attach additional metadata that will allow players that can handle it to deliver seamless playback. Quality When performing lossy audio encoding, such as creating an MP3 data stream, there is a trade-off between the amount of data generated and the sound quality of the results. The person generating an MP3 selects a bit rate, which specifies how many kilobits per second of audio is desired. The higher the bit rate, the larger the MP3 data stream will be, and, generally, the closer it will sound to the original recording. With too low a bit rate, compression artifacts (i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such as ringing or pre-echo are usually heard. A sample of applause or a triangle instrument encoded with a relatively low bit rate provides a good example of compression artifacts. 
Most subjective tests of perceptual codecs tend to avoid using these types of sound materials; however, the artifacts generated by percussive sounds are barely perceptible due to the specific temporal masking feature of the 32-sub-band filterbank of Layer II on which the format is based. Besides the bit rate of an encoded piece of audio, the quality of MP3 encoded sound also depends on the quality of the encoder algorithm as well as the complexity of the signal being encoded. As the MP3 standard allows quite a bit of freedom with encoding algorithms, different encoders do feature quite different quality, even with identical bit rates. As an example, in a public listening test featuring two early MP3 encoders set at about 128 kbit/s, one scored 3.66 on a 1–5 scale, while the other scored only 2.22. Quality is dependent on the choice of encoder and encoding parameters. This observation caused a revolution in audio encoding. Early on, bitrate was the prime and only consideration. At the time, MP3 files were of the very simplest type: they used the same bit rate for the entire file, a process known as Constant Bit Rate (CBR) encoding. Using a constant bit rate makes encoding simpler and less CPU intensive. However, it is also possible to create files where the bit rate changes throughout the file. These are known as Variable Bit Rate (VBR) files. The bit reservoir and VBR encoding were actually part of the original MPEG-1 standard. The concept behind them is that, in any piece of audio, some sections are easier to compress, such as silence or music containing only a few tones, while others will be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for the less complex passages and a higher one for the more complex parts. With some advanced MP3 encoders, it is possible to specify a given quality, and the encoder will adjust the bit rate accordingly. Users who desire a particular "quality setting" that is transparent to their ears can use this value when encoding all of their music and, generally speaking, need not worry about performing personal listening tests on each piece of music to determine the correct bit rate. Perceived quality can be influenced by listening environment (ambient noise), listener attention, and listener training, and in most cases by listener audio equipment (such as sound cards, speakers and headphones). Furthermore, for lectures and human speech applications, sufficient quality may be achieved with a lower quality setting, which reduces encoding time and complexity. A test given to new students by Stanford University Music Professor Jonathan Berger showed that student preference for MP3-quality music has risen each year. Berger said the students seem to prefer the 'sizzle' sounds that MP3s bring to music. In an in-depth study of MP3 audio quality, sound artist and composer Ryan Maguire's project "The Ghost in the MP3" isolates the sounds lost during MP3 compression. In 2015, he released the track "moDernisT" (an anagram of "Tom's Diner"), composed exclusively from the sounds deleted during MP3 compression of the song "Tom's Diner", the track originally used in the formulation of the MP3 standard. A detailed account of the techniques used to isolate the sounds deleted during MP3 compression, along with the conceptual motivation for the project, was published in the 2014 Proceedings of the International Computer Music Conference. 
Bit rate Bitrate is the product of the sample rate and the number of bits per sample used to encode the music. CD audio is 44100 samples per second. The number of bits per sample also depends on the number of audio channels. CD is stereo and 16 bits per channel. So, multiplying 44100 by 32 gives 1411200, the bitrate of uncompressed CD digital audio (a worked example is given below). MP3 was designed to encode this 1411 kbit/s data at 320 kbit/s or less. As less complex passages are detected by MP3 algorithms, lower bitrates may be employed. When using MPEG-2 instead of MPEG-1, MP3 supports only lower sampling rates (16000, 22050 or 24000 samples per second) and offers choices of bitrate as low as 8 kbit/s but no higher than 160 kbit/s. By lowering the sampling rate, MPEG-2 layer III removes all frequencies above half the new sampling rate that may have been present in the source audio. Fourteen selected bit rates are allowed in the MPEG-1 Audio Layer III standard: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s, along with the 3 highest available sampling frequencies of 32, 44.1 and 48 kHz. MPEG-2 Audio Layer III also allows 14 somewhat different (and mostly lower) bit rates of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144 and 160 kbit/s, with sampling frequencies of 16, 22.05 and 24 kHz, which are exactly half those of MPEG-1. MPEG-2.5 Audio Layer III frames are limited to only 8 bit rates of 8, 16, 24, 32, 40, 48, 56 and 64 kbit/s, with 3 even lower sampling frequencies of 8, 11.025, and 12 kHz. On earlier systems that only support the MPEG-1 Audio Layer III standard, MP3 files with a bit rate below 32 kbit/s might be played back sped-up and pitched-up. Earlier systems also lack fast-forward and rewind playback controls for MP3. MPEG-1 frames contain the most detail in 320 kbit/s mode, the highest allowable bit rate setting, with silence and simple tones still requiring 32 kbit/s. MPEG-2 frames can capture sound reproduction up to 12 kHz, at bit rates of up to 160 kbit/s. MP3 files made with MPEG-2 don't have 20 kHz bandwidth because of the Nyquist–Shannon sampling theorem. Frequency reproduction is always strictly less than half of the sampling frequency, and imperfect filters require a larger margin for error (noise level versus sharpness of filter), so an 8 kHz sampling rate limits the maximum frequency to 4 kHz, while a 48 kHz sampling rate limits an MP3 to a maximum 24 kHz sound reproduction. MPEG-2 uses half and MPEG-2.5 only a quarter of MPEG-1 sample rates. For the general field of human speech reproduction, a bandwidth of 5512 Hz is sufficient to produce excellent results (for voice) using a sampling rate of 11025 Hz and VBR encoding from a 44100 Hz (standard) WAV file. English speakers average 41–42 kbit/s with the -V 9.6 setting, but this may vary with the amount of silence recorded or the rate of delivery (words per minute). Resampling to 12000 Hz (6 kHz bandwidth) is selected by the LAME parameter -V 9.4. Likewise, -V 9.2 selects a 16000 Hz sample rate and a resultant 8 kHz lowpass filtering. For more information, see the Nyquist–Shannon sampling theorem. Older versions of LAME and FFmpeg only support integer arguments for the variable bit rate quality selection parameter. The n.nnn quality parameter (-V) is documented at lame.sourceforge.net but is only supported in LAME with the new-style VBR quality selector, not average bit rate (ABR). A sample rate of 44.1 kHz is commonly used for music reproduction, because this is also used for CD audio, the main source used for creating MP3 files. 
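As a worked example of the arithmetic above, the short C program below reproduces the uncompressed CD bit rate and the approximate compression ratios for some common MP3 bit rates. It is purely illustrative and assumes nothing beyond the figures already given in this article (44100 samples per second, 16 bits per sample, two channels).

```c
#include <stdio.h>

int main(void)
{
    const int sample_rate     = 44100; /* CD audio: samples per second */
    const int bits_per_sample = 16;    /* per channel                  */
    const int channels        = 2;     /* stereo                       */

    /* 44100 * 16 * 2 = 1,411,200 bit/s, i.e. about 1411 kbit/s */
    const int cd_bitrate = sample_rate * bits_per_sample * channels;
    printf("Uncompressed CD bitrate: %d bit/s (%.1f kbit/s)\n",
           cd_bitrate, cd_bitrate / 1000.0);

    /* Approximate compression ratios for common MP3 bit rates */
    const int mp3_kbps[] = { 128, 160, 192, 320 };
    for (int i = 0; i < 4; i++) {
        double ratio = (cd_bitrate / 1000.0) / mp3_kbps[i];
        printf("%3d kbit/s -> roughly %.1f:1 compression\n", mp3_kbps[i], ratio);
    }
    return 0;
}
```

At 128 kbit/s, one minute of audio therefore occupies about 0.96 MB, compared with roughly 10.6 MB per minute for uncompressed CD audio.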
A great variety of bit rates are used on the Internet. A bit rate of 128 kbit/s is commonly used, at a compression ratio of 11:1, offering adequate audio quality in a relatively small space. As Internet bandwidth availability and hard drive sizes have increased, higher bit rates up to 320 kbit/s are widespread. Uncompressed audio as stored on an audio-CD has a bit rate of 1,411.2 kbit/s, (16 bit/sample × 44100 samples/second × 2 channels / 1000 bits/kilobit), so the bitrates 128, 160 and 192 kbit/s represent compression ratios of approximately 11:1, 9:1 and 7:1 respectively. Non-standard bit rates up to 640 kbit/s can be achieved with the LAME encoder and the freeformat option, although few MP3 players can play those files. According to the ISO standard, decoders are only required to be able to decode streams up to 320 kbit/s. Early MPEG Layer III encoders used what is now called Constant Bit Rate (CBR). The software was only able to use a uniform bitrate on all frames in an MP3 file. Later more sophisticated MP3 encoders were able to use the bit reservoir to target an average bit rate selecting the encoding rate for each frame based on the complexity of the sound in that portion of the recording. A more sophisticated MP3 encoder can produce variable bitrate audio. MPEG audio may use bitrate switching on a per-frame basis, but only layer III decoders must support it. VBR is used when the goal is to achieve a fixed level of quality. The final file size of a VBR encoding is less predictable than with constant bitrate. Average bitrate is a type of VBR implemented as a compromise between the two: the bitrate is allowed to vary for more consistent quality, but is controlled to remain near an average value chosen by the user, for predictable file sizes. Although an MP3 decoder must support VBR to be standards compliant, historically some decoders have bugs with VBR decoding, particularly before VBR encoders became widespread. The most evolved LAME MP3 encoder supports the generation of VBR, ABR, and even the older CBR MP3 formats. Layer III audio can also use a "bit reservoir", a partially full frame's ability to hold part of the next frame's audio data, allowing temporary changes in effective bitrate, even in a constant bitrate stream. Internal handling of the bit reservoir increases encoding delay. There is no scale factor band 21 (sfb21) for frequencies above approx 16 kHz, forcing the encoder to choose between less accurate representation in band 21 or less efficient storage in all bands below band 21, the latter resulting in wasted bitrate in VBR encoding. Ancillary data The ancillary data field can be used to store user defined data. The ancillary data is optional and the number of bits available is not explicitly given. The ancillary data is located after the Huffman code bits and ranges to where the next frame's main_data_begin points to. Encoder mp3PRO used ancillary data to encode extra information which could improve audio quality when decoded with its own algorithm. Metadata A "tag" in an audio file is a section of the file that contains metadata such as the title, artist, album, track number or other information about the file's contents. The MP3 standards do not define tag formats for MP3 files, nor is there a standard container format that would support metadata and obviate the need for tags. However, several de facto standards for tag formats exist. As of 2010, the most widespread are ID3v1 and ID3v2, and the more recently introduced APEv2. 
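As an illustration of the simplest of these de facto tag formats, the sketch below reads an ID3v1 tag: when present, it is a fixed 128-byte record at the very end of the file, beginning with the ASCII characters "TAG". This is a hedged example of the well-known ID3v1 layout only; ID3v2 and APEv2 tags are variable-length structures and considerably more involved.

```c
#include <stdio.h>
#include <string.h>

/* ID3v1: a fixed 128-byte record appended to the end of an MP3 file. */
struct id3v1_tag {
    char header[3];      /* "TAG"                                            */
    char title[30];
    char artist[30];
    char album[30];
    char year[4];
    char comment[30];    /* in ID3v1.1 the final byte can hold a track number */
    unsigned char genre;
};

/* Returns 0 and fills *tag if an ID3v1 tag is found, -1 otherwise. */
int read_id3v1(const char *path, struct id3v1_tag *tag)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    if (fseek(f, -128L, SEEK_END) != 0 ||
        fread(tag, 1, sizeof *tag, f) != sizeof *tag ||
        memcmp(tag->header, "TAG", 3) != 0) {
        fclose(f);
        return -1;       /* no ID3v1 tag present */
    }
    fclose(f);
    return 0;
}

int main(int argc, char **argv)
{
    struct id3v1_tag tag;
    if (argc > 1 && read_id3v1(argv[1], &tag) == 0) {
        /* Fields are padded and not guaranteed to be NUL-terminated. */
        printf("Title : %.30s\nArtist: %.30s\nAlbum : %.30s\nYear  : %.4s\n",
               tag.title, tag.artist, tag.album, tag.year);
    }
    return 0;
}
```

Because every field has a fixed width and the record always sits in the last 128 bytes, ID3v1 tags can be read or rewritten without touching the MP3 frames themselves.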
These tags are normally embedded at the beginning or end of MP3 files, separate from the actual MP3 frame data. MP3 decoders either extract information from the tags, or just treat them as ignorable, non-MP3 junk data. Playing and editing software often contains tag editing functionality, but there are also tag editor applications dedicated to the purpose. Aside from metadata pertaining to the audio content, tags may also be used for DRM. ReplayGain is a standard for measuring and storing the loudness of an MP3 file (audio normalization) in its metadata tag, enabling a ReplayGain-compliant player to automatically adjust the overall playback volume for each file. MP3Gain may be used to reversibly modify files based on ReplayGain measurements so that adjusted playback can be achieved on players without ReplayGain capability. Licensing, ownership, and legislation The basic MP3 decoding and encoding technology is patent-free in the European Union, all patents having expired there by 2012 at the latest. In the United States, the technology became substantially patent-free on 16 April 2017 (see below). MP3 patents expired in the US between 2007 and 2017. In the past, many organizations have claimed ownership of patents related to MP3 decoding or encoding. These claims led to a number of legal threats and actions from a variety of sources. As a result, uncertainty about which patents must have been licensed in order to create MP3 products without committing patent infringement in countries that allow software patents was a common feature of the early stages of adoption of the technology. The initial near-complete MPEG-1 standard (parts 1, 2 and 3) was publicly available on 6 December 1991 as ISO CD 11172. In most countries, patents cannot be filed after prior art has been made public, and patents expire 20 years after the initial filing date, which can be up to 12 months later for filings in other countries. As a result, patents required to implement MP3 expired in most countries by December 2012, 21 years after the publication of ISO CD 11172. An exception is the United States, where patents in force but filed prior to 8 June 1995 expire after the later of 17 years from the issue date or 20 years from the priority date. A lengthy patent prosecution process may result in a patent issuing much later than normally expected (see submarine patents). The various MP3-related patents expired on dates ranging from 2007 to 2017 in the United States. Patents for anything disclosed in ISO CD 11172 filed a year or more after its publication are questionable. If only the known MP3 patents filed by December 1992 are considered, then MP3 decoding has been patent-free in the US since 22 September 2015, when , which had a PCT filing in October 1992, expired. If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017, when , held and administered by Technicolor, expired. As a result, many free and open-source software projects, such as the Fedora operating system, have decided to start shipping MP3 support by default, and users will no longer have to resort to installing unofficial packages maintained by third party software repositories for MP3 playback or encoding. Technicolor (formerly called Thomson Consumer Electronics) claimed to control MP3 licensing of the Layer 3 patents in many countries, including the United States, Japan, Canada and EU countries. 
Technicolor had been actively enforcing these patents. MP3 license revenues from Technicolor's administration generated about €100 million for the Fraunhofer Society in 2005. In September 1998, the Fraunhofer Institute sent a letter to several developers of MP3 software stating that a license was required to "distribute and/or sell decoders and/or encoders". The letter claimed that unlicensed products "infringe the patent rights of Fraunhofer and Thomson. To make, sell or distribute products using the [MPEG Layer-3] standard and thus our patents, you need to obtain a license under these patents from us." This led to the situation where the LAME MP3 encoder project could not offer its users official binaries that could run on their computer. The project's position was that as source code, LAME was simply a description of how an MP3 encoder could be implemented. Unofficially, compiled binaries were available from other sources. Sisvel S.p.A., a Luxembourg-based company, administers licenses for patents applying to MPEG Audio. They, along with its United States subsidiary Audio MPEG, Inc. previously sued Thomson for patent infringement on MP3 technology, but those disputes were resolved in November 2005 with Sisvel granting Thomson a license to their patents. Motorola followed soon after, and signed with Sisvel to license MP3-related patents in December 2005. Except for three patents, the US patents administered by Sisvel had all expired in 2015. The three exceptions are: , expired February 2017; , expired February 2017; and , expired 9 April 2017. In September 2006, German officials seized MP3 players from SanDisk's booth at the IFA show in Berlin after an Italian patents firm won an injunction on behalf of Sisvel against SanDisk in a dispute over licensing rights. The injunction was later reversed by a Berlin judge, but that reversal was in turn blocked the same day by another judge from the same court, "bringing the Patent Wild West to Germany" in the words of one commentator. In February 2007, Texas MP3 Technologies sued Apple, Samsung Electronics and Sandisk in eastern Texas federal court, claiming infringement of a portable MP3 player patent that Texas MP3 said it had been assigned. Apple, Samsung, and Sandisk all settled the claims against them in January 2009. Alcatel-Lucent has asserted several MP3 coding and compression patents, allegedly inherited from AT&T-Bell Labs, in litigation of its own. In November 2006, before the companies' merger, Alcatel sued Microsoft for allegedly infringing seven patents. On 23 February 2007, a San Diego jury awarded Alcatel-Lucent US $1.52 billion in damages for infringement of two of them. The court subsequently revoked the award, however, finding that one patent had not been infringed and that the other was not owned by Alcatel-Lucent; it was co-owned by AT&T and Fraunhofer, who had licensed it to Microsoft, the judge ruled. That defense judgment was upheld on appeal in 2008. See Alcatel-Lucent v. Microsoft for more information. Alternative technologies Other lossy formats exist. Among these, Advanced Audio Coding (AAC) is the most widely used, and was designed to be the successor to MP3. There also exist other lossy formats such as mp3PRO and MP2. They are members of the same technological family as MP3 and depend on roughly similar psychoacoustic models and MDCT algorithms. Whereas MP3 uses a hybrid coding approach that is part MDCT and part FFT, AAC is purely MDCT, significantly improving compression efficiency. 
Many of the basic patents underlying these formats are held by Fraunhofer Society, Alcatel-Lucent, Thomson Consumer Electronics, Bell, Dolby, LG Electronics, NEC, NTT Docomo, Panasonic, Sony Corporation, ETRI, JVC Kenwood, Philips, Microsoft, and NTT. When the digital audio player market was taking off, MP3 was widely adopted as the standard hence the popular name "MP3 player". Sony was an exception and used their own ATRAC codec taken from their MiniDisc format, which Sony claimed was better. Following criticism and lower than expected Walkman sales, in 2004 Sony for the first time introduced native MP3 support to its Walkman players. There are also open compression formats like Opus and Vorbis that are available free of charge and without any known patent restrictions. Some of the newer audio compression formats, such as AAC, WMA Pro and Vorbis, are free of some limitations inherent to the MP3 format that cannot be overcome by any MP3 encoder. Besides lossy compression methods, lossless formats are a significant alternative to MP3 because they provide unaltered audio content, though with an increased file size compared to lossy compression. Lossless formats include FLAC (Free Lossless Audio Codec), Apple Lossless and many others. See also Advanced Audio Coding (AAC) AIMP Audio coding format Audio Data Compression Comparison of audio coding formats FLAC Fraunhofer FDK AAC Fraunhofer Society Harald Popp High-Efficiency Advanced Audio Coding (HE-AAC) ID3 iPod Lossless compression Lossy compression Media player software Monkey's Audio (APE) MP3 blog MP3 player MP3 Surround MP3HD MPEG-1 Audio Layer II (MP2) MPEG-4 Part 14 Musepack (MPC) Opus Podcast Portable media player Speech coding TwinVQ Unified Speech and Audio Coding (xHE-AAC) Vorbis (OGG) WavPack (WV) Winamp Windows Media Audio (WMA) References Further reading External links MP3-history.com, The Story of MP3: How MP3 was invented, by Fraunhofer IIS MP3 News Archive , Over 1000 articles from 1999 to 2011 focused on MP3 and digital audio MPEG.chiariglione.org, MPEG Official Web site Computer-related introductions in 1993 Audio codecs Data compression MPEG Technicolor SA
37477675
https://en.wikipedia.org/wiki/Nexus%2010
Nexus 10
The Nexus 10 is a tablet computer co-developed by Google and Samsung Electronics that runs the Android operating system. It is the second tablet in the Google Nexus series, a family of Android consumer devices marketed by Google and built by an OEM partner. Following the success of the 7-inch Nexus 7, the first Google Nexus tablet, the Nexus 10 was released with a 10.1-inch, 2560×1600 pixel display, which was the world's highest resolution tablet display at the time of its release. The Nexus 10 was announced on October 29, 2012, and became available on November 13, 2012. The device is available in two storage sizes, 16 GB for US$399 and 32 GB for US$499. Along with the Nexus 4 mobile phone, the Nexus 10 launched Android 4.2 ("Jelly Bean"), which offered several new features, such as: 360° panoramic photo stitching called "Photo Sphere"; a quick settings menu; widgets on the lock screen; gesture typing; an updated version of Google Now; and multiple user accounts for tablets. Google revealed the device on October 29, 2012 to mixed-to-favorable reactions. Due to high demand, the device quickly sold out through the Google Play Store. Since its release, the device has gone through three major software updates and is currently upgradable to Android 5.1.1 ("Lollipop"). Official software support for Android versions after 5.1.1 will not be offered; however, security patches will be provided for at least 3 years after the release of the device. History Google was scheduled to launch the Nexus 10 along with the Nexus 4 and Android 4.2 at a conference event in New York City on October 29, 2012; however, the event was cancelled because of Hurricane Sandy. Instead, the device was announced the same day in an official press release on Google's blog, along with the Nexus 4 and the 32 GB, cellular connectivity-capable Nexus 7. The Nexus 10 became available for sale in the United States, the United Kingdom, Australia, France, Germany, Spain, and Canada on November 13, 2012. Japan was supposed to be included in the November 13 launch but the release was postponed; the tablet became available there on February 5. The Nexus 10 went through repeated cycles of availability and sell-outs across international Google Play Stores, with the 32 GB version more consistently out of stock. The 32 GB Nexus 10 was sold out within hours of its release in Google Play, while the 16 GB version was still available for sale. Google Play stores in the United States and Canada received the Nexus 10 and quickly sold out. Features Software The Nexus 10 shipped with Android 4.2 ("Jelly Bean") as its operating system and has been upgradable to Android 5.1.1 ("Lollipop") since March 9, 2015. It ships with preinstalled applications, such as Google Chrome, Gmail, Play Music, Play Books, Play Movies, the Play Store application, YouTube, Currents, Google+, Maps, and People. Hardware and design The Nexus 10 features a Samsung Exynos 5250 system on chip, with a dual-core 1.7 GHz Cortex A15 central processing unit and a quad-core ARM Mali T604 graphics processing unit. The device also includes a primary 5-megapixel, rear-facing camera with LED flash, able to shoot 1080p video at 30 frames per second and take 2592×1936 resolution images with features such as autofocus, face detection and geotagging, and a secondary 1.9-megapixel, front-facing camera. It is encased in a plastic chassis and the rear of the device comprises a smooth, plastic surface, except for a strip of removable, dimpled plastic material, similar to the Nexus 7's rear, that hides FCC brandings. 
The rear of the Nexus 10 also includes a large "nexus" branding, for the line of mobile devices it belongs to, and a smaller "Samsung" branding, for the manufacturer of the device. There is, also, no mention of Google, the device's distributor and the maker of the Nexus 10's operating system, Android, on its hardware. On the top of the Nexus 10 are the volume controls and the power button, on the left side of the device lies a 3.5mm headphone jack and a microUSB port, used for charging or connecting the device to a PC or other USB-compatible device. On the right side of the device, there is only a microHDMI port and at the bottom, there is a magnetic pogo pin used for docking and charging. A 9,000 mAh lithium polymer battery powers the Nexus 10, and is reportedly capable of 9 hours of video playback, 90 hours of audio playback, 7 hours of web browsing, and 500 hours (20.8 days) of standby time. The Nexus 10 has a liquid crystal display with a 2560×1600 WQXGA display resolution, giving it a pixel density of 300 pixels per inch and a 16:10 aspect ratio. The display also features a "True RGB Real Stripe PLS" TFT panel and a capacitive, multi-touch screen, protected by scratch-resistant Corning Gorilla Glass 2, and is capable of displaying over 16.7 million colours. Reception The Nexus 10 received mixed-to-favorable reviews. TechCrunch columnist Drew Olanoff said that Android was a better experience on a tablet than iOS and concluded "Apple has an advantage, but Google is right there on the cusp of something amazing," The Gadget Show called the Nexus 10 "the best 'big' Android tablet we’ve ever seen. The screen alone puts it ahead of the competition, the screen on the Google Nexus 10 is phenomenal. It’s every bit as stunning as the Retina Display on the third and fourth generation iPads. It really has to be seen to be believed: it’s like a printed sticker on the glass, and makes reading a delight." Tim Stevens of Engadget says "Google's latest reference tablet packs an amazing resolution but ultimately fails to distance itself from the competition." Eric Franklin of CNET states "[...] the Nexus 10 has superior design and performance, and the features available in Android 4.2 may be worth price of admission alone." James Rogerson of TechRadar wrote "Ultimately, other than the price, there's little reason for Apple fans to jump ship to the Nexus 10, equally the Nexus 10 puts up enough of a defence to keep the Android faithful happy." Laptop Magazine, CNET, and PCWorld all rated the Nexus 10 at 4 out of 5 stars, while TechRadar gave the device 4.5 stars out of 5. PC Magazine and Wired gave the device a rating of 3 out of 5, and 8 out of 10, respectively. Commentators noted the lack of an SD card slot for expandable storage, absence of cellular connectivity, low color contrast and saturation, and limited selection of tablet-optimized Android apps, while praising the Nexus 10's high display resolution, powerful, high-performance processor and contemporary user interface. See also Comparison of tablet computers Comparison of Google Nexus tablets References Tablet computers Android (operating system) devices Google Nexus Portable media players Tablet computers introduced in 2012 Samsung computers
2435069
https://en.wikipedia.org/wiki/Memory%20map
Memory map
In computer science, a memory map is a structure of data (which usually resides in memory itself) that indicates how memory is laid out. The term "memory map" can have different meanings in different contexts. In cache design, for example, associative mapping is regarded as the fastest and most flexible cache organization; the associative memory stores both the address and the content of the memory word. In the boot process, a memory map is passed on from the firmware in order to instruct an operating system kernel about memory layout. It contains information regarding the total memory size and any reserved regions, and may also provide other details specific to the architecture. In virtual memory implementations and memory management units, a memory map refers to page tables or hardware registers, which store the mapping between a certain process's virtual memory layout and how that space relates to physical memory addresses. In native debugger programs, a memory map refers to the mapping between loaded executable (or library) files and memory regions. These memory maps are used to resolve memory addresses (such as function pointers) to actual symbols. PC BIOS memory map BIOS for the IBM Personal Computer and compatibles provides a set of routines that can be used by an operating system or applications to get the memory layout. Some of the available routines are: BIOS Function: INT 0x15, AX=0xE801: This BIOS interrupt call is used to get the memory size for 64MB+ configurations. It is supported by AMI BIOSes dated August 23, 1994 or later. The caller sets AX to 0xE801 and then executes INT 0x15. If an error occurs, the routine returns with CF (the carry flag) set to 1. If there is no error, the routine returns with CF clear and the amount of memory reported in registers: typically AX and CX hold the amount of extended memory between 1 MB and 16 MB in kilobytes, while BX and DX hold the amount of memory above 16 MB in 64 KB blocks. BIOS Function: INT 0x15, AX=0xE820 - GET SYSTEM MEMORY MAP: Input: the caller sets EAX to 0xE820, EBX to the continuation value (0 on the first call), ES:DI to the address of the SMAP buffer, ECX to the buffer size (at least 20 bytes), and EDX to the ASCII signature 'SMAP'. The SMAP buffer structure is a 20-byte record consisting of a 64-bit base address, a 64-bit region length in bytes, and a 32-bit region type (see the C sketch below). How used: The operating system shall allocate an SMAP buffer in memory (a 20-byte buffer), then set the registers as described above. On the first call, EBX should be set to 0. The next step is to call INT 0x15. If there is no error, the interrupt call returns with CF clear and the buffer filled with data representing the first region of the memory map. EBX is updated by the BIOS so that when the OS calls the routine again, the next region is returned in the buffer. The BIOS sets EBX to zero when all regions have been returned. See also BIOS RAMMap by Mark Russinovich References Computer memory
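To make the E820 interface described in the Memory map article above concrete, the following is a minimal C sketch of the 20-byte SMAP buffer entry and of how early boot code might total the usable RAM once the entries have been collected with repeated INT 0x15 calls. It is a hedged illustration of the widely documented layout (base, length, type) rather than code from any particular firmware or kernel; the region-type values follow the common ACPI address-range classification.

```c
#include <stdint.h>

/* One 20-byte entry returned by INT 0x15, EAX=0xE820 ("SMAP"). */
struct e820_entry {
    uint64_t base;    /* physical start address of the region        */
    uint64_t length;  /* size of the region in bytes                 */
    uint32_t type;    /* 1 = usable RAM, 2 = reserved,
                         3 = ACPI reclaimable, 4 = ACPI NVS, 5 = bad */
} __attribute__((packed));

/* Sum up the usable RAM reported by the firmware.
   'map' points to the entries gathered by the real-mode E820 loop,
   'count' is how many entries the BIOS returned. */
static uint64_t total_usable_ram(const struct e820_entry *map, int count)
{
    uint64_t total = 0;
    for (int i = 0; i < count; i++) {
        if (map[i].type == 1)   /* usable RAM */
            total += map[i].length;
    }
    return total;
}
```

A real loader additionally has to cope with overlapping or unsorted ranges and should treat any unknown type value as reserved; those details are omitted here.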
44472192
https://en.wikipedia.org/wiki/Livia%20Acosta%20Noguera
Livia Acosta Noguera
Livia Acosta Noguera was a Venezuelan diplomat to the United States in Miami and is a lead member of SEBIN. She was declared persona non grata by the United States Department of State following an FBI inquiry into allegations of planning cyberattacks on government facilities and nuclear power plants in the United States. Biography Before the presidency of Hugo Chávez, Acosta worked at the Baptist Seminary of Venezuela. In 2000, she became the head of Special Projects at the Microfinance Development Fund, focusing on the promotion of microcredit for the underprivileged. In 2001 and 2002 Acosta helped organize the Bolivarian Circles that were used to promote Chávez's ideology in Venezuela and internationally. From 2003 until 2006, she worked at the embassy in the Dominican Republic. In 2007 she became the second secretary at the embassy in Mexico in charge of cultural affairs, working on cultural and political advocacy and creating relationships with groups such as the Mexican leftist party Democratic Revolution. In 2010, Acosta moved to the embassy in Peru, where she remained until March 2011, when she was appointed consul of Venezuela in Miami. Cyber attack allegations According to a Univision investigative report titled The Iranian Threat, while Acosta was a cultural attaché in Mexico, she allegedly met with Mexican hackers who were planning to launch cyberattacks on the White House, the FBI, the Pentagon and several nuclear plants. In 2008, Acosta associated with activists and leaders from the embassies of Cuba and Iran in Mexico, and with a group of students and professors at the National Autonomous University of Mexico (UNAM). The plan was discovered after students at UNAM who had posed as hackers reported a leftist teacher who had encouraged them to commit acts of sabotage; having decided not to carry out the sabotage, the students recorded their conversations with the diplomats. In one of the recorded conversations, Acosta asked the hackers for access to the computer systems of nuclear plants in the United States and stated that she would give the information to Hugo Chávez. Acosta also asked the supposed hackers to monitor the banking operations, property and transport of critics of the Venezuelan government, and to carry out additional monitoring of Venezuelan military personnel in Mexico. Following the Univision report, members of the United States Congress wrote a letter asking the Department of State to investigate the allegations and, if they were found to be true, to "declare her a persona non grata and require her immediate departure from the United States". The Department of State called the report "very disturbing" and the FBI began an investigation. Following the investigation, the FBI delivered the results of its inquiry to the Department of State, which led to Acosta being declared persona non grata. References Venezuelan diplomats Living people Year of birth missing (living people)
40364759
https://en.wikipedia.org/wiki/Smart%20Grid%20Interoperability%20Panel
Smart Grid Interoperability Panel
Smart Grid Interoperability Panel or SGIP is an organization that defines requirements for a smarter electric grid by driving interoperability, promoting the use of standards, and collaborating across organizations to address gaps and issues hindering the deployment of smart grid technologies. SGIP facilitates and runs working groups that address key topical areas, such as the architecture group, the grid management group, the cybersecurity group, the distributed resources and generation group, and the testing and certification group. History SGIP 1.0 was established in December 2009 as a new stakeholder forum to provide technical support to the Commerce Department’s National Institute of Standards and Technology (NIST), with assistance from Knoxville-based EnerNex Corp, under a contract enabled by the American Recovery and Reinvestment Act. SGIP 2.0 was established as a public-private organization, which transitioned into a non-profit public-private partnership organization in 2013. Function The prime functions of SGIP are reported to be: To specify testing and certification requirements that are necessary in order to improve interoperability of Smart Grid-related equipment, software, and services. To provide technical guidance to facilitate the development of standards for a secure, interoperable Smart Grid. To supervise the performance of activities intended to expedite the development of interoperability and cybersecurity specifications by standard development organizations. To foster innovation and address gaps hindering the acceleration of grid modernization. To educate, facilitate collaboration, and provide solutions for grid modernization. SGIP 1.0’s initial focus was to define the industry standards for 20 categories, representing every domain in the power industry; these categories include: Appliance and consumer electronic providers Commercial and industrial equipment manufacturers and automation vendors Consumers - residential, commercial and industrial Electric transportation Electric utility companies - investor owned utilities and federal and state power authorities Electric utility companies - municipal and investor owned Electric utility companies - rural electric association Electricity and financial market traders Independent power producers Information and communication technologies infrastructure and service providers Information technology application developers and integrators Power equipment manufacturers and vendors Professional societies, users groups, trade associations and industry consortia Research and development organizations and academia Relevant government entities Renewable power producers Retail service providers Standards and specification development organizations State and local regulators Testing and certification vendors Transmission operators and independent system operators Venture capital When SGIP 1.0 transitioned to SGIP 2.0, LLC, the focus remained on interoperability and addressing gaps in standards, with additional focus on Distributed Energy Resources, EnergyIoT, Cybersecurity and Orange Button. Overview In 2013, SGIP was the recipient of the PMI Distinguished Project Award. In November 2014, Sharon Allan was appointed as the president and CEO of SGIP. In October 2015, SGIP partnered with the Industrial Internet Consortium in order to develop technologies and testbeds to accelerate IoT adoption in the energy sector. 
In November 2015, SGIP was the recipient of the Smart Grid Interoperability Standards Cooperative Agreement Program federal funding opportunity from NIST, under which SGIP was reported to receive $2.1 million over the performance period from January 1, 2016, to December 2018. In March 2016, SGIP announced that the Open Field Message Bus (OpenFMB) had been ratified as a standard through a NAESB Retail Market Quadrant member vote. OpenFMB is described as SGIP's EnergyIoT initiative, bringing the IoT and advanced interoperability to the power grid. In April 2016, the organization received $615,426 from the US Department of Energy, to be used for reducing the non-hardware "soft costs" associated with solar projects. In February 2017, SGIP merged with the Smart Electric Power Alliance (SEPA) under the SEPA brand and organizational umbrella. See also Distributed Generation Energy Policy Intermittent Power Sources Renewables Smart Grid Smart Grids by Country Smart Meter Microgrid NIST FERC References External links Electricity Advisory Committee (EAC) GridWise Architecture Council, official web site NIST Smart Grid Homepage FERC Homepage Smart grid Renewable energy technology
10058568
https://en.wikipedia.org/wiki/James%20D.%20McCaffrey
James D. McCaffrey
James D. McCaffrey is a research software engineer at Microsoft Research known for his contributions to machine learning, combinatorics, and software test automation. Education McCaffrey earned a B.A. in experimental psychology from the University of California, Irvine, a B.A. in applied mathematics from California State University, Fullerton, an M.S. in computer science information systems from Hawaii Pacific University, and a Ph.D. in interdisciplinary computational statistics and cognitive psychology from the University of Southern California. Career Prior to joining Microsoft, McCaffrey was the Associate Vice President of Research at Volt Information Sciences in Redmond, Washington, supporting the needs of software engineers at Microsoft. He joined Microsoft as a software engineer in 2006 and worked on various Microsoft products, including Exchange Server, Azure, and Bing. He then became a research software engineer at Microsoft Research, where he directs the internal Microsoft AI School, focusing on creating machine learning and artificial intelligence algorithms. He is the Senior Technical Editor for Microsoft's Visual Studio Magazine. His research at Microsoft primarily focuses on machine learning. His other research interests include combinatorics, especially when applied to human behavior such as sports betting and Blackjack Switch, as well as "software systems which have designs influenced by the behavior of biological systems such as swarm intelligence optimization and simulated bee colony algorithms and their application to data mining". Selected bibliography McCaffrey, J.D., "Using the Multi-Attribute Global Inference of Quality (MAGIQ) Technique for Software Testing", Proceedings of the 6th International Conference on Information Technology New Generations, April 2009, pp. 738–742. McCaffrey, J.D., "An Empirical Study of the Effectiveness of Partial Antirandom Testing", Proceedings of the 18th International Conference on Software Engineering and Data Engineering, June 2009, pp. 260–265. McCaffrey, J.D. and Czerwonka, J., "An Empirical Study of the Effectiveness of Pairwise Testing", Proceedings of the 2009 International Conference on Software Engineering Research and Practice, July 2009, pp. 186–191. McCaffrey, J.D., "Generation of Pairwise Test Sets using a Genetic Algorithm", Proceedings of the 33rd IEEE International Computer Software and Applications Conference, July 2009, pp. 626–631. McCaffrey, J.D., "Generation of Pairwise Test Sets using a Simulated Bee Colony Algorithm", Proceedings of the 2009 IEEE International Conference on Information Reuse and Integration, August 2009, pp. 115–119. McCaffrey, J.D. and Dierking, H., "An Empirical Study of Unsupervised Rule Set Extraction of Clustered Categorical Data using a Simulated Bee Colony Algorithm", Proceedings of the 3rd International Symposium on Rule Interchange and Applications, November 2009, pp. 182–192. McCaffrey, J.D., "An Empirical Study of Categorical Dataset Visualization using a Simulated Bee Colony Algorithm", Proceedings of the 5th International Symposium on Visual Computing, December 2009, pp. 179–188. McCaffrey, J.D., "Keras Succinctly for Syncfusion", an eBook focused on Keras, an open-source neural-network library written in the Python language, September 2018. 
McCaffrey, J.D., "Introduction to CNTK Succinctly for Syncfusion", An eBook focused on Microsoft CNTK (Cognitive Toolkit, formerly Computational Network Toolkit), an open source code framework that enables you to create deep learning systems, such as feed-forward neural network time series prediction systems and convolutional neural network image classifiers., April, 2018. McCaffrey, J.D., "Bing Maps V8 Succinctly for Syncfusion", The Bing Maps V8 library is a very large collection of JavaScript code that allows web developers to place a map on a webpage, query for data, and manipulate objects on a map, creating a geo-application. August, 2017. McCaffrey, J.D., "R Programming Succinctly for Syncfusion", The R programming language on its own is a powerful tool that can perform thousands of statistical tasks, but by writing programs in R, you gain tremendous power and flexibility to extend its base functionality. June, 2017. McCaffrey, J.D., "SciPy Programming Succinctly for Syncfusion", SciPy Programming Succinctly offers readers a quick, thorough grounding in knowledge of the Python open source extension SciPy. September, 2016. McCaffrey, J.D., "Machine Learning Using C# Succinctly for Syncfusion", In Machine Learning Using C# Succinctly, you'll learn several different approaches to applying machine learning to data analysis and prediction problems. October, 2014. McCaffrey, J.D., "Neural Networks Using C# Succinctly for Syncfusion", Neural networks are an exciting field of software development used to calculate outputs from input data. While the idea seems simple enough, the implications of such networks are staggering—think optical character recognition, speech recognition, and regression analysis. July, 2014. See also Lightweight Software Test Automation Multi-Attribute Global Inference of Quality References Introduced a description and C# language implementation of the factoradic, in fact a type of factorial number system, in "Using Permutations in .NET for Improved Systems Security", McCaffrey, J. D., August 2003, MSDN Library. See http://msdn2.microsoft.com/en-us/library/aa302371.aspx and "String Permutations", MSDN Magazine, June 2006 (Vol. 21, No. 7). ; a previous description of a factorial number system. Introduced a description and C# language implementation of the combinadic, in fact a type of combinatorial number system, in "Generating the mth Lexicographical Element of a Mathematical Combination", McCaffrey, J. D., July 2004, MSDN Library. See http://msdn2.microsoft.com/en-us/library/aa289166(VS.71).aspx. Applied Combinatorial Mathematics, Ed. E. F. Beckenbach (1964), pp. 27−30; a previous description of a combinatorial representation of integers. McCaffrey, James D., ".NET Test Automation Recipes", Apress Publishing, 2006. . American technology writers Software testing people Living people People from Redmond, Washington Servite High School alumni Year of birth missing (living people)
2966253
https://en.wikipedia.org/wiki/Corrector%20Yui
Corrector Yui
Corrector Yui is a Japanese anime television series created by Kia Asamiya. The series was produced by Nippon Animation and broadcast on NHK Educational TV from 1999 to 2000. It was licensed for North American release by Viz Media. The series has aired on Cartoon Network outside the United States; in the US, KTEH, a licensed station in the San Jose market, aired the English-subtitled version as part of its Sunday late-prime (9 pm to after midnight) sci-fi programming line-up in the 90s. Two manga series were also released: a two-volume series by Asamiya, published in Ciao from 1999 to 2000, and a nine-volume, two-part series by Keiko Okamoto, published by NHK Publishing. The second manga series was licensed in North America and translated into English by Tokyopop beginning in 2002. It was created based on the Japanese novels "Nanso Satomi Hakkenden". The series follows a basic magical girl progression, but Corrector Yui's magic powers all derive from software incorporated into her digital avatar, acting as antivirus software for the virtual world, with no real powers granted outside the network. Plot It is the year 20**, computers have become an integral part of society, and the Internet has evolved into a virtual reality called "ComNet". However, teenage girl Yui Kasuga is one of the few people who cannot use computers at all, despite the fact that her father is a software developer. At the same time, an evil host computer called Grosser wants to take over both ComNet and the real world. Eight software programs were developed to correct Grosser's evil intentions; they are digital avatars that exist only on ComNet and need the help of a human called a Corrector. Yui is sucked into ComNet, where she is recruited by I.R., one of the eight programs, who gives her element suits that allow her to become the ComNet fairy "Corrector Yui" and fight Grosser's evil software. In the first season, the series revolves around the war against Grosser, and reveals the mysteries that surround the Correctors, the seemingly missing creator of Grosser and the Correctors, and the relationship that he seems to have had with the corrupted software programs. In the second season, Yui and the Correctors must fight a mysterious computer virus that menaces ComNet, and also cope with the mysterious Corrector Ai, a Corrector who tends to work on her own and seems to have her own agenda. The key to the mysteries seems to be a strange young girl who seems lost and may be related to the devastating virus appearances. Characters Main / A 14-year-old girl of dubious academic skill, Yui aspires to become a manga artist and/or voice actress. Yui shows skill and growth over the series as she assumes the secret online identity of "Corrector Yui". While utterly incompetent when physically interfacing with computers, within ComNet she has the ability to correct computer viruses and software bugs and is a tremendously powerful fighter as a Corrector, able to quickly undo damage and easily battle the strongest of Grosser's henchmen. Her optimistic outlook and her ability to cheer up others even at the worst moments (though sometimes she can be quite perky) are among her greatest strengths, making her easily likeable to her peers and fellow Corrector programs. She also has an empathetic personality that helps her understand the nature of the AI programs she meets on ComNet. In the first season, it is revealed that her singing voice has the ability to normalize programs. In the second season, she upgrades to the Advanced Element Suit, and in the finale to the Final Element Suit. / / Yui's best friend. 
She has a polite, calm, and caring personality. Her intelligence, her innate ability with computers, and her voice, which can normalize programs, make her a perfect candidate for Corrector, as Professor Inukai expected. From the beginning she was supposed to be the Corrector, but I.R. confused her with Yui thanks to Grosser's intervention. When Professor Inukai was passing the Corrector tasks and abilities to her, she was manipulated by Grosser, who turned her into Dark Angel Haruna, but she was eventually saved by Yui. In the second season, she returns as a Corrector to assist Yui, first when Yui is petrified by infected insects, and later helping Yui until the end of the season. Curiously, she can use Element Suits more effectively than Yui. Her Basic Element Suit is reminiscent of an angel. / Shun's cousin. A dark and mysterious girl who becomes a Corrector on her own in the second season. A bitter experience in her past seems to be the reason for her cold, apathetic demeanor, which she wears as a mask to protect herself. While most of the time she does her job with cold, no-nonsense efficiency, Yui's actions sometimes force her to reconsider and help Yui. She became a Corrector to search for the "little girl", believing that she might be connected with her mother's coma. While at first she rather despised Yui and considered her little more than a nuisance, in the end she admires her optimistic view of life and her ability to cheer up others. Her Basic Element Suit is reminiscent of a maid. Professor Inukai is the creator of the Correctors and Grosser. He attempts to stop Grosser by sending the Correctors, but is left comatose after Grosser intercepts him. His mind, incomplete and amnesiac, wanders the Net, seeking help and evading Grosser's henchmen. He eventually recovers and passes on his Corrector abilities to Yui. He helps her repair and build Element Suits, and provides a base of operations for the Correctors. Family Ai's cousin. An engineering/medical student, and also Yui's love interest (of which he is unaware). Very skillful with computers and ComNet, he often helps Yui (albeit indirectly) in her task as Corrector. At the end of season one, he is abducted by Grosser as bait to lure Yui and the other Correctors to battle him in his lair. In the second season, he has gone to the U.S. to study and does not appear in the show. Yui's father. He is a computer programmer and the head of the project team that developed the virtual amusement parks Galaxy Land and Marine Adventure Net. He loves Yui very much, and is always distressed when he sees Yui with a boy (especially Shun). Yui's mother. An excellent cook, whose cooking is highly favored by Yui and her father. Ai's mother. A motherly figure whom Ai loves very much. She is now comatose due to an accident that severed the connection between her mind and body. Her mind is imprisoned by Ryo Kurokawa after she attempts to release the "missing little girl", causing an accident that kills Ryo and releases the girl into ComNet. By the final episode, her consciousness is freed from ComNet and Azusa wakes up from her coma. Ai's father. He died 10 years ago. He and Ai's mother originally belonged to the science team that created ComNet. His death triggered Ai's attempt to mask her heart, at first to make her mother worry less. Classmates and teachers Yui's childhood friend. He is also Haruna's boyfriend. His demeanor contrasts with Haruna's, often making him the butt of jokes by Yui and her peers, though he is a nice boy at heart. 
Yui's best friend. Yui's best friend. Yui's friend. Yui's friend. Yui's teacher. Though she is actually a good, supportive teacher, her childish personality can sometimes create trouble. Correctors Eight software programs have a close relationship with Yui throughout the show (all except I.R. have the same appearance as humans). They were originally one piece of software called the Corrector, but it was broken down into eight programs under Grosser's attack. As such, each piece of software concentrates on a specific task, and Yui must learn to integrate their skillsets to defeat Grosser's minions and, eventually, Grosser himself. In the finale of the first season, it is revealed that the way they correct Grosser is through their self-sacrifice. In the second season, the Correctors and Yui deal with unknown computer viruses. Corrector Software No. 1, The Regulator. Yui meets him in a space adventure site (not unlike an MMORPG). He has the ability to practically stop time for a limited period, enabling him to move at light-speed velocity. His power in the Wind Element Suit allows Yui to stop time for about 15 seconds, and if other Correctors (such as I.R.) lend their power, the duration can be extended to 30 seconds. He has a "hero-guy" type personality and insists on being the team leader and great hero (he treats Yui like a damsel in distress, says he does not need a woman's help, always tries to go first without regard for the others, tries to steal the spotlight every time, and has a narcissistic streak), which at first makes Yui tend to dislike him and sometimes puts him at odds with the other Correctors, though he is actually nice. He is mainly comic relief for the show (especially in the second season). Corrector Software No. 2, The Synchronizer. Not much is known about this Corrector, since he was lost at the beginning of the series. At the end of the first season it is revealed that he was corrupted and became the Corruptor program War Wolf. His ability is similar to Wolf's (using fire bursts as weapons), and he is one of the Correctors with excellent fighting abilities of his own. His power of fire can be bestowed on other Correctors to let them access the Fire Element Suit. He has a loyal personality and seems to have some feelings for Yui (he refused Professor Inukai's orders to flee from Grosser and went to save Inukai, opposed Inukai's decision to transfer the Corrector power from Yui to Haruna, refused to become Haruna's ally, and confessed in front of Inukai that he wanted to follow Yui until the end). Beagle infections force him to stay in War Wolf form for most of the second season, reminding him of his bad deeds in the past and causing him to become rather explosive, and Yui's attitude does not help either (she always calls him "Doggy" out of habit, which annoys him). In the finale, he returns to his true form and hugs Yui, who blushes. Corrector Software No. 3, The Predictor. Yui meets her on a love fortune-telling site. She has the ability to foresee the future. Her visions, though accurate most of the time, can sometimes deviate, especially when she does not have enough data or knowledge about the subject (as when Yui hugged her instead of shaking her hand as foreseen when they first met, and when she was trapped in Rapunzel's tower in the fairytale simulation). She has a wise, maternal personality. Her power enables Yui to access the Wind Element Suit and to predict enemies' movements and attacks. She often becomes a target of Control's flirting, but thanks to her ability she can evade most of it (leaving Control in distress). Corrector Software No. 
4, The System Maintainer. Yui meets him in a rainforest-like simulation. He has the ability to control nature, such as water and plants. His power helps Yui access the Water Element Suit, in which she can control nature at will, using it as a defensive, recovering, or offensive tool. He has a rebellious, childish personality and dislikes anyone who desecrates nature, to the point of kidnapping everyone who litters in the rainforest simulation. When Yui meets him, he at first hates Professor Inukai, whom he believes abandoned him, but he finally learns that Dr. Inukai actually loves him, just like the other Correctors, and left him there for his own safety. Corrector Software No. 5, The Repairer. She has the ability to heal damage, including Element Suit damage, and when she lends her power to Yui, Yui can raise a barrier in the Water Element Suit that can knock back attacks. She has a sweet, rather naive (to the point of perky), and polite personality. While she is not prone to fighting, her fighting style involves applying non-lethal force as much as possible (such as using a "tickler machine" and bug spray) and also using a vast array of traps and gadgets (she is notorious among the other Correctors for this, and has even dubbed herself the "Trap Princess"). Her antics and naivety especially disturb Freeze (in whom Rescue seems interested). In the second season, she fights using bandages from her halo and anti-Beagle spray, and can also detect virus sources using a radar on her halo. Corrector Software No. 6, The Archiver. A pacifist old man who claims to be a "Peace Defender", yet ironically has a knack for building destructive weapons. At first he disliked Yui for destroying the solitude he had gained in the remote area he inhabited, but later he helped her drive away Grosser's Corruptors by lending her his power. His power enables Yui to access the Fire Element Suit, with its "Flame Bomber" projectile attack. A wise person, he always says what he feels is right, even if it brings him into conflict with the other Correctors. In the second season, the villain clones him from his data to wreak havoc on ComNet, providing the distraction needed to carry out the villain's plan. Corrector Software No. 7, The Compiler. Yui first meets him, along with Peace, in a secluded area separated from the main domain of the marine adventure simulation created by her father's company. A fat fellow with a playful, sweet, almost childish demeanor, he likes Yui at first sight because "she is cute". He has the ability to mimic practically anyone and to copy some abilities from whomever he is mimicking. He also has the largest memory capacity of all the Correctors, and when he lends his power to Yui, her power (and also her weight) in the Basic Element Suit changes greatly; his power can also activate her Earth Element Suit. He is Peace's good friend. His ability to mimic others can sometimes be quite a nuisance, such as when he teases Freeze by mimicking Rescue. Corrector Software No. 8, The Installer. The first Corrector software Yui meets, and also her best friend; I.R. always nags Yui about her duty as Corrector, sometimes annoying her to no end. He has a tendency to end his sentences with "Thank you", and is sometimes referred to as a Japanese raccoon dog, an idea he hates. He made the Basic Element Suit for Yui from a magical girl outfit stored at Galaxy Land, and would download it to Yui until Season 2. 
He can also access the Earth Element Suit, giving Yui a large physical power boost and putting her on par with Jaggy. Villains/Corruptors Leader of the Corruptors. At first, he was a "good" A.I. program created by Professor Inukai to manage all of ComNet. However, he became aware of his own ego and revolted against Professor Inukai. His contact with Yui, a girl who cried over a computer thrown in the garbage, made him want to "live", and later to try to create a world where he could "live". Professor Inukai considered this a fatal error and created the Corrector software programs to delete his mind. When Inukai chose Haruna as the Corrector, Grosser, who had been fascinated with Yui from the beginning, manipulated I.R. and caused Professor Inukai's accident so that Yui would become the Corrector instead of Haruna. He even went as far as to turn Haruna into Dark Angel Haruna. In the first-season finale, he pretends to be defeated by Yui and the Correctors but possesses Shun's body; he confesses that he wants to become Yui, his dearest human, and tries to convince Yui to give him her body. Yui sees through the disguise and instead convinces him that he is actually a living being himself, capable of feeling happiness, pain, and sadness, just like the Correctors and Corruptors, who are also software on the net. She also states that, for the sake of the others who love her, she cannot simply quit being herself, even though sometimes she is bitter and sad. Grosser, who finally sees the error of his ways, deletes himself along with the whole ComNet, sparing a devastated Yui. In the end, with moral support from Haruna, she is able to use her power to reinitialize the whole ComNet. Grosser decides to start his life over and resurrects the Correctors, the Corruptors and Shun, all of whom are grateful to her. Resembling a werewolf, he is Grosser's most loyal henchman and prefers actual combat. He fights using a laser sword and has the ability to shoot flames. He has a rivalry with Yui, but she calls him "Doggy", much to his annoyance. He is a master swordsman and has a somewhat honorable demeanor. He is, in fact, the lost Corrector program Synchro, but this is unknown to him until near the end of the first season. Beagle infections at the beginning of the second season force Synchro to stay in this form for the rest of the season. The only female software on Grosser's team. She is probably the most ruthless and competent of all of Grosser's henchmen, at least until the too-naive-to-be-true Rescue appears. She has the ability to freeze anything. In fact, Freeze, Jaggy, and Virus are clones of Grosser, though they do not realize it. In the second season, after failing in many side jobs, she finally accepts a job from a mysterious man to search for a young girl who is "always present everywhere she's near". It is later revealed that she has an innate ability to detect a virus's presence (which she did not realize), and she is used by the mysterious man to search for the "missing little girl". She undergoes a big personality change in the second season, from the cold, no-nonsense villain of the first season to a ditzy, much more cheerful and girly attitude. In the series finale, she joins the Correctors and even gets her own Corrector suit (dubbed "Corrector Freeze" by some fans). Despite his bulk and strength, Jaggy is (arguably) the most knowledgeable (though not the most talented) of all Grosser's henchmen. 
Most of his tactics involve manipulating the environment to do his bidding (though most of them do not work out well). He loves to read books. In the second season, he becomes the library administrator, and seems to have some feelings for Freeze. Armed primarily with a laser sword and computer viruses called virus commands, he is (arguably) the most intelligent of his teammates. He is not prone to actual combat and prefers more scheming methods. His computer viruses can be used to corrupt other programs and make them do his bidding. In the second season, he works as a computer virus researcher who, with Yui and Haruna's help, finally identifies the source of the Beagles Virus. Others A baby whale software from the Marine Adventure Net area of ComNet. He was developed by Shinichi and his team based on actual whale calls. He does not like his original name, so he has given himself his own name. Grosser sees his singing voice as a threat; in the finale of the first season it is revealed that his voice evokes Grosser's sadness. A spy-specialist software program; he worked with Grosser for a while but went missing. At first, Yui and her friends thought he was Synchro. He contacts Yui and her friends while hiding his nature, wanting to know who Yui is. This red-haired young girl, with her teddy bear, is always present when the Beagle Virus appears. While she seems innocent and harmless, when she is emotionally pressured and sheds tears, her tears become the core of the Beagles virus. She is always searching for "sunflowers". Her picture was found within a storybook that belongs to Ai Shinozaki. Originally, she was created by Shintaro Ishikawa as messenger software for his daughter. Ryo took her and experimented on her with many computer viruses (not unlike Lisa Trevor), and thus the Beagles Virus was created with her as the host. Her name is "Ai", the same as Shinozaki Ai. This girl can feel the emotions of others, which causes Ai to keep losing track of her, due to Ai's heart being closed to others (something Ai at first blamed on the ComCon, using logical deduction). The Missing Little Girl carries this teddy bear as her protector and backup. He is the one who stole Yui's newest ComCon and gave it to Ai instead. Voiced by: Mitsuru Ogata This mysterious man works in the fairytale simulation as a performer in a "Sun, Wind and Traveler" story drama, playing the traveler. He asks Freeze to search for the "missing little girl", claiming he wants to return her home, with one condition: that she not talk about it with anyone. Behind his friendly demeanor, he is not above using cruel and dirty tactics to ensure his job gets done (such as cloning Peace at one point, or using a modified version of the Beagle Virus). He promises Freeze a large sum of reward money, which stops her from asking too much about the job. In the end, it is revealed that he is just a pawn controlled by another mastermind. At the end of the series, he helps Freeze release the "missing little girl" and distracts Ryo, giving the Correctors a chance to reinitialize her. He has dubbed himself "the man most despised by ComNet and Shintaro Shinozaki". He is the true mastermind behind the creation of the Beagles Virus, which he planned to use to destroy ComNet and to form a new ComNet. He saw A.I. programs as mere tools, putting him at odds with Inukai and Shinozaki and causing him to retire from the group. 
When Azusa Shinozaki attempted to release "Ai", he was killed in an accident that occurred at his location, but his mind was kept intact in ComNet, interacting with the Beagles Virus. He did not realize this until Prof. Inukai showed him the truth. In the end, Yui is finally able to reinitialize him in her Final Element Suit, and his final words are "At least... I can leave my hatred to humanity". Miscellaneous This bracelet is used by every Corrector as an access, communication, transformation, radar, transmitter and receiver device. Even without VR units, every Corrector can enter ComNet easily with it. While they are inside ComNet, people outside see them as sleeping, so others do not notice. The Corrector software programs also possess their own ComCons, which allow them to use their abilities. This virtual world is the form the Internet takes in Yui's world. Everyone can access and experience the worlds inside it using VR units. It is where the Softwares, digital avatars powered by AI, operate. ComNet is connected to infrastructure, public services, and even entertainment around the world. Time passes 256 times faster in ComNet than in the real world (one second in the real world is approximately the same as 256 seconds, or 4 minutes and 16 seconds, in ComNet). However, staying too long in ComNet can cause problems, because the body becomes fatigued, producing symptoms dubbed "ComNet Fever". For that reason, regular VR units have failsafe programs that eject people automatically after 10 hours a day in ComNet. ComCon bracelets do not have this feature, which once caused Ai to get "ComNet Fever" when she stayed too long in the Net. Originally an outfit for an online avatar that I.R. upgraded for physical strength and agility. By downloading her Basic Element Suit, Yui (and likewise Haruna and Ai) can hide her identity and make others believe she is a software program. In the second season, Professor Inukai adopted it as the Corrector uniform and installed it in Yui's ComCon. In addition, by installing a Corrector software's power, Yui (and Haruna and Ai) can transform into the Element Suits of Wind, Fire, Water, and Earth. When more than one power is installed at a time, she remains in the Basic Element Suit but gains weight and loses agility because of the increased amount of avatar data. A program, similar to magic, that Yui, Haruna and Ai use to correct problems in ComNet. They send out stars from their magic wands to fix software bugs or computer viruses. Against Corruptor software, it cannot initialize them unless a great deal of power is gathered, but it can still damage them. Appearing in the second season, an unknown, powerful computer virus. In the Japanese version, its name is a coined word mixing "bug" and "virus". Media Manga There are two versions of the Corrector Yui manga series. The original was written and drawn by Kia Asamiya, published by Shogakukan in Japan, and does not yet have a release date in the States. It is a two-volume series and was published in Ciao from 1999 to 2000. The manga that Tokyopop published was drawn by shojo artist Keiko Okamoto and is a manga adaptation of the anime series. It is a nine-volume, two-part series which was published by NHK Publishing. This manga series was licensed in North America and translated into English by Tokyopop beginning in 2002. Anime Viz Media has released only 18 of the 52 episodes on Region 1 DVD in the United States. Whether or not Viz will release the rest of the series remains to be seen. The last DVD Viz released for the show, the 4th volume, came out on 24 February 2004. 
It is one of the Viz Media-licensed anime shows whose manga or light novel counterpart was not also licensed by Viz Media. Theme Songs Opening Theme Episodes 1–26: "Eien to Iu Basho" Lyrics: Anri / Composer: Masayoshi Yamazaki / Arrangement: COIL / Vocals: Anri with Masayoshi Yamazaki and Shikao Suga Episodes 27–52: "Tori ni Naru Toki" Lyrics, Composer and Vocals: Satsuki / Arrangement: Yasuhiro Kobayashi Ending Theme Episodes 1–26: "Mirai" Lyrics: MILAI / Composer: Kazuhisa Yamaguchi / Arrangement: Kazuhisa Yamaguchi, LEGOLGEL / Vocals: LEGOLGEL Episodes 27–52: "Requiem" Lyrics, Composer and Vocals: Satsuki / Arrangement: Yasuhiro Kobayashi List of episodes 1st Season 2nd Season References External links 1999 anime television series debuts 1999 manga Kia Asamiya Magical girl anime and manga Nippon Animation 2000 Japanese television series endings Pierrot (company) Shogakukan manga Tokyopop titles NHK original programming Viz Media anime Animated television series about teenagers Television series set in the future
61354187
https://en.wikipedia.org/wiki/Russian%20interference%20in%20the%202020%20United%20States%20elections
Russian interference in the 2020 United States elections
Russian interference in the 2020 United States elections was a matter of concern at the highest level of national security within the United States government, in addition to the computer and social media industries. In February and August 2020, United States Intelligence Community (USIC) experts warned members of Congress that Russia was interfering in the 2020 presidential election in then-President Donald Trump's favor. USIC analysis released by the Office of the Director of National Intelligence (DNI) in March 2021 found that proxies of Russian intelligence promoted and laundered misleading or unsubstantiated narratives about Joe Biden "to US media organizations, US officials, and prominent US individuals, including some close to former President Trump and his administration." The New York Times reported in May 2021 that federal investigators in Brooklyn began a criminal investigation late in the Trump administration into possible efforts by several current and former Ukrainian officials to spread unsubstantiated allegations about corruption by Joe Biden, including whether they had used Trump personal attorney Rudy Giuliani as a channel. Reports of attempted interference Overview In response to Russian interference in the 2016 United States elections, special counsel Robert Mueller conducted a two-year-long investigation. The resulting report concluded that Russia interfered in "sweeping and systematic fashion". In his July 2019 congressional testimony, Mueller stated that the Russians continue to interfere in U.S. elections "as we sit here", and that "many more countries" have developed disinformation campaigns targeting U.S. elections, based partly on the Russian model. Also in July 2019, the Senate Intelligence Committee released the first volume of a bipartisan report on Russian interference in the 2016 United States elections, a report that included recommendations for securing the 2020 elections. The second volume of that report noted, based on social-media data from October 2018, that "Russian disinformation efforts may be focused on gathering information and data points in support of an active measures campaign targeted at the 2020 U.S. presidential election." In a highly classified report, the Central Intelligence Agency stated: "We assess that President Vladimir Putin and the senior most Russian officials are aware of and probably directing Russia's influence operations aimed at denigrating the former U.S. Vice President, supporting the U.S. president and fueling public discord ahead of the U.S. election in November." The existence of this report, published at the end of August 2020, was made public knowledge on September 22 in reports from The Washington Post and The New York Times. U.S. officials have accused Russia, China and Iran of trying to influence the 2020 elections. Between January and late July 2017, Twitter identified and shut down over 7,000 phony accounts created by Iranian influence operations. According to Christopher A. Wray, the Director of the Federal Bureau of Investigation, Russia is attempting to interfere with the 2020 United States elections. Speaking to the Council on Foreign Relations in July 2019, Wray stated, "We are very much viewing 2018 as just kind of a dress rehearsal for the big show in 2020." Dan Coats, the former Director of National Intelligence, believes that Russia and China will both attempt to influence the elections. 
As of September 2020, intelligence officials point to Russia as the more "acute threat" to the election, saying that China has been expressing its preferences by public rhetoric rather than engaging in covert operations to denigrate a candidate or otherwise interfere in the election itself. Wray testified to the House Committee on Homeland Security on September 17, 2020 that Russian efforts to damage the Biden campaign were "very active". According to United States intelligence officials interviewed by The New York Times, Russian "operations would be intended to help President Trump, potentially by exacerbating disputes around the results, especially if the race is too close to call." The FBI and the Cybersecurity and Infrastructure Security Agency have stated that Russian cyberattacks have targeted "U.S. state, local, territorial, and tribal government networks, as well as aviation networks". Social-media disinformation and voting infrastructure Various disinformation campaigns on social media have targeted the Democratic Party candidates running in the 2020 Democratic Party presidential primaries. This has prompted considerable concern regarding the ability of social media companies to cope with disinformation and manipulation. By August 2019, Facebook and Twitter had banned advertisements that use misinformation to attempt the suppression of voter turnout. Microsoft developed an open source software called ElectionGuard to help safeguard the 2020 elections. In mid-July 2019, Microsoft announced that it had, over the prior year, "notified nearly 10,000 customers they've been targeted or compromised by nation-state attacks". Based on attacks that had targeted political organizations, and on experience from 2016 and 2018, Microsoft anticipated "attacks targeting U.S. election systems, political campaigns or NGOs that work closely with campaigns". Of the "nation-state attacks" that had originated from Russia, Microsoft claimed that they followed the "same pattern of engagement" as Russian operations in 2016 and 2018. On September 20, 2019, Microsoft announced that it would provide free security updates for Windows 7, which reached its end-of-life on January 14, 2020, on federally-certified voting machines through the 2020 United States elections. On October 4, 2019, Microsoft announced that "Phosphorus", a group of hackers linked to the Iranian government, had attempted to compromise e-mail accounts belonging to journalists, prominent Iranian expatriates, U.S. government officials and the campaign of a U.S. presidential candidate. While Microsoft did not disclose which campaign had been the target of the cyber attack, unnamed sources informed Reuters that it had been that of Donald Trump. On October 21, 2019, Facebook CEO Mark Zuckerberg announced that his company has detected a "highly sophisticated" set of campaigns to interfere with the 2020 elections. These campaigns originated from Russia and from Iran. Fake accounts based in Russia posed as Americans of varied political backgrounds and worked to undermine the campaign of Joe Biden, aiming to sow discontent with Biden from both the left and the right. A September 2019 report from The Washington Post demonstrated that due to bad default passwords and weak encryption, hackers with physical access can easily get into voting machines designed for use in the 2020 United States elections, and remote hacking was possible if the machines were accidentally misconfigured. 
On February 21, 2020, The Washington Post reported that, according to unnamed US officials, Russia was interfering in the Democratic primary in an effort to support the nomination of Senator Bernie Sanders. Sanders issued a statement after the news report, saying in part, "I don't care, frankly, who Putin wants to be president. My message to Putin is clear: stay out of American elections, and as president I will make sure that you do." Sanders acknowledged that his campaign was briefed about Russia's alleged efforts about a month prior. Sanders suggested that Russians were impersonating people claiming to be his supporters online in order to create an atmosphere of toxicity and give "Bernie Bros" a bad reputation, a suggestion that Twitter rejected. According to election-security expert Laura Rosenberger, "Russian attempts to sow discord in the Democratic primary would be consistent with its strategy of undermining Americans' faith in democratic institutions and processes." In March 2020, the University of Wisconsin–Madison and the Brennan Center for Justice published a report indicating that Russia-linked social media accounts have been spreading Instagram posts calculated to sow division among American voters. According to the report, Russian operatives were increasingly impersonating real political candidates and groups rather than creating fictional groups. According to Twitter's head of site integrity, Russian agents also attempted in 2018 to create the impression of more election interference than is actually happening to undermine confidence in the process. Shortly thereafter, the New York Times reported that according to American intelligence officials Russian operatives have been stoking via private Facebook groups anger among African Americans, emphasizing allegations of police brutality in the United States, highlighting racism in the United States against African Americans, and promoting and pressuring hate groups, including white and black extremist groups, in order to create strife within American society, though American intelligence officials provided few details about the alleged operations. A CNN investigation found that Russian efforts had partly been outsourced to troll farms in Ghana and Nigeria. In May 2020, Twitter suspended 44 accounts that exhibited behavior plausibly, but not definitively, indicative of Russian election interference tactics, including association with a Ghana troll farm. 
Government officials and American corporate security officers braced for a repeat of 2016's election infrastructure hacking and similar twenty-first-century attacks, and in fact conducted what were characterized as pre-emptive counter-strikes on botnet infrastructure that might be used in large-scale coordination of hacking; some incidents earlier in the year appeared to foreshadow such possibilities. After his dismissal, in a December 2020 interview, Chris Krebs, the Trump administration's director of the Cybersecurity and Infrastructure Security Agency (CISA), described monitoring Election Day from CISA's joint command center along with representatives from the military's United States Cyber Command, the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), the United States Secret Service (USSS), the Election Assistance Commission (EAC), representatives of vendors of voting machine equipment, and representatives of state and local governments, as well as his agency's analysis preceding and subsequent to that day. Responding to spurious claims of foreign outsourcing of vote counting as a rationale behind litigation attempting to stop official vote counting in some areas, Krebs also affirmed that "All votes in the United States of America are counted in the United States of America." However, acts of foreign interference did include Russian state-directed application of computational propaganda approaches, more conventional state-sponsored Internet propaganda, smaller-scale disinformation efforts, and "information laundering" and "trading up the chain" propaganda tactics employing some government officials, Trump affiliates, and US media outlets, as described below. Briefings to Congress On February 13, 2020, American intelligence officials advised members of the House Intelligence Committee that Russia was interfering in the 2020 election in an effort to get Trump re-elected. The briefing was delivered by Shelby Pierson, the intelligence community's top election security official and an aide to acting Director of National Intelligence Joseph Maguire. Trump allies on the committee challenged the findings, and Trump was angered to learn of the briefing as he believed Democrats might "weaponize" the information against him. He chastised Maguire for allowing the briefing to occur, and days later he appointed Richard Grenell to replace Maguire. William Evanina, director of the National Counterintelligence and Security Center told members of Congress during a classified briefing on July 31, 2020, that Russia was working to boost the campaign of Trump and undermine that of Biden. Details of Evanina's report were not made public. The Biden campaign confirmed to the Associated Press that they had "faced multiple related threats" but were "reluctant to reveal specifics for fear of giving adversaries useful intelligence". Evanina later stated in a press release, "We assess that Russia is using a range of measures to primarily denigrate former Vice President Biden and what it sees as an anti-Russia 'establishment.'" On August 7, 2020, CNN reported that intelligence officials had provided senators, representatives and both the Biden and Trump campaigns with information "indicating Russia is behind an ongoing disinformation push targeting" Biden. That same day, Democratic congressman Eric Swalwell, a member of the House Intelligence Committee, asserted that Republican senators investigating Biden and his son were "acting as Russian launderers of this information." 
Johnson, Derkach, and Giuliani In late 2019, the chairman of the Senate committee investigating the matter, Ron Johnson, was warned by American intelligence officials that he risked playing into the hands of Russian intelligence seeking to spread disinformation. During this period, Richard Burr (R-NC), chair of the Senate Intelligence Committee, also warned Johnson that Johnson's investigation could aid Russian efforts to promote distrust in the United States' political system. Senators had also been briefed in late 2019 about Russian efforts to frame Ukraine for 2016 election interference. Johnson initially said he would release findings in spring 2020, as Democrats would be selecting their 2020 presidential nominee, but instead ramped up the investigation at Trump's urging in May 2020, after it became clear Biden would be the nominee. In March 2020, Johnson decided to postpone issuing a subpoena for Andrii Telizhenko, a former Ukrainian official and employee of the Democratic lobbying firm Blue Star Strategies and a close ally of Rudy Giuliani who had made appearances on the pro-Trump cable channel One America News, after Senator Mitt Romney backed away from voting for the Telizhenko subpoena under pressure from Democratic senators. Trump tweeted a press report about the investigations, later stating that he would make allegations of corruption by the Bidens a central theme of his re-election campaign. The State Department revoked Telizhenko's visa in October 2020, and CNN reported the American government was considering sanctioning him as a Russian agent. In May 2020, Ukrainian lawmaker Andrii Derkach, a Giuliani associate whom Evanina had named as a key participant in Russian interference, released snippets of alleged recordings of Joe Biden speaking with Petro Poroshenko, the Ukrainian president during the years Biden's son, Hunter, worked for Burisma Holdings. The Bidens had been accused without evidence of malfeasance relating to Burisma. The recordings, which were not verified as authentic and appeared heavily edited, depicted Biden linking loan guarantees for Ukraine to the ouster of the country's prosecutor general. The recordings did not provide evidence to support the ongoing conspiracy theory that Biden wanted the prosecutor fired to protect his son. In June 2020, Poroshenko denied that Joe Biden had ever approached him about Burisma. In September 2020, the United States Department of the Treasury sanctioned Derkach, stating he "has been an active Russian agent for over a decade, maintaining close connections with the Russian Intelligence Services." The Treasury Department added that Derkach "waged a covert influence campaign centered on cultivating false and unsubstantiated narratives concerning U.S. officials in the upcoming 2020 Presidential Election," including by the release of "edited audio tapes and other unsupported information with the intent to discredit U.S. officials." Giuliani, Trump's personal attorney, had spent significant time working in Ukraine during 2019 to gather information about the Bidens, making frequent American television appearances to discuss it. Attorney general Bill Barr confirmed in February 2020 that the Justice Department had created an "intake process" to analyze Giuliani's information. This information included a September 2019 statement by former Ukrainian prosecutor general Viktor Shokin falsely claiming he had been fired at Biden's insistence because Shokin was investigating Biden's son. 
The statement disclosed that it had been prepared at the request of attorneys for Ukrainian oligarch Dmitry Firtash, which since July 2019 included Joseph diGenova and his wife Victoria Toensing — both close associates of Trump and Giuliani. Firtash, fighting extradition to the United States where he was under federal indictment, is believed by the Justice Department to be connected to high levels of Russian organized crime, which allegedly installed him as a broker for Ukrainian imports of Russian natural gas. He is also reportedly close to the Kremlin to support Russian interests in Ukraine. According to officials interviewed by The Daily Beast, then-National Security Advisor John Bolton told his staff not to meet with Giuliani, as did his successor Robert C. O'Brien, because Bolton had been informed that Giuliani was spreading conspiracy theories that aligned with Russian interests in disrupting the 2020 election. These officials were also concerned that Giuliani would be used as a conduit for disinformation, including "leaks" of emails that would mix genuine with forged material in order to implicate Hunter Biden in corrupt dealings. The New York Times reported in November 2019 that Giuliani had directed associate Lev Parnas to approach Firtash about hiring the couple, with the proposition that Firtash could help to provide compromising information on Biden, which Parnas's attorney described was "part of any potential resolution to [Firtash's] extradition matter." Giuliani denied any association with Firtash, though he told CNN he met with a Firtash attorney for two hours in New York City at the time he was seeking information about the Bidens. As vice president, Biden had urged the Ukrainian government to eliminate brokers such as Firtash to reduce the country's reliance on Russian gas. After his October 2019 indictment, Parnas asserted that he, Giuliani, diGenova and Toensing had a deal with Firtash in which the oligarch would provide information to discredit Biden in exchange for Giuliani persuading the Justice Department to drop its efforts to extradite Firtash. The Washington Post reported in October 2019 that after they began representing Firtash, Toensing and diGenova secured a rare face-to-face meeting with Barr to argue the Firtash charges should be dropped. Prior to that mid-August meeting, Barr had been briefed in detail on the initial Trump–Ukraine whistleblower complaint within the CIA that had been forwarded to the Justice Department, as well as on Giuliani's activities in Ukraine. Bloomberg News reported that its sources told them Giuliani's high-profile publicity of the Shokin statement had greatly reduced the chances of the Justice Department dropping the charges against Firtash, as it would appear to be a political quid pro quo. Barr declined to intervene in the Firtash case. Firtash denied involvement in collecting or financing damaging information on the Bidens. According to Jane Mayer in October 2019, John Solomon, a contributor to Fox News, was pivotal for the dissemination of disinformation about Biden. She stated "No journalist played a bigger part in fueling the Biden corruption narrative than John Solomon." Developments in summer and fall 2020 By the summer of 2020, Russian intelligence had advanced to "information laundering" in which divisive propaganda was reported on Russia-affiliated news websites with the expectation the stories would be picked-up and spread by more legitimate news outlets. 
In August 2020, The New York Times reported that a misleadingly edited video published by RT's Ruptly video platform, showing Black Lives Matter protesters apparently burning a bible in Portland, Oregon, "went viral" after being shared with an inaccurate caption on social media by a far-right personality and then by conservative politicians. The Times said the clip "appear[ed] to be one of the first viral Russian disinformation hits of the 2020 presidential campaign". An NBC report in the wake of this incident found that Ruptly edited user-generated protest videos to highlight violence over peaceful protest. In September 2020, Facebook and Twitter announced that they had been alerted to the existence of Peace Data, a website set up by Russia's Internet Research Agency to interfere with the 2020 election. The social-media companies deleted accounts that had been used in an operation to recruit American journalists to write articles critical of Joe Biden and his running mate Kamala Harris. On September 3, the intelligence branch of the Department of Homeland Security issued a warning to state and federal law enforcement that Russia was "amplifying" concerns about postal voting and other measures taken to protect voters during the COVID-19 pandemic. According to DHS analysts, "Russian malign influence actors" had been spreading misinformation since at least March. Trump had repeatedly asserted without evidence that voting by mail would result in widespread fraud. ABC News reported in September 2020 that the Homeland Security Department had withheld the July release of an intelligence bulletin to law enforcement that warned of Russian efforts to promote "allegations about the poor mental health" of Joe Biden. DHS chief of staff John Gountanis halted the release pending review by secretary Chad Wolf. The bulletin stated that analysts had "high confidence" in the Russian efforts, which were similar to efforts by Trump and his campaign to depict Biden as mentally unfit. A DHS spokesperson said the bulletin was "delayed" because it did not meet the department's standards. The bulletin had not been released as of the date of the ABC News report. Later in September, Brian Murphy, a former DHS undersecretary for intelligence and analysis, asserted in a whistleblower complaint that Wolf told him "the intelligence notification should be 'held' because it 'made the President look bad.'" Murphy also claimed Wolf told him to "cease providing intelligence assessments on the threat of Russian interference in the US, and instead start reporting on interference activities by China and Iran." Murphy said Wolf told him this directive came from White House national security advisor Robert O'Brien. On September 10, 2020, Reuters reported that hackers had tried and failed to breach the systems of SKDKnickerbocker, a political consulting firm that specializes in working for Democratic Party politicians and that had been working with the Biden campaign for two months. Microsoft, which detected the cyberattack, informed SKDKnickerbocker that Russian state-backed hackers were the likely perpetrators. Analysts and officials interviewed by The New York Times in September 2020 indicated that a primary tactic of Russian disinformation campaigns was to amplify misleading statements from Trump, chiefly about postal voting. Russia's Internet Research Agency also created a fictitious press organization, the "Newsroom for American and European Based Citizens", in order to feed propaganda to right-wing social media users. 
NAEBC accounts were blocked or suspended by Facebook, Twitter, and LinkedIn, but their content "got more traction" on Gab and Parler, according to a Reuters report. H.R. McMaster, Trump's former national security advisor, said on October 1 that Trump was "aiding and abetting Putin’s efforts by not being direct about this. This sustained campaign of disruption, disinformation and denial is aided by any leader who doesn’t acknowledge it." On October 5, The Washington Post reported that the State Department had revoked the travel visa of Giuliani associate Andrii Telizhenko. On October 21, threatening emails were sent to Democrats in at least four states. The emails warned that "You will vote for Trump on Election Day or we will come after you." Director of National Intelligence John Ratcliffe announced that evening that the emails, using a spoofed return address, had been sent by Iran. He added that both Iran and Russia are known to have obtained American voter registration data, possibly from publicly available information, and that "This data can be used by foreign actors to attempt to communicate false information to registered voters that they hope will cause confusion, sow chaos and undermine your confidence in American democracy." A spokesman for Iran denied the allegation. In his announcement Ratcliffe said that Iran's intent had been "to intimidate voters, incite social unrest, and damage President Trump", raising questions as to how ordering Democrats to vote for Trump would be damaging to Trump. It was later reported that the reference to Trump had not been in Ratcliffe's prepared remarks as signed off by the other officials on the stage, but that he added it on his own. New York Post story In October 2020, the FBI reportedly launched an investigation into whether a story published in the tabloid journal New York Post on October 14 might be part of a Russian disinformation effort targeting Biden. The story, titled "Biden Secret Emails", displayed an email supposedly showing that Hunter Biden had arranged for his father, then-vice-president Joe Biden, to meet with a top advisor to Burisma. The Biden campaign said that no such meeting ever happened. The Post's source for the data was Giuliani, who says he got it from the hard drive of a laptop that was allegedly dropped off at a Delaware repair shop in April 2019 and never picked up. The shop owner told reporters that he thought the person who dropped it off was Hunter Biden but wasn't sure. He said he eventually gave the laptop to the FBI, keeping a copy of the hard drive for himself that he later gave to Giuliani. A year and a half earlier, in early 2019, White House officials had been warned that the Russians were planning to leak forged emails in the weeks before the election, and that Giuliani could be the conduit for such a leak. Most of the New York Post story was written by a staff reporter who did not allow his name to be used on it because he doubted the story's credibility. According to an investigation by The New York Times, editors at the New York Post "pressed staff members to add their bylines to the story", and at least one refused, in addition to the original author. Of the two writers eventually credited on the article, the second did not know her name was attached to it until after it was published. 
Giuliani was later quoted as saying he had given the hard drive to the New York Post because "either nobody else would take it, or if they took it, they would spend all the time they could to try to contradict it before they put it out." Several days after the story was published, more than 50 former senior intelligence officials signed a letter saying that while they had no evidence, the story "has all the classic earmarks of a Russian information operation." The New York Times reported that no solid evidence has emerged that the laptop contained Russian disinformation.

Hall County, Georgia

On October 7, 2020, the government of Hall County, Georgia, had its election-related information released by Russian hackers using DoppelPaymer ransomware.

2021 DNI report

According to a declassified DNI report released on March 16, 2021, there was evidence of broad efforts by both Russia and Iran to shape the election's outcome. However, there was no evidence that any votes, ballots, or voter registrations were directly changed. While Iran sought to undermine confidence in the vote and harm Trump's reelection prospects, the report found that Russia's efforts had been aimed at "denigrating President Biden's candidacy and the Democratic Party, supporting former President Trump, undermining public confidence in the electoral process, and exacerbating sociopolitical divisions in the US". Central to Moscow's interference effort was its reliance on proxies of Russian intelligence agencies "to launder influence narratives", using media organizations, U.S. officials and people close to Trump to push "misleading or unsubstantiated" allegations against Biden. As an example of such activity by Russia, the report cited a documentary aired on One America News Network in January 2020, which was identified by news media as The Ukraine Hoax: Impeachment, Biden Cash, and Mass Murder. The report specifically identified individuals controlled by the Russian government as having been involved in Russia's interference efforts, such as Konstantin Kilimnik and Andrii Derkach. The report said that Putin was likely to have had "purview" over the activities of Andrii Derkach. According to the report, Putin had authorized the Russian influence operations. Following the publication of the DNI report, House Intelligence Committee Chairman Adam Schiff issued a statement that said, "Through proxies, Russia ran a successful intelligence operation that penetrated the former president's inner circle."

Government reaction

Dan Coats appointed Shelby Pierson as the U.S. election security czar in July 2019, creating a new position in a move seen as an acknowledgment that foreign influence operations against U.S. elections would be ongoing indefinitely. Election-security task forces established before the 2018 midterm elections at the FBI, the Department of Homeland Security, the National Security Agency and the United States Cyber Command have been expanded and "made permanent". The Department of Homeland Security indicated that the threat of ransomware attacks upon voter registration databases was a particular concern. Prior to resigning as U.S. Secretary of Homeland Security, Kirstjen Nielsen attempted to organize a meeting of the U.S. Cabinet to discuss how to address potential foreign interference in the 2020 elections. Mick Mulvaney, the White House Chief of Staff, reportedly warned her to keep the subject away from Trump, who viewed such discussion as questioning the legitimacy of his victory in 2016.
Mitch McConnell, the Senate Majority Leader, has blocked various bills intended to improve election security from being considered, including some measures that have had bipartisan support. Election-security legislation remains stalled in the Senate as of February 2020. However, various states have implemented changes, such as paper ballots. Florida has expanded its paper-ballot backup system since 2016, but experts warn that its voting systems are still vulnerable to manipulation, a particular concern being the electronic poll books that store lists of registered voters. All 67 election supervisors in Florida have been required to sign nondisclosure agreements, and consequently, information such as which four counties had been hacked by Russian intelligence in 2016 remains unknown to the public.

Democratic members of Congress cited the lack of effort to secure U.S. elections against foreign interference, particularly from Russia, as among the grounds to begin an impeachment inquiry. On September 30, 2019, the United States issued economic sanctions against seven Russians affiliated with the Internet Research Agency, an organization that manipulates social media for misinformation purposes. The sanctions were described as a warning against foreign interference in United States elections. On December 9, 2019, FBI Director Christopher A. Wray told ABC News: "as far as the [2020] election itself goes, we think Russia represents the most significant threat." According to William Evanina, director of the National Counterintelligence and Security Center, Russia is "using social media and many other tools to inflame social divisions, promote conspiracy theories and sow distrust in our democracy and elections."

Bloomberg News reported in January 2020 that American intelligence and law enforcement were examining whether Russia was involved in promoting disinformation to undermine Joe Biden as part of a campaign to disrupt the 2020 election. The following month, the Estonian Foreign Intelligence Service warned that Russia would attempt to interfere in the Georgian parliamentary election in October 2020 as well as the US election in November. On July 13, 2020, House Speaker Nancy Pelosi and Senate Minority Leader Chuck Schumer wrote to FBI Director Wray, requesting a briefing on a "concerted foreign interference campaign" targeting the United States Congress. The request for an all-Congress briefing, also signed by Rep. Adam Schiff and Sen. Mark Warner, was made public one week later, save for a classified addendum that was not released to the media.

Trump administration reaction

After American intelligence officials briefed the House Intelligence Committee that Russia was interfering in the 2020 election in an effort to get Trump re-elected, the Trump administration reacted by rejecting the assessment that the efforts were in Trump's favor and by firing Joseph Maguire, who was involved in those reports. By contrast, Trump and his national security adviser Robert O'Brien accepted reports that the Russians were supporting the nomination of Bernie Sanders. Three weeks after Trump loyalist Richard Grenell was appointed acting Director of National Intelligence, intelligence officials briefed members of Congress behind closed doors that they had "not concluded that the Kremlin is directly aiding any candidate's re-election or any other candidates' election," which differed from testimony they had provided the previous month indicating that Russia was working to aid Trump's candidacy.
Two intelligence officials pushed back on suggestions that the new testimony was politically motivated. One intelligence official asserted that the earlier testimony had overreached and that Democrats had mischaracterized it. Kash Patel, a former aide to congressman Devin Nunes who joined Grenell at the ODNI, imposed limits on what intelligence officials could tell Congress about foreign influence operations. The briefers reportedly did not intend to contradict their previous testimony, though they avoided repeating it.

Trump and his surrogates asserted that China, rather than Russia, posed the greater risk to election security and was trying to help Biden win. In August 2020, Trump tweeted that "Chinese State Media and Leaders of CHINA want Biden to win 'the U.S. Election'." Donald Trump Jr. asserted at the August Republican convention that "Beijing Biden is so weak on China that the intelligence community recently assessed that the Chinese Communist Party favors Biden." Director of National Intelligence John Ratcliffe stated during an August Fox News appearance, "China is using a massive and sophisticated influence campaign that dwarfs anything that any other country is doing." Attorney General Bill Barr and national security advisor Robert O'Brien made similar assertions. Intelligence community officials have publicly and privately said that the underlying intelligence indicates that while China would prefer Trump not be reelected, the nation had not been actively interfering and that Russia remained the far greater threat, working to undermine Biden. Trump also asserted that China was trying to stoke race protests in an effort to help Biden, which was also not supported by the intelligence community's assessment. The United States intelligence community released analysis in March 2021 finding that China had considered interfering with the election but decided against it on concerns it would fail or backfire.

Following Joe Biden's apparent win, which Trump was actively disputing through numerous lawsuits, Chris Krebs, the director of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, issued a statement on November 12: "There is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised." Trump tweeted on November 17 that he had fired Krebs as a result of this statement.

Putin administration reaction

Russian officials denied that Russia had interfered in the 2016 election or was interfering in the 2020 election. On September 25, 2020, Putin released a formal statement seeking mutual "guarantees of non-interference" in U.S. and Russian elections and asking the United States "to approve a comprehensive program of practical measures to reset our relations in the use of information and communication technologies (ICT)."

Interference from the administration

In a June 2019 interview with George Stephanopoulos, Donald Trump said that he would accept information from other nations about his opponents in the 2020 United States presidential election. According to reporting by The Wall Street Journal, The Washington Post and The New York Times, Trump and his personal attorney Rudy Giuliani repeatedly pressed the Ukrainian government to investigate Hunter Biden, the son of Joe Biden, leading to the Trump–Ukraine scandal. Biden was viewed as a potentially strong Trump challenger in the 2020 presidential election, and the purpose of the requested investigation was alleged to be to damage Biden's election campaign for president.
Reports suggested that Trump threatened to withhold military aid from Ukraine unless it investigated Biden. The controversy triggered the commencement of the formal process of impeachment inquiries against Trump on September 24, with House Speaker Nancy Pelosi directing six House committee chairmen to proceed "under that umbrella of impeachment inquiry". On October 3, 2019, while discussing negotiations on a possible agreement in the ongoing China–United States trade war, Trump said that "if they [China] don't do what we want, we have tremendous power." He then said that "China should start an investigation" into presidential candidate Joe Biden and his son Hunter Biden. Chair of the Federal Election Commission Ellen Weintraub then retweeted a June statement explaining that "it is illegal for any person to solicit, accept, or receive anything of value from a foreign national in connection with a U.S. election".

There is evidence that President Trump, Vice President Mike Pence, U.S. Attorney General William Barr, and Trump's personal attorney Giuliani solicited help from Ukraine and China in discrediting Trump's political opponents. Trump also dispatched Barr to meet with Italian officials as part of Trump's efforts to discredit the Mueller investigation into Russian interference in the 2016 election. Trump also pressed Australian Prime Minister Scott Morrison to give Barr information that Trump hoped would discredit the Mueller inquiry, in a call that, like Trump's earlier call with Ukrainian president Volodymyr Zelensky, used diplomatic contacts to advance Trump's "personal political interests." According to a report in the Times of London, Trump also personally contacted British Prime Minister Boris Johnson to seek help in discrediting the Mueller investigation.

A Department of Homeland Security intelligence bulletin, warning about Russian interference in the 2020 election, was planned for release on July 9 but was blocked by acting Secretary of Homeland Security Chad Wolf's chief of staff. The bulletin, intended to be distributed among law-enforcement agencies, indicated that Russian disinformation operations would denigrate the mental health of Joe Biden.

Aftermath

Russian interference in the 2020 election was significantly less severe than it had been in 2016. Experts suggested a variety of possible explanations, not mutually exclusive. These include a hardening of American cyber defenses, reluctance on Russia's part to risk reprisals, and the fact that misinformation intended to delegitimize the election was already prevalent within the United States thanks to unfounded claims by Trump and others. On April 15, 2021, the Biden administration expelled 10 Russian diplomats and sanctioned six Russian companies that support Russia's cyber activities, as well as 32 individuals and entities for their roles in the interference and the 2020 United States federal government data breach.

The New York Times reported in May 2021 that federal investigators in Brooklyn began a criminal investigation late in the Trump administration into possible efforts by several current and former Ukrainian officials to spread unsubstantiated allegations about corruption by Joe Biden. Investigators were examining whether the Ukrainians used Giuliani as a channel for the allegations, though he was not a specific subject of the investigation, in contrast to a long-running investigation of Giuliani by the US attorney's office in Manhattan.
See also Cold Civil war Cold War II Cyberwarfare and Iran Cyberwarfare by Russia Cyberwarfare by China Democratic National Committee cyber attacks Foreign electoral intervention Presidency of Donald Trump Russian espionage in the United States Social media in the 2016 United States presidential election Social media in the 2020 United States presidential election Timelines related to Donald Trump and Russian interference in United States elections 1996 United States campaign finance controversy References External links 2020 controversies in the United States 2020 elections in the United States Russia–United States relations Foreign electoral intervention Internet manipulation and propaganda Trump administration controversies Information operations and warfare
840352
https://en.wikipedia.org/wiki/PunkBuster
PunkBuster
PunkBuster is a computer program designed to detect software used for cheating in online games. It does this by scanning the memory contents of the local machine. A computer identified as using cheats may be banned from connecting to protected servers. The aim of the program is to isolate cheaters and prevent them from disrupting legitimate games. PunkBuster is developed and published by Even Balance, Inc.

History

Tony Ray founded Even Balance to develop PunkBuster after his experience with cheaters on Team Fortress Classic. The first beta of PunkBuster was announced on September 21, 2000, for Half-Life. Valve was at the time fighting a hard battle against cheating, which had been going on since the release of the game. The first game in which PunkBuster was integrated was id Software's Return to Castle Wolfenstein.

Features

Published features

Real-time scanning of memory by a PunkBuster client installed on players' computers, searching for known hacks/cheats using a built-in database.
Throttled two-tiered background auto-update system using multiple Internet Master Servers to provide end-user security, ensuring that no false or corrupted updates can be installed on players' computers.
Frequent status reports are sent to the PunkBuster Server by all players. When necessary, the PunkBuster Server raises a violation, which (depending upon settings) will cause the offending player to be removed from the game and all other players to be informed of the violation. PunkBuster Admins can also manually remove players from the game for a specified number of minutes or permanently ban them if desired.
PunkBuster Servers can optionally be configured to randomly check player settings, looking for known exploits of the game engine.
PunkBuster Servers can be configured to instruct clients to calculate partial MD5 hashes of files inside the game installation directory. The results are compared against a set configuration and differences logged and, optionally, the client removed from the server.
PunkBuster Admins can request actual screenshot samples from specific players and/or can configure the PB Server to randomly grab screenshot samples from players during gameplay. However, it is possible for a game hack to block screenshots (producing a cropped screenshot) or remove all visual features of a hack (cleaning the screenshot) to remain undetected, which diminishes the effectiveness of this feature.
An optional "bad name" facility is provided so that PunkBuster Admins can prevent players from using offensive player names containing unwanted profanity or slurs.
Search functions are provided for PunkBuster Admins who wish to search players' keybindings and scripts for anything that may be known to exploit the game.
The PunkBuster Player Power facility can be configured to allow players to self-administer game servers when the server administrator is not present, entirely without the need for passwords; players can call votes to have another player removed from the server for a certain amount of time.
PunkBuster Servers have an optional built-in mini HTTP web server interface that allows the game server to be remotely administered via a web browser from anywhere over the Internet.
PunkBuster Admins can stream their server logs in real time to another location.
PunkBuster has introduced hardware bans, which ban hardware components upon detection of cheats that disrupt or circumvent PunkBuster's normal operation.
These bans permanently exclude players whose hard drive ID matches the blacklist maintained by Even Balance.

Incompatibilities

Some games (like Crysis or BioShock 2) do not have a 64-bit version of PunkBuster. For this reason, 64-bit clients will not be able to play on PunkBuster-enabled servers unless they run the 32-bit client of the game. PunkBuster does not allow Windows users without administrative accounts to connect to any games. Upon connecting to a game, the user will be immediately kicked for having insufficient OS privileges. Starting with PB client v1.700, a Windows service with full administrative rights is used alongside the in-game PunkBuster client, allowing updates without elevation of user rights. However, some games might still require administrative rights before PunkBuster will function correctly.

Enforcement

Global GUID bans and hardware bans

PunkBuster uses a system called 'global banning'. Either the GUID (generated from the CD key) or parts of the computer's hardware are banned from PunkBuster-enabled servers. Most attempts at cheating will only receive a detection warning, but cheats that interfere with PunkBuster's software itself could lock out the GUID of the offending system and disable access to all PunkBuster-enabled servers for that particular game. Particularly severe instances of cheating may lock the offending computer out of all PunkBuster-protected games. Since June 30, 2004, Even Balance has used unique hardware identifiers to permanently ban players who attempt to interfere with PunkBuster's normal operation (which is, itself, a violation of the PunkBuster EULA). Even Balance uses a 128-bit private one-way hash so that no serial number information for individual computers can be obtained from a hardware GUID. As with previous PunkBuster GUID bans, hardware GUID lockouts are permanent. Even Balance has not disclosed what hardware PunkBuster looks for when issuing a ban, but close examination of the software has indicated that the GUID may be based on the serial numbers of scanned hard drives. As with many bans based on information from the user's system, hardware GUID bans can be spoofed.

False positives

Between October 30 and November 6, 2013, PunkBuster falsely banned Battlefield 4 users with the error "(Gamehack #89265)". As of November 8, 2013, the issue had been resolved by Even Balance, Inc., and all PunkBuster bans resulting from this error were lifted and officially deemed false positives. "We have confirmed that Violation #89265 may be triggered by non-cheat software. This Violation code has been removed from our master servers and we encourage server admins to give the benefit of the doubt to players who raised this code over the past few days."

Attacks on PunkBuster

PunkBuster usually searches for known cheat program signatures as opposed to relying on a heuristic approach. On March 23, 2008, hackers published and implemented a proof-of-concept exploit of PunkBuster's indiscriminate memory scanning. Because PunkBuster scans all of a machine's virtual memory, malicious users were able to cause mass false positives by transmitting text fragments from known cheat programs onto a high-population IRC channel. When PunkBuster detected the text within users' IRC client text buffers, the users were banned. On March 25, 2008, Even Balance confirmed the existence of this exploit.
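Two of the checking mechanisms described above, comparing partial MD5 hashes of game files against a reference configuration and scanning memory for known cheat signatures, can be illustrated with a short sketch. This is a simplified illustration only, not Even Balance's actual code: the signature strings, the file names, and the assumption that a "partial" hash covers just the first 64 KB of a file are all invented for the example. It also shows why purely signature-based scanning can misfire, as in the 2008 IRC incident: any buffer that merely contains a known signature string, such as pasted chat text, matches just as well as a running cheat.

```python
import hashlib
from pathlib import Path

# Hypothetical cheat signatures; a real scanner would ship a much larger,
# regularly updated database.
KNOWN_SIGNATURES = [b"aimbot_enable", b"wallhack_init"]


def scan_buffer(buffer: bytes) -> list:
    """Return every known signature found in a memory buffer.

    A plain substring match cannot tell a running cheat apart from harmless
    data (for example chat text) that happens to contain the same bytes,
    which is how benign IRC buffers triggered false positives in 2008.
    """
    return [sig for sig in KNOWN_SIGNATURES if sig in buffer]


def partial_md5(path: Path, first_bytes: int = 64 * 1024) -> str:
    """Hash only a fixed-length prefix of a file (the "partial" MD5 is
    assumed here to mean the first 64 KB)."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read(first_bytes)).hexdigest()


def check_game_files(game_dir: Path, expected: dict) -> list:
    """Compare partial hashes of installed files against a reference
    configuration and report files that are missing or differ."""
    mismatches = []
    for name, good_hash in expected.items():
        path = game_dir / name
        if not path.is_file() or partial_md5(path) != good_hash:
            mismatches.append(name)
    return mismatches


if __name__ == "__main__":
    # A buffer of ordinary chat text still matches the signature list.
    chat = b"someone asked: how does aimbot_enable actually work?"
    print(scan_buffer(chat))  # [b'aimbot_enable'] -- a false positive

    # Placeholder reference configuration; the file name and hash are made up.
    reference = {"game.dll": "0" * 32}
    print(check_game_files(Path("."), reference))
```

A production anti-cheat would pair checks like these with server-side enforcement, throttled signature updates and screenshot sampling, as the feature list above describes.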
Games using PunkBuster Assassin's Creed 3 Battlefield 2 Battlefield 2142 Battlefield 3 Battlefield 1942 Battlefield 4 Battlefield Hardline Battlefield: Bad Company 2 Battlefield Heroes Battlefield Play4Free Battlefield Vietnam Blacklight: Retribution Call of Duty Call of Duty 2 Call of Duty 4: Modern Warfare Call of Duty: World at War Crysis Far Cry Far Cry 2 Far Cry 3 Medal of Honor (2010) Medal of Honor: Warfighter Need for Speed: ProStreet Quake 3 Arena Red Orchestra 2: Heroes of Stalingrad Return to Castle Wolfenstein Soldier of Fortune II: Double Helix Tom Clancy's Ghost Recon: Future Soldier Tom Clancy's Ghost Recon Online Tom Clancy's Rainbow Six: Vegas 2 America's Army See also Cheating in online games GameGuard (nProtect) Valve Anti-Cheat Warden References External links 2000 software Anti-cheat software
34260401
https://en.wikipedia.org/wiki/Pylaeus
Pylaeus
In Greek mythology, Pylaeus (Ancient Greek: Πύλαιος) was a son of Lethus, son of Teutamides, and a descendant of Pelasgus. He was one of the allies of King Priam in the Trojan War; he commanded the Pelasgian contingent together with his brother Hippothous. Pylaeus is hardly ever mentioned separately from his brother; they are said by Dictys Cretensis to have fallen in battle together, and according to the late Latin poet Ausonius they were buried "in a garden". Strabo, in his comment on the relevant Homeric passage, mentions that according to a local tradition of Lesbos, Pylaeus also commanded the Lesbian army and had a mountain on the island named Pylaeus after him. Pylaeus is also an epithet of Hermes.

Notes

References

Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian, translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
Dionysius of Halicarnassus, Roman Antiquities. English translation by Earnest Cary in the Loeb Classical Library, 7 volumes. Harvard University Press, 1937-1950. Online version at Bill Thayer's Web Site.
Dionysius of Halicarnassus, Antiquitatum Romanarum quae supersunt, Vol. I-IV. Karl Jacoby. In Aedibus B.G. Teubneri. Leipzig. 1885. Greek text available at the Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D., in two volumes. Cambridge, MA, Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Strabo, Geographica, edited by A. Meineke. Leipzig: Teubner. 1877. Greek text available at the Perseus Digital Library.

People of the Trojan War Characters in Greek mythology
2182112
https://en.wikipedia.org/wiki/Jonathan%20Bowen
Jonathan Bowen
Jonathan P. Bowen FBCS FRSA (born 1956) is a British computer scientist and an Emeritus Professor at London South Bank University, where he headed the Centre for Applied Formal Methods. Prof. Bowen is also the Chairman of Museophile Limited and has been a Professor of Computer Science at Birmingham City University, Visiting Professor at the Pratt Institute (New York City), University of Westminster and King's College London, and a visiting academic at University College London. Education Bowen was born in Oxford, the son of Humphry Bowen, and was educated at the Dragon School, Bryanston School, prior to his matriculation at University College, Oxford (Oxford University) where he received the MA degree in Engineering Science. Career Bowen later worked at Imperial College, London, the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science), the University of Reading, and London South Bank University. His early work was on formal methods in general, and later the Z notation in particular. He was Chair of the Z User Group from the early 1990s until 2011. In 2002, Bowen was elected Chair of the British Computer Society FACS Specialist Group on Formal Aspects of Computing Science. Since 2005, Bowen has been an Associate Editor-in-Chief of the journal Innovations in Systems and Software Engineering. He is also an associate editor on the editorial board for the ACM Computing Surveys journal, covering software engineering and formal methods. From 2008–9, he was an Associate at Praxis High Integrity Systems, working on a large industrial project using the Z notation. Bowen's other major interest is the area of online museums. In 1994, he founded the Virtual Library museums pages (VLmp), an online museums directory that was soon adopted by the International Council of Museums (ICOM). In the same year he also started the Virtual Museum of Computing. In 2002, he founded Museophile Limited to help museums, especially online, for example with discussion forums. He has also worked in industry at Oxford Instruments, Marconi Instruments, Logica, Silicon Graphics, and Altran Praxis. Bowen was elected a Fellow of the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) in 2002 and of the British Computer Society (BCS) in 2004. He is a Liveryman of the Worshipful Company of Information Technologists and a Freeman of the City of London. Selected books Jonathan Bowen has written and edited a number of books, including: Bowen, J.P., editor, Towards Verified Systems. Elsevier Science, Real-Time Safety Critical Systems series, volume 2, 1994. . Hinchey, M.G. and Bowen, J.P., editors, Applications of Formal Methods. Prentice Hall International Series in Computer Science, 1995. . Bowen, J.P., Formal Specification and Documentation using Z: A Case Study Approach. International Thomson Computer Press, International Thomson Publishing, 1996. . Bowen, J.P. and Hinchey, M.G., editors, High-Integrity System Specification and Design. Springer-Verlag, London, FACIT series, 1999. . Hinchey, M.G. and Bowen, J.P., editors, Industrial-Strength Formal Methods in Practice. Springer-Verlag, London, FACIT series, 1999. . Hierons, R., Bowen, J.P., and Harman, M., editors, Formal Methods and Testing. Springer-Verlag, LNCS, Volume 4949, 2008. . Börger, E., Butler, M., Bowen, J.P., and Boca, P., editors, Abstract State Machines, B and Z. Springer-Verlag, LNCS, Volume 5238, 2008. . 
Boca, P.P., Bowen, J.P., and Siddiqi, J.I., editors, Formal Methods: State of the Art and New Directions. Springer, 2010. , e-, . Bowen, J.P., Keene, S., and Ng, K., editors, Electronic Visualisation in Arts and Culture. Springer Series on Cultural Computing, Springer, 2013. . Copeland, J., Bowen, J.P., Sprevak, M., Wilson, R., et al., The Turing Guide. Oxford University Press, 2017. (hardcover), (paperback). Hinchey, M.G., Bowen, J.P., Olderog, E.-R., editors, Provably Correct Systems. Springer International Publishing, NASA Monographs in Systems and Software Engineering series, 2017. , . Giannini, T. and Bowen, J.P., editors, Museums and Digital Culture: New Perspectives and Research. Springer Series on Cultural Computing, Springer, 2019. , e-, . Notes References Bowen, Jonathan Peter. Who's Who in the World, Marquis Who's Who, 18th edition, 2001. H-museum information Museums and the Web conference information International Center for Scientific Research information External links Personal website LSBU official home page LSBU personal home page and publications on Archive.org SWU home page Jonathan P. Bowen on Microsoft Academic 1956 births Living people People from Oxford People educated at The Dragon School People educated at Bryanston School Alumni of University College, Oxford Computer science writers English computer scientists English non-fiction writers English book editors Formal methods people Members of the Department of Computer Science, University of Oxford Academics of Imperial College London Academics of the University of Reading Academics of London South Bank University Academics of University College London Academics of King's College London Academics of the University of Westminster Academics of Birmingham City University Silicon Graphics people British software engineers Software engineering researchers Academic journal editors Fellows of the British Computer Society Fellows of the Royal Society of Arts English male non-fiction writers
63393458
https://en.wikipedia.org/wiki/Kwan-Liu%20Ma
Kwan-Liu Ma
Kwan-Liu Ma is an American computer scientist. He was born and grew up in Taipei, Taiwan, and came to the United States in 1983 to pursue advanced study. He is a distinguished professor of computer science at the University of California, Davis. His research interests include visualization, computer graphics, human-computer interaction, and high-performance computing.

Biography

Ma received his B.S., M.S., and Ph.D. degrees, all in computer science, from the University of Utah in 1986, 1988, and 1993, respectively. From 1993 to 1999, Ma was a staff scientist at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, where he conducted research in scientific visualization and high-performance computing. Ma joined the UC Davis faculty in July 1999 and founded the Visualization and Interface Design Innovation (VIDI) research group and the UC Davis Center of Excellence for Visualization.

Ma is a leading researcher in Big Data visualization. He organized the NSF/DOE Workshop on Large Scientific and Engineering Data Visualization (with C. Johnson) in 1999 as well as the Panel on Visualizing Large Datasets: Challenges and Opportunities at ACM SIGGRAPH 1999. He participated in the NSF LSSDSV, ITR, and BigData programs, and led the DOE SciDAC Institute for Ultrascale Visualization, a five-year, multi-institution project. He and his students have demonstrated several advanced concepts for data visualization, such as in situ visualization (1995, 2006), visualization provenance (1999), hardware-accelerated volume visualization (2001), machine-learning-assisted volume visualization (2004), explorable images (2010), and machine-learning-assisted graph visualization (2017). Ma has published over 350 articles and given over 250 invited talks. Ma has been actively serving the research community by playing leading roles in several professional activities, including VizSec, Ultravis, EGPGV, IEEE VIS, IEEE PacificVis, and IEEE LDAV. He has served as a papers co-chair for SciVis, InfoVis, EuroVis, PacificVis, and Graph Drawing. Professor Ma was associate editor of the IEEE Transactions on Visualization and Computer Graphics (2007-2011), IEEE Computer Graphics and Applications (2007-2019), and the Journal of Computational Science and Discovery (2009-2014). He presently serves on the editorial boards of the Journal of Visualization, the Journal of Visual Informatics, and the journal Computational Visual Media. Ma is a member of the Luxuriant Flowing Hair Club for Scientists (LFHCfS).

Awards

1999 NSF CAREER Award
2000 NSF Presidential Early Career Award (PECASE)
2001 Schlumberger Foundation Technical Award
2007 UC Davis College of Engineering's Outstanding Mid-Career Research Faculty Award
2008, 2009, 2012 HP Labs Research Innovation Award
2012 IEEE Fellow
2013 IEEE VGTC Visualization Technical Achievement Award
2018 Distinguished Professor, UC Davis
2019 Inductee of the IEEE Visualization Academy

Selected publications

Oh-Hyun Kwon, Kwan-Liu Ma: A Deep Generative Model for Graph Layout. IEEE Trans. Vis. Comput. Graph. 26(1): 665-675 (2020) Takanori Fujiwara, Oh-Hyun Kwon, Kwan-Liu Ma: Supporting Analysis of Dimensionality Reduction Results with Contrastive Learning. IEEE Trans. Vis. Comput. Graph. 26(1): 45-55 (2020) Jianping Kelvin Li, Kwan-Liu Ma: P5: Portable Progressive Parallel Processing Pipelines for Interactive Data Analysis and Visualization. IEEE Trans. Vis. Comput. Graph.
26(1): 1151-1160 (2020) Min Shih, Charles Rozhon, Kwan-Liu Ma: A Declarative Grammar of Flexible Volume Visualization Pipelines. IEEE Trans. Vis. Comput. Graph. 25(1): 1050-1059 (2019) Oleg Igouchkine, Yubo Zhang, Kwan-Liu Ma: Multi-Material Volume Rendering with a Physically-Based Surface Reflection Model. IEEE Trans. Vis. Comput. Graph. 24(12): 3147-3159 (2018) Chris Bryan, Gregory Guterman, Kwan-Liu Ma, Harris A. Lewin, Denis M. Larkin, Jaebum Kim, Jian Ma, Marta Farre: Synteny Explorer: An Interactive Visualization Application for Teaching Genome Evolution. IEEE Trans. Vis. Comput. Graph. 23(1): 711-720 (2017) Oh-Hyun Kwon, Chris Muelder, Kyungwon Lee, Kwan-Liu Ma: A Study of Layout, Rendering, and Interaction Methods for Immersive Graph Visualization. IEEE Trans. Vis. Comput. Graph. 22(7): 1802-1815 (2016) Yuzuru Tanahashi, Chien-Hsin Hsueh, Kwan-Liu Ma: An Efficient Framework for Generating Storyline Visualizations from Streaming Data. IEEE Trans. Vis. Comput. Graph. 21(6): 730-742 (2015) Franz Sauer, Hongfeng Yu, Kwan-Liu Ma: Trajectory-Based Flow Feature Tracking in Joint Particle/Volume Datasets. IEEE Trans. Vis. Comput. Graph. 20(12): 2565-2574 (2014) Yubo Zhang, Kwan-Liu Ma: Spatio-temporal extrapolation for fluid animation. ACM Trans. Graph. 32(6): 183:1-183:8 (2013) Carlos D. Correa, Tarik Crnovrsanin, Kwan-Liu Ma: Visual Reasoning about Social Networks Using Centrality Sensitivity. IEEE Trans. Vis. Comput. Graph. 18(1): 106-120 (2012) Nathaniel Fout, Kwan-Liu Ma: Fuzzy Volume Rendering. IEEE Trans. Vis. Comput. Graph. 18(12): 2335-2344 (2012) Joyce Ma, Isaac Liao, Kwan-Liu Ma, Jennifer Frazier: Living Liquid: Design and Evaluation of an Exploratory Visualization Tool for Museum Visitors. IEEE Trans. Vis. Comput. Graph. 18(12): 2799-2808 (2012) Wei-Hsien Hsu, Kwan-Liu Ma, Carlos D. Correa: A rendering framework for multiscale views of 3D models. ACM Trans. Graph. 30(6): 131 (2011) Anna Tikhonova, Carlos D. Correa, Kwan-Liu Ma: Visualization by Proxy: A Novel Framework for Deferred Interaction with Volume Data. IEEE Trans. Vis. Comput. Graph. 16(6): 1551-1559 (2010) Carlos D. Correa, Kwan-Liu Ma: Dynamic video narratives. ACM Trans. Graph. 29(4): 88:1-88:9 (2010) Kwan-Liu Ma: In Situ Visualization at Extreme Scale: Challenges and Opportunities. IEEE Computer Graphics and Applications 29(6): 14-19 (2009) Michael Ogawa, Kwan-Liu Ma: code_swarm: A Design Study in Organic Software Visualization. IEEE Trans. Vis. Comput. Graph. 15(6): 1097-1104 (2009) Chris Muelder, Kwan-Liu Ma: Rapid Graph Layout Using Space Filling Curves. IEEE Trans. Vis. Comput. Graph. 14(6): 1301-1308 (2008) Zeqian Shen, Kwan-Liu Ma, Tina Eliassi-Rad: Visual Analysis of Large Heterogeneous Social Networks by Semantic and Structural Abstraction. IEEE Trans. Vis. Comput. Graph. 12(6): 1427-1439 (2006) Fan-Yin Tzeng, Eric B. Lum, Kwan-Liu Ma: An Intelligent System Approach to Higher-Dimensional Classification of Volume Data. IEEE Trans. Vis. Comput. Graph. 11(3): 273-284 (2005) Eric B. Lum, Kwan-Liu Ma, John P. Clyne: A Hardware-Assisted Scalable Solution for Interactive Volume Rendering of Time-Varying Data. IEEE Trans. Vis. Comput. Graph. 8(3): 286-301 (2002) T. J. Jankun-Kelly, Kwan-Liu Ma: Visualization Exploration and Encapsulation via a Spreadsheet-Like Interface. IEEE Trans. Vis. Comput. Graph. 7(3): 275-287 (2001) Kwan-Liu Ma: Visualizing Visualizations: User Interfaces for Managing and Exploring Scientific Visualization Data. 
IEEE Computer Graphics and Applications 20(5): 16-19 (2000) Kwan-Liu Ma, James S. Painter, Charles D. Hansen, Michael Krogh: Parallel volume rendering using binary-swap compositing. IEEE Computer Graphics and Applications 14(4): 59-68 (1994) References External links Kwan-Liu Ma, at University of California, Davis Living people American computer scientists University of Utah alumni University of California, Davis faculty 1960 births Information visualization experts Fellows of the Society for Industrial and Applied Mathematics Fellow Members of the IEEE Scientific computing researchers
1129607
https://en.wikipedia.org/wiki/Altix
Altix
Altix is a line of server computers and supercomputers produced by Silicon Graphics (and successor company Silicon Graphics International), based on Intel processors. It succeeded the MIPS/IRIX-based Origin 3000 servers.

History

The line was first announced on January 7, 2003, with the Altix 3000 series, based on Intel Itanium 2 processors and SGI's NUMAlink processor interconnect. At product introduction, the system supported up to 64 processors running Linux as a single system image and shipped with a Linux distribution called SGI Advanced Linux Environment, which was compatible with Red Hat Advanced Server. By August 2003, many SGI Altix customers were running Linux on 128- and 256-processor SGI Altix systems. SGI officially announced 256-processor support within a single system image of Linux on March 10, 2004, using a 2.4-based Linux kernel. The SGI Advanced Linux Environment was eventually dropped after support using a standard, unmodified SUSE Linux Enterprise Server (SLES) distribution for SGI Altix was provided with SLES 8 and SLES 9. Later, SGI Altix 512-processor systems were officially supported using an unmodified, standard Linux distribution with the launch of SLES 9 SP1. Besides full support of SGI Altix on SUSE Linux Enterprise Server, standard, unmodified Red Hat Enterprise Linux was also fully supported, starting with the SGI Altix 3700 Bx2 on RHEL 4 and RHEL 5, with system processor limits defined by Red Hat for those releases.

On November 14, 2005, SGI introduced the Altix 4000 series based on the Itanium 2. The Altix 3000 and 4000 are distributed shared memory multiprocessors. SGI later officially supported 1024-processor systems on an unmodified, standard Linux distribution with the launch of SLES 10 in July 2006. SGI Altix 4700 was also officially supported by Red Hat with RHEL 4 and RHEL 5; maximum processor limits were as defined by Red Hat for its RHEL releases. The Altix brand was used for systems based on multi-core Intel Xeon processors. These include the Altix XE rackmount servers, Altix ICE blade servers and Altix UV supercomputers. NASA's Columbia supercomputer, installed in 2004 and decommissioned in 2013, was a 10240-microprocessor cluster of twenty Altix 3000 systems, each with 512 microprocessors, interconnected with InfiniBand.

Altix 3000

The Altix 3000 is the first generation of Altix systems. It was succeeded by the Altix 4000 in 2005, and the last model was discontinued on December 31, 2006. The Altix 330 is an entry-level server. Unlike the high-end models, the Altix 330 is not "brick" based, but is instead based on 1U-high compute modules mounted in a rack and connected with NUMAlink. A single system may contain 1 to 16 Itanium 2 processors and 2 to 128 GB of memory. The Altix 1330 is a cluster of Altix 330 systems. The systems are networked with Gigabit Ethernet or 4X InfiniBand. The Altix 350 is a mid-range model that supports up to 32 Itanium 2 processors. Introduced in 2005, it runs Linux, rather than SGI's own Unix variant, IRIX. The Altix 350 is scalable from one to thirty-two 64-bit Intel Itanium processors. It features DDR SDRAM and PCI-X expansion ports, and can support SCSI or SATA internal hard drives. Designed as a rack-mount server, the Altix 350 is 2U, meaning it occupies two slots vertically in a standard server rack. The Altix 1350 is a cluster of Altix 350 systems. The Altix 350 was later superseded by the Altix 450 (based on the Itanium 2) and the Altix XE (based on the Xeon).
The Altix 3300 is a mid-range model supporting 4 to 12 processors and 2 to 48 GB of memory. It is packaged in a short (17U) rack. The Altix 3700 is a high-end model supporting 16 to 512 processors and 8 GB to 2 TB of memory. It requires one or more tall (39U) racks. A variant of the Altix 3000 with graphics capability is known as the Prism. The 3700 is based on the third-generation NUMAflex distributed shared memory architecture and uses the NUMAlink 4 interconnection fabric. The Altix 3000 supports a single system image of 64 processors. If there are more than 64 processors in a system, then the system must be partitioned. The basic building block is called a C-brick, which contains two nodes in a 4U-high rackmount unit. Each node contains two Intel Itanium 2 processors that connect to the Super-Bedrock application-specific integrated circuit through a single front-side bus. The Super-Bedrock is a crossbar switch for the processors, the local RAM, the network interface and the I/O interface. The two Super-Bedrock ASICs in each brick are connected internally by a single 6.4 GB/s NUMAlink 4 channel. A processor node also contains 16 DIMM slots that accept standard DDR DIMMs with capacities of 4 to 16 GB. The Altix 3700 Bx2 is a high-end model supporting 16 to 2,048 Itanium 2 processors and 12 GB to 24 TB of memory. It requires one or more tall (40U) racks.

Altix 4000

The Altix 4000 is the next Itanium-based product line. It has two models: the Altix 450, a mid-range server, and the Altix 4700, a high-end server. An Altix 4700 system contains up to 2048 dual-core Itanium 2 and Itanium ("Montvale" revision) microprocessor sockets, connected by the NUMAlink 4 interconnect in a fat tree network topology. The microprocessors are accompanied by up to 128 TB of memory (192 TB with single-microprocessor-socket blades and 16 GB DIMMs). Each node is contained within a blade that plugs into an enclosure, the individual rack unit (IRU). The IRU is a 10U enclosure that contains the necessary components to support the blades, such as the power supplies, two router boards (one for every five blades) and an L1 controller. Each IRU can support ten single-wide blades, or two double-wide blades and eight single-wide blades. The IRUs are mounted in a 42U-high rack, and each rack supports up to four IRUs. Blades contain one of two types of node: processor nodes or memory nodes. Compute blades contain a processor node and consist of two PAC611 sockets for Itanium 2 and Itanium microprocessors, a Super-Hub (SHub) application-specific integrated circuit (ASIC) (chipset) and eight dual in-line memory module (DIMM) slots for memory. The number of microprocessor sockets in a compute blade is one or two. One-socket configurations provide more bandwidth, as only one microprocessor socket is using the front-side bus and local memory. Two-socket configurations do not support hyperthreading. Memory blades are used to expand the amount of memory without increasing the number of processors. They contain a SHub ASIC and 12 DIMM slots. Both compute and memory blades support 1, 2, 4, 8, and 16 GB DIMMs, although SGI does not support installations with 16 GB DIMMs. Multiple servers can be combined on the same NUMAlink fabric up to the theoretical maximum of 8,192 nodes (16,384 OS CPUs).

Altix XE

The Altix XE servers use Intel Xeon x86-64-architecture processors.
Models include:

The Altix XE210 server supports up to two dual- or quad-core Intel Xeon processors (5100 or 5300 series), 32 GB DDR2 667 MHz FBDIMM memory, 1 x PCIe x8 (low profile) and 1 x PCI-X 133 MHz (full height) PCI slots, and three SATA/SAS drive bays.
The Altix XE240 server supports up to two dual- or quad-core Intel Xeon processors (5100 or 5300 series), 32 GB DDR2 667 MHz FBDIMM memory, two PCI slot configuration options (option 1: 2 x PCIe x4 (low profile), 2 x PCIe x4 (full height), 1 x PCIe x8 (full height); or option 2: 2 x PCIe x4 (low profile), 3 x PCI-X 133 MHz (full height), 1 x PCI-X 133 MHz (full height)), and five SATA/SAS drive bays.
The Altix XE250 server
The Altix XE270 server is a 2U configuration with Intel Xeon processor 5500 series, a choice of up to 18 DDR3 DIMMs (2 GB, 4 GB, or 8 GB DIMMs), 2 x PCIe x8 gen 2 (low profile), 1 x PCIe x4 gen 1 (low profile) and 2 x PCI-X 133/100 (low profile) PCI slots, and eight SATA or SAS drive bays with optional hardware RAID (0, 1, 5, 6, 10).
The Altix XE310 server was introduced on January 8, 2007, and contains two nodes per XE310, up to four dual- or quad-core Intel Xeon processors (5100 or 5300 series; two per node), 64 GB DDR2 667 MHz FBDIMM memory (32 GB per node), 2 x PCIe x8 PCI slots (1 per node), and four SATA/SAS drive bays (two per node).
The Altix XE320 server
The Altix XE340 server contains two compute nodes within a 1U configuration, Intel Xeon processor 5500 series, a choice of up to 12 DDR3 DIMMs per node (2 GB, 4 GB, or 8 GB DIMMs), 2 x PCIe x16 low-profile PCI slots (1 per node), and four SATA drive bays (2 per node) with optional SAS and hardware RAID 0, 1.
The Altix XE500 server is a 3U configuration with Intel Xeon processor 5500 series, a choice of up to 18 DDR3 DIMMs (2 GB, 4 GB, or 8 GB DIMMs), 2 x PCIe x16 gen2 (full height) and 4 x PCIe x8 gen2 (full height) PCI slots, and eight SATA or SAS drives with optional hardware RAID (0, 1, 5, 6, 10).
The Altix XE1200 cluster
The Altix XE1300 cluster

All Altix XE systems support Novell SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and Microsoft Windows. VMware support was added across the Altix XE product line.

Altix ICE

The Altix ICE blade platform is an Intel Xeon-based system featuring diskless compute blades and a Hierarchical Management Framework (HMF) for scalability, performance, and resiliency. While the earlier Itanium-based Altix systems run a single-system image (SSI) Linux kernel on 1024 processors or more using a standard SuSE Linux Enterprise Server (SLES) distribution, the Altix ICE's clustering capabilities use standard SLES or Red Hat Enterprise Linux distributions and scale to over 51,200 cores on NASA's Pleiades supercomputer. The Altix ICE 8200LX blade enclosure features two 4x DDR IB switch blades and one high-performing plane, and the Altix ICE 8200EX features four 4x DDR IB switch blades and two high-performing planes. Both configurations support either hypercube or fat tree topology, and 16 compute blades within an IRU. The IP-83 and IP-85 compute blades support Intel Xeon 5200 or 5400 Series processors, and the IP-95 compute blade supports Intel Xeon 5500 Series processors. In November 2011, the ICE 8400 was based on either Intel Xeon 5500 or 5600 processors or AMD Opteron 6100 series processors. Later configurations use Xeon Phi coprocessors.

Altix UV

The Altix UV supercomputer architecture was announced in November 2009.
Codenamed Ultraviolet during development, the Altix UV combines a development of the NUMAlink interconnect used in the Altix 4000 (NUMAlink 5) with quad-, six- or eight-core "Nehalem-EX" Intel Xeon 7500 processors. Altix UV systems run either SuSE Linux Enterprise Server or Red Hat Enterprise Linux, and scale from 32 to 2,048 cores with support for up to 16 Terabytes (TB) of shared memory in a single system image. In 2010 and 2011, SGI retired the Altix name for new servers produced by the company. Altix UV and Altix ICE have been shortened to "SGI UV" and "SGI ICE," while the Altix XE line is named "Rackable." References Silicon Graphics, Inc. (June 12, 2007). Altix 3000 Rackmount Owner's Guide. Silicon Graphics, Inc. (June 12, 2007). SGI Altix 1330 Cluster Datasheet. Silicon Graphics, Inc. (June 12, 2007). SGI Altix 330 Server Datasheet. Silicon Graphics, Inc. (June 12, 2007). SGI Altix 350 Server Datasheet. Silicon Graphics, Inc. (June 12, 2007). SGI Altix 3700 Bx2 Servers and Supercomputers Datasheet. External links SGI's webpage for Altix Linux Journal article regarding scaling of Linux on Altix New Altix Software Allows 256-Processor Linux System Linux Magazine about scaling Altix to 512p HPCwire interview about scaling Altix to 1024p SGI Altix Again Crushes World Record for Memory Bandwidth SGI Altix Servers Attain Common Criteria Security Certification Article on SGI ProPack, Real-time, and Cluster Support for Altix SGI Altix manuals and information SGI servers X86 supercomputers
12712929
https://en.wikipedia.org/wiki/Nero%20Vision
Nero Vision
Nero Video (known as Nero Vision until 15 October 2011) is video editing software from Nero AG that provides simple editing functions (in Express mode) as well as advanced video editing (Advanced mode), which includes multitrack timeline and key framing functions. Nero Video also provides a wide range of functions for including photos and music in video projects, as well as a broad selection of transition, video, audio and title effects. In addition, it includes templates for semi-automatic film creation and for picture-in-picture effects. Once editing is complete, users can export their finished film as a file or upload it to the web. They can also use Nero Video to burn the film onto DVDs and Blu-rays and personalize the discs' menu and chapters step by step. Video disc creation is a separate module and can be executed directly from the start screen. That enables users to complete their disc-only projects quickly and easily. This authoring module also includes simple cropping and arranging tools.

Development

The software demonstrates significant technical progress with each new version.

Nero Vision 4

In the first version, Nero Vision 4, users were already able to edit videos in a small template and had a fixed choice per film of one video channel, one effect channel, one text channel and two audio channels. They could not add images to their film at that time and could only edit menus within an automatic template.

Nero Vision 5

In this version, Nero added HD capability. Although the menu areas included more templates, there were no major innovations in this version.

Nero 9

The third version of the software included manual menu editing to enable the creation of additional submenus. This version also included the ability to add images to videos.

Nero Vision Xtra

This version was much more extensive and included multitrack editing as well as a refreshed look and feel. Nero added multiple new effects to this version as well as optional key frames on the timeline, helping users to vary things like the image size or effects during the film. The menu editing remained more or less unchanged, although users could now also burn their film to Blu-ray discs.

Nero Video 11

This version saw a name change to Nero Video, and its major new feature was the Express mode, which allowed easy yet fully functional video editing in a simplified timeline. It also included Express Effects, a range of predefined effect templates that users could drag and drop onto their clips. Nero also completely redesigned the start screen and simplified imports of AVCHD media. Older effects were split off into separate functions and new sound effects were added.

Nero Video 12

This version introduced several new effects, such as slow motion, time lapse, image stabilization, and retro designs.

Nero Video 2014

This version saw the addition of Nero Tilt-Shift effects and Nero RhythmSnap. Nero also added a drag and drop function to the start screen, enabling users to get started on video or disc projects faster. In addition, this version introduced 4K video and slide show editing capabilities.

Nero Video 2015

Along with typeface and font style enhancements, this version includes animated text effect templates, and the ability for users to create their own text effect templates. Moreover, the function to change the disc output format on the fly was introduced in the authoring module.
Nero Video 2016

Along with typeface and font style enhancements, this version includes animated text effect templates, more text effects, and the ability for users to create their own text effect templates. Furthermore, this version offers full 4K support: 4K video playback, 4K video footage and more.

Nero Video 2017

Along with typeface and font style enhancements, this version includes 4K video templates and video effects, and the ability for users to create their own text effect templates. Furthermore, this version can play videos with embedded subtitles and lets users drag and drop additional subtitle files into the playback function. Users can also export several individual videos from long videos in one go.

Product versions

Nero Video is part of Nero 2017 Classic, Nero 2017 Platinum, and Nero Video 2017. With the exception of the version included in Nero 2017 Classic, where there are slightly fewer effects and templates, and no 4K editing, all versions of Nero Video in the other products are identical. All products also include Nero MediaHome. Nero MediaHome enables users to more easily manage and play their images, videos and music files. Alongside audio CD ripping and the creation of playlists and slideshows, MediaHome includes important features that let users sort their media, including tagging, face recognition in photos, geolocation support and manual geotagging for photos and videos. Streaming to TVs and home media players is also included. Starting with version 2015, users can stream media directly to their mobile devices (iOS, Android, Amazon) with the Nero MediaHome Receiver app.

Overview of Nero Video features

The current version, Nero Video 2017, includes the following main functions: Streaming of videos from digital camcorders and other external devices (such as smartphones) 4K support Playback of videos with embedded subtitles Multi-export function: Export several single videos from long videos in one go. Video editing preview in brilliant quality for single and dual monitor Import of videos, photos, and music from USB thumb drives and hard disks TV recording with TV cards or sticks Streaming of DV and HDV material by connecting an IEEE 1394 camcorder Recording analog video material using A/D converter cards Import and editing of Microsoft PowerPoint projects Import via Nero MediaBrowser Express editing with simplified timeline Express effects (retro, Tilt-Shift, text, brightness, contrast, speed and many more) Advanced editing with multitrack capability (theoretically unlimited) Chroma key (aka GreenScreen) effects Slow motion and time lapse effects Picture-in-picture effects Video stabilization for shaky films Menu authoring and burning of DVDs and Blu-ray discs Adaptable menu editing (optional extra menus, several main menus etc.) Export of video files to hard disks and the web Export of audio files Support for multiple file formats, including MP4, MPEG2, FLV, AVI, WMA, etc.

References

External links

Official website

Video editing software Windows multimedia software Proprietary software
17921227
https://en.wikipedia.org/wiki/Windows%20Assessment%20and%20Deployment%20Kit
Windows Assessment and Deployment Kit
Windows Assessment and Deployment Kit (Windows ADK), formerly Windows Automated Installation Kit (Windows AIK or WAIK), is a collection of tools and technologies produced by Microsoft designed to help deploy Microsoft Windows operating system images to target computers or to a virtual hard disk image in VHD format. It was first introduced with Windows Vista. WAIK is a required component of Microsoft Deployment Toolkit.

History

Windows AIK Version 1.0 was released with Windows Vista. New or redesigned tools and technologies included Windows System Image Manager (Windows SIM), Sysprep, ImageX, and Windows Preinstallation Environment (WinPE) v2.0.

Windows AIK Version 1.1 was released with Windows Vista SP1 and Windows Server 2008. A number of new tools were introduced, including PostReflect and VSP1Cln. WinPE 2.1 could be customized more extensively. Supported operating systems include Windows Server 2008, Windows Vista, Windows Server 2003 SP1, Windows Server 2003 SP2 and Windows XP SP2.

Windows AIK Version 2.0 was released with the Windows 7 beta. Significantly, a single new tool, DISM, replaced several earlier tools including PEImg and IntlCfg, which were deprecated. User State Migration Tool (USMT) was added to this version of WAIK. Supported operating systems include Windows Server 2003 R2 SP2, Windows Vista SP1, Windows Server 2008, Windows 7 and Windows Server 2008 R2.

Windows AIK version 3.0 is exactly the same as 2.0; the version number has only been updated to correspond with the release of Service Pack 1 for Windows 7. Microsoft has also released a WAIK supplement for Windows 7 SP1. The WAIK readme references the WAIK supplement, which optionally adds WinPE v3.1 to a previously installed, compatible WAIK. Sysprep is not included with WAIK, but is instead included with the operating system. AIK has been renamed the Windows Assessment and Deployment Kit (ADK) for Windows 8 and includes the Windows OEM Preinstallation Kit. ImageX was removed from this version, as DISM offers all of its features.

Components

Application compatibility tools

This toolset consists of Compatibility Administrator and Standard User Analyzer.

Configuration Designer

DISM

Deployment Image Servicing and Management (DISM) is a command-line tool that can perform a large number of servicing tasks. It can query, configure, install and uninstall Windows features such as locale settings, language packs, optional components, device drivers, UWP apps, or Windows updates. DISM can perform these tasks on the live (running) Windows instance, an offline instance in another partition, or a Windows installation image inside a WIM file. Starting with Windows Server 2012, it can repair damaged or corrupt Windows files by downloading a fresh copy from the Internet. DISM has been part of Windows since Windows 7 and Windows Server 2008 R2. Before Windows Server 2012 and Windows 8, DISM had incorporated the majority of ImageX functions but not all; ImageX was still needed to capture the disk image for deployment. However, DISM deprecated ImageX in Windows 8. In addition, Windows 8 and Windows Server 2012 expose DISM services in PowerShell through 22 cmdlets for object-oriented scripting. This number has reached 62 in Windows 10 and Windows Server 2016.

Imaging and Configuration Designer

Preinstallation environment

WAIK includes Windows Preinstallation Environment, a lightweight version of Windows that can be booted via PXE, CD-ROM, USB flash drive or external hard disk drive and is used to deploy, troubleshoot or recover Windows environments.
It replaces MS-DOS boot disks, Emergency Repair Disk, Recovery Console and Automated System Recovery boot disks. Traditionally used by large corporations and OEMs (to preinstall Windows client operating systems onto PCs during manufacturing), WinPE is now available free of charge via WAIK. User state migration WAIK for Windows 7 includes User State Migration Tool v4.0, a command-line tool for transferring Windows user settings from one installation to another as part of an operating system upgrade or wipe-and-reload recovery, for example, to clean out a rootkit. USMT v4.0 can transfer settings from Microsoft Windows XP or later to Microsoft Windows Vista and later. Volume Activation Management Tool (VAMT) Windows Assessment Toolkit Described in Microsoft documents. Windows Performance Toolkit Described in Microsoft documents. Former components ImageX ImageX is the command-line tool used to create, edit and deploy Windows disk images in the Windows Imaging Format. Starting with Windows Vista, Windows Setup uses the WAIK API to install Windows. The first distributed prototype of ImageX was build 6.0.4007.0 (main.030212-2037). It allowed Microsoft OEM partners to experiment with the imaging technology and was developed in parallel with Longhorn alpha prototypes. It was first introduced into the Longhorn project at Milestone 4 and used in later Longhorn builds. Build 6.0.5384.4 (Beta 2) added significant capabilities over previous versions, such as read-only and read/write folder mounting, splitting into multiple image files (SWM), a WIM filter driver and the latest LZX compression algorithms. It has been used since the pre-RC (release candidate) builds of Windows Vista. References External links Windows administration Microsoft software
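As a rough illustration of the servicing and imaging workflow handled by the DISM and ImageX tools described above (the drive letters, paths and index numbers here are placeholders, and the exact command names and switches vary between Windows and kit versions), a typical sequence for customizing, capturing and applying an image might look like this:

Dism /Get-WimInfo /WimFile:D:\sources\install.wim
Dism /Mount-Image /ImageFile:D:\sources\install.wim /Index:1 /MountDir:C:\mount
Dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
Dism /Unmount-Image /MountDir:C:\mount /Commit
imagex /capture C: D:\images\custom.wim "Custom image"
imagex /apply D:\images\custom.wim 1 E:

On ADK releases where ImageX has been retired, the same capture and apply steps are performed with Dism /Capture-Image and Dism /Apply-Image, and a running installation can be repaired with Dism /Online /Cleanup-Image /RestoreHealth.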
34408890
https://en.wikipedia.org/wiki/WeChat
WeChat
WeChat () is a Chinese multi-purpose instant messaging, social media and mobile payment app developed by Tencent. First released in 2011, it became the world's largest standalone mobile app in 2018, with over 1 billion monthly active users. WeChat has been described as China's "app for everything" and a "super app" because of its wide range of functions. WeChat provides text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video conferencing, video games, sharing of photographs and videos and location sharing. User activity on WeChat is analyzed, tracked and shared with Chinese authorities upon request as part of the mass surveillance network in China. WeChat censors politically sensitive topics in China. Data transmitted by accounts registered outside China is surveilled, analyzed and used to build up censorship algorithms in China. In response to a border dispute between India and China, WeChat was banned in India in June 2020 along with several other Chinese apps. U.S. President Donald Trump sought to ban U.S. "transactions" with WeChat through an executive order but was blocked by a preliminary injunction issued in the United States District Court for the Northern District of California in September 2020. History By 2010, Tencent had already attained a massive user base with its desktop messenger app QQ. Recognizing that smartphones were likely to disrupt this status quo, CEO Pony Ma sought to proactively invest in alternatives to Tencent's own QQ messenger app. WeChat began as a project at the Tencent Guangzhou Research and Project Center in October 2010. The original version of the app was created by Allen Zhang, named "Weixin" () by Pony Ma, and launched in 2011. User adoption of WeChat was initially very slow, with users wondering why key features were missing; however, after the release of the walkie-talkie-like voice messaging feature in May of that year, growth surged. By 2012, when the number of users reached 100 million, Weixin was re-branded "WeChat" for the international market. During a period of government support for e-commerce development, reflected for example in the 12th five-year plan (2011–2015), WeChat added features enabling payments and commerce in 2013; these saw massive adoption after its virtual red envelope promotion for Chinese New Year 2014. WeChat had over 889 million monthly active users by 2016, and as of 2019 WeChat's monthly active users had risen to an estimated one billion. As of January 2022, it was reported that WeChat had more than 1.2 billion users. After the launch of WeChat Pay in 2013, its users reached 400 million the next year, 90 percent of whom were in China. By comparison, Facebook Messenger and WhatsApp had about one billion monthly active users in 2016 but did not offer most of the other services available on WeChat. For example, in Q2 2017, WeChat's revenues from social media advertising were about US$0.9 billion (RMB6 billion) compared with Facebook's total revenues of US$9.3 billion, 98% of which were from social media advertising. WeChat's revenues from its value-added services were US$5.5 billion. Features Messaging WeChat provides many features similar to those of Snapchat, such as text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video calls and conferencing, video games, photograph and video sharing, and location sharing.
WeChat also allows users to exchange contacts with people nearby via Bluetooth, as well as providing various features for contacting people at random if desired (provided both parties are open to it). It can also integrate with other social networking services such as Facebook and Tencent QQ. Photographs may also be embellished with filters and captions, and an automatic translation service is available. WeChat supports several instant messaging methods, including text messages, voice messages, walkie-talkie, and stickers. Users can share previously saved or live pictures and videos, profiles of other users, coupons, lucky money packages, or their current GPS location with friends, either individually or in a group chat. WeChat's character stickers, such as Tuzki, resemble and compete with those of LINE, a Japanese-South Korean messaging application. WeChat also provides a message recall feature that allows users to withdraw content (e.g. images, documents) sent within the previous two minutes of a conversation. To use this feature, users long-press the message or file to be recalled, then select 'recall' and 'ok' in the menu that appears to complete the withdrawal. The selected messages or files are then removed from the chat on both the sender's and the recipient's phones. Public accounts WeChat users can register as a public account (), which enables them to push feeds to subscribers, interact with subscribers and provide them with services. Users can also create an official account, which falls under one of three categories: service, subscription, or enterprise accounts. Once individuals or organizations set up an account of one type, they cannot change it to another type. By the end of 2014, the number of WeChat official accounts had reached 8 million. Official accounts of organizations can apply to be verified (at a cost of 300 RMB, or about US$45). Official accounts can be used as a platform for services such as hospital pre-registration, visa renewal or credit card services. To create an official account, the applicant must register with Chinese authorities, which discourages "foreign companies". Moments "Moments" () is WeChat's brand name for its social feed of friends' updates. "Moments" is an interactive platform that allows users to post images, text, and short videos they have taken. It also allows users to share articles and music (associated with QQ Music or other web-based music services). Friends in the contact list can give a thumbs up to the content and leave comments. Moments can be linked to Facebook and Twitter accounts, and can automatically post Moments content directly on these two platforms. In 2017, WeChat had a policy of a maximum of two advertisements per day per Moments user. Privacy in WeChat works by groups of friends: only friends from the user's contact list are able to view their Moments content and comments. A user's friends can see likes and comments from other users only if they share a mutual friend group. For example, friends from high school cannot see the comments and likes left by friends from university. When users post a Moment, they can separate their friends into groups and decide whether that Moment can be seen by particular groups of people. Posted content can be set to "Private", in which case only the user can view it.
WeChat Pay digital payment services Users who have provided bank account information may use the app to pay bills, order goods and services, transfer money to other users, and pay in stores if the stores have a WeChat payment option. Vetted third parties, known as "official accounts", offer these services by developing lightweight "apps within the app". Users can link their Chinese bank accounts, as well as Visa, MasterCard and JCB cards. WeChat Pay () is a digital wallet service incorporated into WeChat, which allows users to perform mobile payments and send money between contacts. Although users receive immediate notification of the transaction, the WeChat Pay system is not an instant payment instrument, because the funds transfer between counterparts is not immediate. The settlement time depends on the payment method chosen by the customer. All WeChat users have their own WeChat Payment accounts. Users can acquire a balance by linking their WeChat account to their debit cards, or by receiving money from other users. For non-Chinese users of WeChat Pay, an additional identity verification process of providing a photo of a valid ID is required before certain functions of WeChat Pay become available. Users who link their credit card can only make payments to vendors, and cannot use this to top up WeChat balances. WeChat Pay can be used for digital payments, as well as payments from participating vendors. As of March 2016, WeChat Pay had over 300 million users. WeChat Pay's main competitor in China and the market leader in online payments is Alibaba Group's Alipay. Alibaba founder Jack Ma considered the red envelope feature to be a "Pearl Harbor moment", as it began to erode Alipay's historic dominance in the online payments industry in China, especially in peer-to-peer money transfer. The success prompted Alibaba to launch its own version of virtual red envelopes in its competing Laiwang service. Other competitors, Baidu Wallet and Sina Weibo, also launched similar features. In 2019 it was reported that WeChat had overtaken Alibaba with 800 million active WeChat mobile payment users versus 520 million for Alibaba's Alipay. However, Alibaba had a 54 per cent share of the Chinese mobile online payments market in 2017 compared to WeChat's 37 per cent share. In the same year, Tencent introduced "WeChat Pay HK", a payment service for users in Hong Kong; transactions are carried out in the Hong Kong dollar. In 2019 it was reported that Chinese users could use WeChat Pay in 25 countries outside China, including Italy, South Africa and the UK. Enterprise WeChat For workplace and business communication, a special version of WeChat called Enterprise WeChat (or Qiye Weixin) was launched in 2016. The app was meant to help employees separate work from private life. In addition to the usual chat features, the program let companies and their employees keep track of annual leave days and expenses that need to be reimbursed; employees could ask for time off or clock in to show they were at work. WeChat Mini Program In 2017, WeChat launched a feature called "Mini Programs" (). A mini program is an app within an app. Business owners can create mini apps in the WeChat system, implemented using JavaScript plus a proprietary API. Users may install these inside the WeChat app. In January 2018, WeChat announced a record of 580,000 mini programs.
With one Mini Program, consumers could scan a Quick Response code using their mobile phone at a supermarket counter and pay the bill through their WeChat mobile wallet. WeChat games have proved hugely popular; as of January 2018, the "Jump Jump" game had attracted 400 million players in less than three days and reached 100 million daily active users within two weeks of its launch. WeChat Channels WeChat Channels, launched in 2020, is a short-video platform within WeChat that allows users to create and share short video clips and photos to their own WeChat Channel. Users can also discover content posted to other Channels via the built-in feed. Each post can include hashtags, a location tag, a short description, and a link to a WeChat Official Account article. In September 2021, it was reported that WeChat Channels had begun allowing users to upload hour-long videos, twice the duration limit previously imposed on all WeChat Channels videos. Comparisons are often drawn between WeChat Channels and TikTok (or Douyin) for their similarity in features. In January 2022, there were reports that WeChat was set to diversify further and place more emphasis on new products and services like WeChat Channels, amid new regulatory restrictions imposed in China. By June 2021, WeChat Channels had accumulated over 200 million users. More than 27 million people used the platform to watch Irish boy band Westlife's online concert in 2021, and 15 million users viewed the Shenzhou 12 spaceflight launch using the service. Others In 2015, WeChat offered a heat map feature that showed crowd density. Quartz columnist Josh Horwitz alleged that the feature was being used by the Chinese government to track irregular gatherings of people and identify unlawful assemblies. In January 2016, Tencent launched WeChat Out, a VoIP service allowing users to call mobile phones and landlines around the world. The feature allowed purchasing credit within the app using a credit card. WeChat Out was originally only available in the United States, India, and Hong Kong, but coverage was later expanded to Thailand, Macau, Laos, and Italy. In March 2017, Tencent released WeChat Index. By entering a search term on the WeChat Index page, users could check the popularity of the term over the past 7, 30, or 90 days. The data was mined from official WeChat accounts, and metrics such as social sharing, likes and reads were used in the evaluation. In May 2017, Tencent added news feed and search functions to its WeChat app. The Financial Times reported this was a "direct challenge to Chinese search engine Baidu". WeChat allowed people to add friends by a variety of methods, including searching by username or phone number, adding from phone or email contacts, playing a "message in a bottle" game, or viewing nearby people who are also using the same service. In 2015, WeChat added a "friend radar" function. In 2017, WeChat was reported to be developing an augmented reality (AR) platform as part of its service offering. Its artificial intelligence team was working on a 3D rendering engine to create a realistic appearance of detailed objects in smartphone-based AR apps. It was also developing simultaneous localization and mapping technology, which would help calculate the position of virtual objects relative to their environment, enabling AR interactions without the need for markers such as Quick Response codes or special images. In late 2019, WeChat released a dark theme for Android users.
It was released for iOS in early 2020, amid rumors that WeChat would be removed from the Apple App Store if it did not release a dark theme. In spring 2020, WeChat began allowing users to change their WeChat ID once per year; prior to this, a WeChat ID could not be changed more than once. On 17 June 2020, WeChat released a new add-on called "WeChat Nudge". A feature of this kind was first introduced in MSN Messenger 7.0 in 2005; it was called Buzz in Yahoo! Messenger and was interoperable with MSN Messenger's Nudge. Similar to MSN Messenger and Yahoo! Messenger, users can access WeChat Nudge by double-clicking on another user's profile in the chat. This virtually shakes that user's profile photo and sends a vibration notification. Both users must have the latest WeChat update (7.0.13). A user without the latest update cannot nudge another user but can still receive nudges. A user can only nudge another user if they have had a previous conversation; newly added friends without previous messages cannot nudge each other. On January 21, 2021, WeChat released its iOS version 8.0, which added animated emoji such as Bomb, Fireworks, and Celebration. When these are sent or received in the chat box, an explosion animation pops up. WeChat Business WeChat Business () is a mobile social-network business model that emerged after e-commerce; it utilizes business relationships and friendships to maintain customer relationships. Compared with traditional e-commerce businesses such as JD.com and Alibaba, WeChat Business offers wide reach and potential profit with lower investment and a lower barrier to entry, which has attracted many people to join. Marketing modes B2C Mode This is the main profit mode of WeChat Business: launching advertisements and providing services through a WeChat Official Account. This mode has been used by many hospitals, banks, fashion brands, internet companies and personal blogs because the Official Account can access online payment, location sharing, voice messages, and mini-games. It is like a 'mini app', so the company has to hire dedicated staff to manage the account. By 2015, there were more than 100 million WeChat Official Accounts on the platform. C2C Mode In this mode, individuals act as WeChat salespeople promoting products. Individual sellers post photos and messages about the products they represent on WeChat Moments or in WeChat groups and sell the products to their WeChat friends. They also develop friendships with their customers by sending greetings during festivals or commenting under their Moments updates to build trust. Continued communication with regular customers also increases word-of-mouth ('WOM') referrals, which influence decision-making. Some WeChat merchants already have an online shop on Taobao, but use WeChat to maintain relationships with existing customers. Existing problems As more and more people have joined WeChat Business, many problems have emerged. For example, some sellers have begun to sell fake luxury goods such as bags, clothes and watches. Some of them have special channels to obtain high-quality fake luxury products and sell them at a low price. Moreover, some sellers have even disguised themselves as international flight attendants or overseas students and posted fake stylish photos on WeChat Moments.
They then claim that they can provide overseas purchasing services but sell fake luxury goods at the same price as genuine ones. Facial masks are another popular product sold on WeChat. The marketing model resembles that of Amway, but most goods are unbranded products from illegal factories that contain excess hormones, which could have serious effects on customers' health. However, it is difficult for customers to defend their rights because many sellers' identities are unverified. Additionally, the lack of any supervision mechanism in WeChat Business provides opportunities for criminals to continue this illegal behavior. Marketing Campaigns In a 2016 campaign, users could upload a paid photo on "Moments" and other users could pay to see the photo and comment on it. The photos were taken down each night. Collaborations In 2014, Burberry partnered with WeChat to create its own WeChat apps around its fall 2014 runway show, giving users live streams from the shows. Another brand, Michael Kors, used WeChat to give live updates from its runway show, and later to run the "Chic Together" WeChat photo contest campaign. In 2016, L'Oréal China cooperated with Papi Jiang to promote its products; over one million people watched her first video promoting L'Oréal's beauty brand MG. In 2016, WeChat partnered with 60 Italian companies (WeChat had an office in Milan), which were able to sell their products and services on the Chinese market without having to get a license to operate a business in China. In 2017, Andrea Ghizzoni, European director of Tencent, said that 95 percent of global luxury brands used WeChat. Platforms WeChat's mobile phone app is available only for Android and iOS. BlackBerry, Windows Phone, and Symbian phones were previously supported. However, as of 22 September 2017, WeChat was no longer working on Windows Phones, and the company ceased development of the app for Windows Phones before the end of 2017. Although web-based OS X and Windows clients exist, these require the user to have the app installed on a supported mobile phone for authentication, and neither message roaming nor 'Moments' is provided. Thus, without the app on a supported phone, it is not possible to use the web-based WeChat clients on a computer. The company also provides WeChat for Web, a web-based client with messaging and file transfer capabilities. Other functions cannot be used on it, such as the detection of nearby people, or interacting with Moments or Official Accounts. To use the web-based client, it is necessary to first scan a QR code using the phone app. This means it is not possible to access the WeChat network if a user does not possess a suitable smartphone with the app installed. WeChat could be accessed on Windows using BlueStacks until December 2014. After that, WeChat blocked Android emulators, and accounts that signed in from emulators could be frozen. There have been some reported issues with the web client. Specifically, when using English, some users have experienced autocorrect, autocomplete, auto-capitalization, and auto-deletion behavior as they typed messages, and even after a message was sent. For example, "gonna" was autocorrected to "go", the E's were auto-deleted in "need", "wechat" was auto-capitalized to "Wechat" but not "WeChat", and after the message was sent, "don't" got auto-corrected to "do not".
However, words auto-corrected after a message was sent still appeared on the phone app as the user had originally typed them ("don't" was seen on the phone app whereas "do not" was seen on the web client). Users could translate foreign-language text during a conversation, as well as text posted on Moments. WeChat supports video calls with multiple people, not only one-to-one calls. Controversies State surveillance and intelligence gathering WeChat operates from China under Chinese law, which includes strong censorship provisions and interception protocols. Its parent company is obliged to share data with the Chinese government under the China Internet Security Law and National Intelligence Law. WeChat can access and expose the text messages, contact books, and location histories of its users. Due to WeChat's popularity, the Chinese government uses WeChat as a data source to conduct mass surveillance in China. Some states and regions, such as India, Australia, the United States, and Taiwan, fear that the app poses a threat to national or regional security for various reasons. In June 2013, the Indian Intelligence Bureau flagged WeChat for security concerns. India debated whether it should ban WeChat over the possibility that too much personal information and data could be collected from its users. In Taiwan, legislators were concerned that the potential exposure of private communications was a threat to regional security. In 2016, Tencent was awarded a score of zero out of 100 in an Amnesty International report ranking technology companies on the way they implement encryption to protect the human rights of their users. The report placed Tencent last out of a total of 11 companies, including Facebook, Apple, and Google, for the lack of privacy protections built into WeChat and QQ. The report found that Tencent did not make use of end-to-end encryption, a system that allows only the communicating users to read the messages. It also found that Tencent did not recognize online threats to human rights, did not disclose government requests for data, and did not publish specific data about its use of encryption. A September 2017 update to the platform's privacy policy detailed that log data collected by WeChat included search terms, profiles visited, and content that had been viewed within the app. Additionally, metadata related to the communications between WeChat users, including call times, duration, and location information, was also collected. This information, which was used by Tencent for targeted advertising and marketing purposes, might be disclosed to representatives of the Chinese government: To comply with an applicable law or regulations. To comply with a court order, subpoena, or other legal process. In response to a request by a government authority, law enforcement agency, or similar body. In May 2020, Citizen Lab published a study which claimed that WeChat monitors foreign chats to hone its censorship algorithms. On August 14, 2020, Radio Free Asia reported that in 2019, Gao Zhigang, a citizen of Taiyuan city, Shanxi Province, China, had used WeChat to forward a video to his friend Geng Guanjun in the USA. Gao was later convicted on the charge of "picking quarrels and provoking troubles" and sentenced to ten months' imprisonment. The court documents show that China's network management and propaganda departments directly monitor WeChat users, and that the Chinese police used big-data facial recognition technology to identify Geng Guanjun as an overseas democracy activist.
In September 2020, Chevron Corporation mandated that its employees delete WeChat from company-issued phones. Privacy issues Users inside and outside of China have also expressed concern about the app's privacy issues. Human rights activist Hu Jia was jailed for three years for sedition. He speculated that officials of the Internal Security Bureau of the Ministry of Public Security had listened to the voice messages he sent to his friends, because they repeated the contents of those messages back to him. Chinese authorities have further accused the WeChat app of threatening individual safety. China Central Television (CCTV), a state-run broadcaster, featured a piece in which WeChat was described as an app that helped criminals due to its location-reporting features. CCTV gave an example of such accusations by reporting the murder of a single woman by a man she had met on WeChat, who killed her after attempting to rob her. The location-reporting feature, according to reports, was how the man knew the victim's whereabouts. Authorities within China have linked WeChat to numerous crimes. The city of Hangzhou, for example, reported over twenty crimes related to WeChat in the span of three months. XcodeGhost malware In 2015, Apple published a list of the top 25 most popular apps infected with the XcodeGhost malware, confirming earlier reports that version 6.2.5 of WeChat for iOS was infected with it. The malware originated in a counterfeit version of Xcode (dubbed "XcodeGhost"), Apple's software development tools, and made its way into the compiled app through a modified framework. Despite Apple's review process, WeChat and other infected apps were approved and distributed through the App Store. Even though some sources claimed that the malware was capable of prompting the user for their account credentials, opening URLs and reading the device's clipboard, Apple responded that the malware was not capable of doing "anything malicious" or transmitting any personally identifiable information beyond "apps and general system information" and that it had no information suggesting that this had happened. Some commentators considered this to be the largest security breach in the App Store's history. Current ban in India In June 2020, the Government of India banned WeChat along with 58 other Chinese apps, citing data and privacy issues, in response to a border clash between India and China earlier in the year. The banned Chinese apps were "stealing and surreptitiously transmitting users' data in an unauthorized manner to servers which have locations outside India," and were "hostile to national security and defense of India", claimed India's Ministry of Electronics and Information Technology. Previous ban in Russia On 6 May 2017, Russia blocked access to WeChat for failing to give its contact details to the Russian communications watchdog. The ban was swiftly lifted on 11 May 2017 after Tencent provided "relevant information" for registration to the Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor). Ban and injunction against ban in the United States On August 6, 2020, U.S. President Donald Trump signed an executive order, invoking the International Emergency Economic Powers Act, seeking to ban WeChat in the U.S. within 45 days due to its connections with Chinese-owned Tencent. This was signed alongside a similar executive order targeting TikTok and its Chinese owner ByteDance.
The Department of Commerce issued orders on September 18, 2020 to enact the ban on WeChat and TikTok by the end of September 20, 2020, citing national security and data privacy concerns. The measures banned the transferring or processing of funds through WeChat in the U.S. and barred any company from offering hosting, content delivery networks or internet transit to WeChat. Magistrate Judge Laurel Beeler of the United States District Court for the Northern District of California issued a preliminary injunction blocking the Department of Commerce order on both TikTok and WeChat on September 20, 2020, based on respective lawsuits filed by TikTok and the US WeChat Users Alliance, citing the merits of the plaintiffs' First Amendment claims. The Justice Department had previously asked Beeler not to block the order to ban the apps, saying that doing so would undermine the president's ability to deal with threats to national security. In her ruling, Beeler said that while the government had established that Chinese government activities raised significant national security concerns, it showed little evidence that the WeChat ban would address those concerns. On June 9, 2021, U.S. President Joe Biden signed an executive order revoking the ban on WeChat and TikTok. Instead, he directed the commerce secretary to investigate foreign influence exerted through the apps. Notorious Markets list In 2022, the Office of the United States Trade Representative added WeChat's e-commerce ecosystem to its list of Notorious Markets for Counterfeiting and Piracy. Censorship Global censorship Starting in 2013, reports arose that Chinese-language searches even outside China were being keyword filtered and then blocked. This occurred not only on traffic incoming to China from foreign countries but also on traffic exchanged exclusively between foreign parties (the service had already censored its communications within China). In the international example of blocking, a message was displayed on users' screens: "The message "南方周末" you sent contains restricted words. Please check it again." These are the Chinese characters for a Guangzhou-based paper called Southern Weekly (or, alternatively, Southern Weekend). The next day, Tencent released a statement addressing the issue, saying: "A small number of WeChat international users were not able to send certain messages due to a technical glitch this Thursday. Immediate actions have been taken to rectify it. We apologize for any inconvenience it has caused to our users. We will continue to improve the product features and technological support to provide a better user experience." WeChat planned to build two different platforms to avoid this problem in the future: one for the Chinese mainland and one for the rest of the world. The problem existed because WeChat's servers were all located in China and thus subject to its censorship rules. Following the overwhelming victory of pro-democracy candidates in the 2019 Hong Kong local elections, WeChat censored messages related to the election and disabled the accounts of posters in other countries such as the U.S. and Canada. Many of those targeted were of Chinese ancestry. In 2020, WeChat started censoring messages concerning the COVID-19 pandemic. In December 2020, WeChat blocked a post by Australian Prime Minister Scott Morrison during a diplomatic spat between Australia and China. In his WeChat post, Morrison had criticized a doctored image posted by a Chinese diplomat and praised the Chinese-Australian community.
According to Reuters, the company said it had blocked the post because it "violated regulations, including distorting historical events and confusing the public." Two censorship systems In 2016, the Citizen Lab published a report saying that WeChat was using different censorship policies in mainland China and other areas. They found that: Keyword filtering was only enabled for users who registered via phone numbers from mainland China; Users no longer received notices when messages were blocked; Filtering was stricter in group chats; Keywords were not static, and some newly censored keywords appeared in response to current news events; The internal browser in WeChat blocked Chinese accounts from accessing certain websites, such as gambling sites, Falun Gong sites and critical reports on China, while international users were not blocked except from some gambling and pornography websites. Restricting sharing websites in "Moments" In 2014, WeChat announced that, according to "related regulations", domains of web pages shared on WeChat Moments needed to obtain an Internet Content Provider (ICP) license by 31 December 2014 to avoid being restricted by WeChat. Censorship in Iran In September 2013, WeChat was blocked in Iran. The Iranian authorities cited WeChat Nearby (Friend Radar) and the spread of pornographic content as the reasons for the block. The Committee for Determining Instances of Criminal Content (a working group under the supervision of the attorney general) website FAQ says: Because WeChat collects phone data and monitors member activity, and because the app developers are outside of the country and not cooperating, this software has been blocked, so you can use domestic applications for cheap voice calls, video calls and messaging. On 4 January 2018, WeChat was unblocked in Iran. Crackdown on LGBTQ accounts in China On July 6, 2021, several WeChat accounts associated with China's university-campus LGBTQ movement were blocked and then deleted without warning. Some of the accounts, which consisted of a mix of registered student clubs and unofficial grassroots groups, had operated for years as safe spaces for China's LGBTQ youth, with tens of thousands of followers. Many of the closed WeChat accounts displayed messages saying that they had "violated" Internet regulations, without giving further details; account names were deleted and replaced with "unnamed", along with a notice claiming that all content had been blocked and the accounts suspended after relevant complaints were received. The U.S. State Department expressed concern that the accounts were deleted even though their owners were merely expressing their views and exercising their rights to freedom of expression and speech. Several groups that had their accounts deleted spoke out against the ban, with one stating "[W]e hope to use this opportunity to start again with a continued focus on gender and society, and to embrace courage and love".
See also Comparison of cross-platform instant messaging clients Comparison of instant messaging protocols Comparison of LAN messengers Comparison of VoIP software List of SIP software List of video telecommunication services and product brands References External links Software companies established in 2011 2011 software Android (operating system) software BlackBerry software Chinese brands Communication software Instant messaging clients IOS software Mobile applications Mobile telecommunication services Organizations that oppose LGBT rights Proprietary cross-platform software Symbian software Super-apps Tencent software Universal Windows Platform apps WatchOS software Delisted applications Internet properties established in 2011 Tencent Notorious markets
19922472
https://en.wikipedia.org/wiki/CT%20Connect
CT Connect
CT Connect is a software product that allows computer applications to monitor and control telephone calls. This monitoring and control is called computer-telephone integration, or CTI. CT Connect implements CTI by providing server software that supports the CTI link protocols used by a range of telephone systems, and client software that provides an application programming interface (API) for telephony functions. CT Connect is used most frequently in call center applications. Large call centers must handle huge volumes of calls, and the coordination of calls with business applications is essential. Software Function and Structure CT Connect software is not a CTI application in itself; rather, it is a software component that communicates with telephone systems, converts telephone call status and control information to a standardized form, and presents that information to third-party applications. This component approach contrasts with that of integrated offerings such as those from Genesys Telecommunications Laboratories that combine the low-level telephone interface function with higher-level application logic such as retrieving and displaying information from a data base. Software developers seeking to include telephone-related functions in their applications incorporate CT Connect software modules into their projects. The CT Connect server module must be installed on a computer that has a special CTI communication link to the telephone system. The CT Connect client module is installed on the same computer as the developer's application software. The client module presents an API for application telephone functions such as dialing a new call or generating an alert for an incoming call. The client module executes these requests by signaling them to the server module, which in turn requests the function from the telephone system. The client-server structure allows multiple applications running on multiple computers to share access to a single telephone system. CTI Standards The principal standards specification for CTI is ECMA Computer-supported Telecommunications Applications or CSTA. The CSTA standard specifies a call model (that is, how parties participate in a call and the steps a call goes through as it proceeds) and a set of messages that can be exchanged between a telephone system and a computer system. When both telephone systems and computer systems implement the CSTA standard, a customer can choose freely among competing products with confidence that they will interoperate. As described more fully below, the CT Connect development team participated in the definition of CSTA. CT Connect and its predecessor, Digital Equipment's Computer Integrated Telephony product line, implement the CSTA call model and support the relevant CSTA protocol standards. Several telecommunications equipment vendors have used CT Connect as a laboratory CSTA reference to test their own products' compliance with the standard. Environment Independence CTI servers like CT Connect must operate in a heterogeneous computing environment and must interoperate with other system components chosen by application developers and users. CT Connect takes an "open" stance towards other system components, interoperating with a wider range of components than other CTI servers. This open stance is undoubtedly one reason that CT Connect has survived for 20 years while other competitors such as IBM CallPath and Novell Telephony Services have been discontinued. 
For CTI servers, important dimensions of interoperation are: Telephone System Independence Many telephone switch manufacturers have implemented CTI links for their products. Some of these are proprietary protocols and some are implementations of the ECMA CSTA standard. Some of these manufacturers also offer their own CTI servers (such as Nortel's Contact Center Manager Server), but those servers generally operate only with that single manufacturer's telephone system. By contrast, the companies that have owned and marketed CT Connect over the years (see the CT Connect history below) have made arrangements with most major PBX manufacturers to get access to their protocol specifications and implement their CTI link protocols. API Independence Computer-telephone APIs have proliferated. Microsoft introduced TAPI, Novell introduced TSAPI, Sun Microsystems introduced JTAPI, and a group of vendors formed the Versit Consortium to better integrate the existing protocol standards and APIs. Some manufacturers have attempted to use CTI API specifications to restrict users to their other products. For example, the Microsoft TAPI specification can be implemented only on the Microsoft Windows family of operating systems, and the Novell Telephony Services CTI server (now discontinued) could be used only on Novell networks. By contrast, CT Connect's client-server architecture permits client modules that offer various CTI APIs, and all client modules can interoperate with a common CTI server module. Operating System Independence Like CTI APIs (see above), most CTI servers operate under only one operating system. By contrast, CT Connect has historically included client and server modules that operate under a range of popular operating systems including the Unix and Microsoft Windows families. History CT Connect has had a significant role in the conception and evolution of the CTI concept and its implementation. The product has been owned and marketed by three companies over its 20-year lifetime. Digital Equipment Corporation The software that eventually became CT Connect was originally developed in the late 1980s at Digital Equipment Corporation. During the 1980s, telephony was evolving from analog to digital technology. The international telecommunications standards body CCITT (now the ITU) published specifications for the Integrated Services Digital Network (ISDN). The ISDN specifications define digital interfaces between a subscriber and the network that can simultaneously support multiple telephone calls and packet data transmission. In the 1980s Digital Equipment was a leader in computer networking with its DECnet software. Digital Equipment studied the ISDN specifications and concluded that the ISDN interface lacked the ability to coordinate a voice call with related data. The Digital Equipment team named this capability computer integrated telephony (or CIT) and evangelized the concept among vendors and customers. (The acronym for the original Digital Equipment product line, CIT, is frequently confused with the industry acronym for any form of computer-telephone integration, CTI. The latter term was adopted by members of the MultiMedia Telecommunications Association (MMTA), a prominent US industry association. The MMTA was incorporated into a larger trade association, TIA, in 2000.) (See the references below for early expositions of the CTI concept by Digital Equipment.)
Digital Equipment needed the commercial cooperation of telecommunications equipment manufacturers because their telephone switching systems had to be modified to report telephone call information via new CTI data links. Two Canadian private branch exchange (PBX) manufacturers were interested in the idea: Northern Telecom (now Nortel) and Mitel, which at that time was controlled by British Telecom. Digital Equipment worked with both of these companies – one in Canada and one in the United Kingdom – to design and implement CTI data links between their respective products. Both efforts were successful, and systems using PBXs from both companies were shown at the quadrennial CCITT exhibition in Geneva in the fall of 1987. Digital Equipment released its first CTI software products, operating with the Mitel SX-20 and Northern Telecom SL-1 PBXs, the following year. Between 1988 and 1992, the Digital Equipment team approached more telecommunications equipment manufacturers (including Siemens ROLM and the former AT&T, whose Definity PBX product line is now owned by Avaya) and implemented the additional protocols needed to interoperate with their telephone systems. However, it quickly became clear that the number of proprietary protocols was becoming unmanageable. Call models differed between telephone systems, making it difficult to write CTI-enabled application software. The need for a standardized cross-vendor CTI call model and supporting protocol was becoming apparent. A group of computer and telecommunications equipment vendors interested in this problem approached the European Computer Manufacturers' Association (now simply known as ECMA) with a proposal to undertake this standardization. The proposal was accepted, and Robert Roden, an architect from the Digital Equipment technical team, was chosen as convenor (chairperson) for the standards work. Phase I (the first edition) of CSTA was released in 1992. The CSTA standard has since progressed through several editions, incorporating technologies such as voice response and XML. Each phase of the CSTA standard includes both a call model and a recommended set of communication protocols. (See the references below for more information about the early work on CTI standards.) The initial CTI software from Digital Equipment supported only VAX computers running the VMS operating system, both as the server providing the telephone system interconnection and as application clients. Support for Digital's ULTRIX operating system was later added. During the early 1990s, however, personal desktop computers became more pervasive in corporate computing environments. Digital Equipment responded to this trend by releasing Pathworks, a networking suite that allowed IBM-compatible PCs and Apple Macintosh computers to participate in a DECnet network. The Digital Equipment CIT team built on this platform and released a PC-based version of the CIT client that supported applications written for the Microsoft Windows operating system. The Windows desktop environment offered easier integration between business applications and telephone functions via the Microsoft Dynamic Data Exchange (DDE) mechanism, since many off-the-shelf PC applications and development platforms supported the DDE interface. Dialogic Acquires Digital's Technology and Team In 1995, Digital Equipment sold the CIT product line to Dialogic Corporation, which was then the leader in server-scale telephone interfaces for the PC form factor.
Digital's CIT management staff and most of the development team moved to Dialogic with the sale, forming a new 'CT Division' within Dialogic and maintaining the product's momentum. Dialogic converted the product from its original proprietary Digital Equipment hardware and software platform to an industry-standard platform based on personal computer hardware and the Microsoft Windows family of operating systems. The resulting product, rechristened CT-Connect, was released by Dialogic in August 1995. (The hyphen was dropped from the name in later releases.) Dialogic Acquired by Intel In 1999, Intel acquired Dialogic and all of its hardware and software product lines, including CT Connect. As with the transition from Digital Equipment to Dialogic, most of the CT Connect business and technical team remained after the acquisition, and product development and sales continued. CT Connect was renamed Intel NetMerge Call Processing Software to conform to Intel product naming conventions but remained the same product under the skin. The late 1990s saw the rising popularity of Voice over Internet Protocol (VoIP) telephony. Realizing that CTI would be as important with VoIP as it had been with traditional telephony, the CT Connect team enhanced CT Connect to support application control of VoIP voice calls. Intel was issued 11 US patents related to this work. CT Connect Acquired By Envox Worldwide Intel sold the product line to Envox Worldwide in 2005. As before, much of the technical team continued with the product during the transition. Envox restored the product name to CT Connect and continued to market it. Syntellect, Inc. acquires Envox On October 20, 2008, Envox was acquired by Syntellect, Inc., a call-center-focused software company headquartered in Phoenix, Arizona. References For a detailed explanation of client-server CTI technology and CTI software, including the role of CT Connect, see Margulies, E. (1997). Client Server Computer Telephony. Lawrence: CMP Books. For another detailed explanation of CT Connect's client-server architecture and its ability to support multiple CTI APIs, see Carden, C. (1997). Understanding Computer Telephony: How To Voice Enable Databases From PCs to LANs to Mainframes. New York: Flatiron Publishing (now Elsevier Science Ltd.). Grigonis, Richard (2000). Computer Telephony Encyclopedia. Gilroy, CA: CMP Books. Newton, Harry (2008). Newton's Telecom Dictionary. New York: Flatiron Publishing. Telephone service enhanced features
61099702
https://en.wikipedia.org/wiki/Salut%20%C3%A0%20Toi%20%28software%29
Salut à Toi (software)
Salut à Toi (SàT) is a multifunctional communications application and decentralized social network published under the AGPL-3.0-or-later license. It uses the XMPP protocol. Initially made for instant messaging and chat, it has developed additional functionality for microblogging, blogging, file sharing, audio and video streaming, and gaming. It has Atom feeds, and both WYSIWYM and WYSIWYG editors. Architecture Salut à Toi uses a client-server architecture. The client consists of a backend daemon (which can be installed locally or on a server) and one of several frontends. Frontends include: jp, a command-line interface Cagou, a frontend for desktops and mobile phones Libervia, a web interface Primitivus, a text-based user interface Third-party frontends include: Wix, a wxWidgets-based desktop GUI (now deprecated) Bellaciao, a Qt-based graphical user interface (development on hold) Sententia, an Emacs frontend (development is currently stalled) References External links Official site Conference presentation at Linuxfr (fr). Quiz for Libervia - Salut à toi at opengameart.org, a game based on Salut à toi (en). XMPP clients Free XMPP clients Microblogging software Free instant messaging clients Free software programmed in Python 2009 software Linux software Android (operating system) software Cross-platform free software
1101705
https://en.wikipedia.org/wiki/Argonaut%20Games
Argonaut Games
Argonaut Games was a British video game developer founded in 1982, most notable for the development of the Super NES video game Star Fox and its supporting Super FX hardware, as well as for developing Croc: Legend of the Gobbos and the Starglider series. The company was liquidated in late 2004, and ceased to exist in early 2007. History Founded as Argonaut Software by teenager Jez San in 1982, the company took its name from a play on his name (J. San) and the mythological story of Jason and the Argonauts. Its head offices were in Colindale, London, and later in Argonaut House in Edgware, London. Its U.S. head office was in Woodside, California, in the San Francisco Bay Area. In 1990, Argonaut collaborated with Nintendo during the early years of the NES and SNES, one notable incident being Argonaut's submission to Nintendo of a proof-of-concept method for defeating the Game Boy's copyright protection mechanism. The combined efforts of Nintendo and Argonaut yielded a prototype of the game Star Fox, initially codenamed "SnesGlider" and inspired by Argonaut's earlier Atari ST and Amiga game Starglider, which they had running on the NES and, some weeks later, on a prototype SNES. Jez San told Nintendo that his team could improve the performance or functionality of the demonstration only if Nintendo allowed Argonaut to design custom hardware extending the SNES with true 3D capability. Nintendo agreed, so San hired chip designers and made the Super FX chip. They originally codenamed it the Mathematical Argonaut Rotation I/O, or "MARIO", as is printed on the chip's surface. The Super FX chip used to create the graphics and gameplay was so powerful that the team joked the Super NES was just a box to hold the chip. After building the Super FX, Argonaut designed several different chips for other companies' video game machines, which were never released. These included machines codenamed GreenPiece and CD-I 2 for Philips, the platform codenamed VeggieMagic for Apple and Toshiba, and Hasbro's "virtual reality" game system codenamed MatriArc. In 1995, Argonaut Software was split into Argonaut Technologies Limited (ATL) and Argonaut Software Limited (ASL). With space at a premium at the office on Colindale Avenue, ATL was relocated to an office on the top floor of a separate building, Capitol House on Capitol Way, just around the corner. There, the team continued the design of CPU and GPU products and maintained "BRender", Argonaut's proprietary software 3D engine. They won a chip design project with LSI Logic for a potential PlayStation 2 design, and LSI Logic became a minor investor in Argonaut. In 1996, John Edelson was hired as the company's General Manager and ran the group for two years. Capital was raised in 1996–1998 from Tom Teichman and Apax Partners. According to Jez San, Argonaut remained an independent developer by choice, and had turned down several buyout offers. In 1997, the two arms of the company once again shared an office as the entire company was moved to a new building in Edgware. In September 1997, Croc: Legend of the Gobbos was released by Fox Interactive for the PlayStation and Sega Saturn; a PC version of the game was later released in 1998. In 1998, ATL was rebranded ARC after the name of its main product, the Argonaut RISC Core, and became an independent company spun off to the same shareholders. ARC was an embedded IP provider, and Bob Terwilliger was engaged as its President. Argonaut Software Limited became Argonaut Games and was floated in 1999.
In late October 2004, Argonaut Games called in receivers David Rubin & Partners, laid off 100 employees, and was put up for sale. Many former employees would join the newly established developer Rocksteady Studios. A lack of a consistent stream of publishing deals had led to cash-flow issues and a profit warning earlier that year. In 2005, the company entered liquidation and was dissolved in early 2007. BRender BRender (an abbreviation of "Blazing Renderer") is a development toolkit and a real-time 3D graphics engine for computer games, simulators, and graphic tools. It was developed and licensed by Argonaut Software. The engine supported Intel's MMX instruction set and ran on the Microsoft Windows, MS-DOS and PlayStation platforms; support for 3D hardware graphics accelerator cards was added later. Software made with BRender includes Carmageddon, Croc: Legend of the Gobbos, FX Fighter, I-War, and 3D Movie Maker. Games developed Cancelled games References External links An overview of Argonaut Games at Games Investor Defunct video game companies of the United Kingdom Defunct companies based in London Video game companies established in 1982 Video game companies disestablished in 2004 Video game development companies 1982 establishments in England 2004 disestablishments in England Star Fox
41650285
https://en.wikipedia.org/wiki/Hashcat
Hashcat
Hashcat is a password recovery tool. It had a proprietary code base until 2015, but was then released as open source software. Versions are available for Linux, OS X, and Windows. Examples of hashcat-supported hashing algorithms are LM hashes, MD4, MD5, SHA-family and Unix Crypt formats as well as algorithms used in MySQL and Cisco PIX. Hashcat has attracted public attention because of its optimizations, which are partly based on flaws in other software discovered by the creator of hashcat. An example was a flaw in 1Password's password manager hashing scheme. It has also been compared to similar software in a Usenix publication and been described on Ars Technica. Variants Previously, two variants of hashcat existed: hashcat - CPU-based password recovery tool oclHashcat/cudaHashcat - GPU-accelerated tool (OpenCL or CUDA) With the release of hashcat v3.00, the GPU and CPU tools were merged into a single tool called hashcat. The CPU-only version became hashcat-legacy. Both CPU and GPU now require OpenCL. Many of the algorithms supported by hashcat-legacy (such as MD5, SHA1, and others) can be cracked in a shorter time with the GPU-based hashcat. However, not all algorithms can be accelerated by GPUs; bcrypt is an example of this. Due to factors such as data-dependent branching, serialization, and memory usage (among others), oclHashcat/cudaHashcat were not catch-all replacements for hashcat-legacy. hashcat-legacy is available for Linux, OS X and Windows. hashcat is available for macOS, Windows, and Linux with GPU, CPU and generic OpenCL support, which allows for FPGAs and other accelerator cards. Sample output $ hashcat -d 2 -a 0 -m 400 -O -w 4 example400.hash example.dict hashcat (v5.1.0) starting... OpenCL Platform #1: Intel(R) Corporation ======================================== * Device #1: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz, skipped. OpenCL Platform #2: NVIDIA Corporation ====================================== * Device #2: GeForce GTX 970, 1010/4041 MB allocatable, 13MCU * Device #3: GeForce GTX 750 Ti, skipped. Hashes: 1 digests; 1 unique digests, 1 unique salts Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates Rules: 1 Applicable optimizers: * Optimized-Kernel * Zero-Byte * Single-Hash * Single-Salt Minimum password length supported by kernel: 0 Maximum password length supported by kernel: 55 Watchdog: Temperature abort trigger set to 90c Dictionary cache hit: * Filename..: example.dict * Passwords.: 128416 * Bytes.....: 1069601 * Keyspace..: 128416 The wordlist or mask that you are using is too small. This means that hashcat cannot use the full parallel power of your device(s). Unless you supply more work, your cracking speed will drop. For tips on supplying more work, see: https://hashcat.net/faq/morework Approaching final keyspace - workload adjusted.
$H$9y5boZ2wsUlgl2tI6b5PrRoADzYfXD1:hash234 Session..........: hashcat Status...........: Cracked Hash.Type........: phpass, WordPress (MD5), phpBB3 (MD5), Joomla (MD5) Hash.Target......: $H$9y5boZ2wsUlgl2tI6b5PrRoADzYfXD1 Time.Started.....: Thu Apr 25 05:10:35 2019 (0 secs) Time.Estimated...: Thu Apr 25 05:10:35 2019 (0 secs) Guess.Base.......: File (example.dict) Guess.Queue......: 1/1 (100.00%) Speed.#2.........: 2654.9 kH/s (22.24ms) @ Accel:128 Loops:1024 Thr:1024 Vec:1 Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts Progress.........: 128416/128416 (100.00%) Rejected.........: 0/128416 (0.00%) Restore.Point....: 0/128416 (0.00%) Restore.Sub.#2...: Salt:0 Amplifier:0-1 Iteration:1024-2048 Candidates.#2....: 0 -> zzzzzzzzzzz Hardware.Mon.#2..: Temp: 44c Fan: 40% Util: 50% Core:1265MHz Mem:3004MHz Bus:8 Started: Thu Apr 25 05:10:32 2019 Stopped: Thu Apr 25 05:10:37 2019 Attack types Hashcat offers multiple attack modes for obtaining effective and complex coverage over a hash's keyspace. These modes are: Brute-force attack Combinator attack Dictionary attack Fingerprint attack Hybrid attack Mask attack Permutation attack Rule-based attack Table-Lookup attack (CPU only) Toggle-Case attack PRINCE attack (in CPU version 0.48 and higher only) The traditional brute-force attack is considered outdated, and the Hashcat core team recommends the mask attack as a full replacement (an illustrative invocation is given below). Competitions Team Hashcat (the official team of the Hashcat software, composed of core Hashcat members) won first place in the KoreLogic "Crack Me If You Can" competitions at DefCon in 2010, 2012, 2014, 2015, and 2018, and at DerbyCon in 2017. See also Brute-force attack Brute-force search Hacker (computer security) Hacking tool Openwall Project Password cracking References External links A guide to password cracking with Hashcat Talk: Confessions of a crypto cluster operator based on oclHashcat at Derbycon 2015 Talk: Hashcat state of the union at Derbycon 2016 Password cracking software Free security software Formerly proprietary software
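As an illustration of the mask attack recommended above under Attack types (the hash file name here is a placeholder; the flags are standard hashcat options), cracking raw MD5 hashes of eight-character passwords made of six lowercase letters followed by two digits could be invoked as:

$ hashcat -a 3 -m 0 -O -w 4 example0.hash ?l?l?l?l?l?l?d?d

Here -a 3 selects the mask attack mode, -m 0 selects raw MD5, and each ?l or ?d position in the mask stands for one lowercase letter or digit, so only the candidates matching that pattern (26^6 × 10^2 of them) are tried rather than the full keyspace.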