https://en.wikipedia.org/wiki/Cpio
Cpio
cpio is a general file archiver utility and its associated file format. It is primarily installed on Unix-like computer operating systems. The software utility was originally intended as a tape archiving program as part of the Programmer's Workbench (PWB/UNIX), and has been a component of virtually every Unix operating system released thereafter. Its name is derived from the phrase copy in and out, in close description of the program's use of standard input and standard output in its operation. All variants of Unix also support other backup and archiving programs, such as tar, which has become more widely recognized. The use of cpio by the RPM Package Manager, in the initramfs program of Linux kernel 2.6, and in Apple's Installer (pax) makes cpio an important archiving tool. Since its original design, cpio and its archive file format have undergone several, sometimes incompatible, revisions. Most notable is the change, now an operational option, from a binary format of archive file meta information to an ASCII-based representation.

History
cpio appeared in Version 7 Unix as part of the Programmer's Workbench project.

Operation and archive format
cpio was originally designed to store backup file archives on a tape device in a sequential, contiguous manner. It does not compress any content, but resulting archives are often compressed using gzip or other external compressors.

Archive creation
When creating archives during the copy-out operation, initiated with the -o command line flag, cpio reads file and directory path names from its standard input channel and writes the resulting archive byte stream to its standard output. cpio is therefore typically used with other utilities that generate the list of files to be archived, such as the find program. The resulting cpio archive is a sequence of files and directories concatenated into a single archive, separated by header sections with file meta information, such as filename, inode number, ownership, permissions, and timestamps. By convention, the file name of an archive is usually given the extension .cpio. This example uses the find utility to generate a list of path names starting in the current directory to create an archive of the directory tree:

$ find . -depth -print | cpio -o > /path/archive.cpio
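Because cpio itself performs no compression, the archive stream is commonly piped through an external compressor as it is written. A minimal sketch, assuming GNU find, cpio, and gzip are available (the archive path is illustrative):

$ find . -depth -print | cpio -o | gzip > /path/archive.cpio.gz

Restoring such an archive reverses the pipeline, for example gzip -dc /path/archive.cpio.gz | cpio -i -d.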
Extraction
During the copy-in operation, initiated by the -i command line flag, cpio reads an archive from its standard input and recreates the archived files in the operating system's file system.

$ cpio -i -vd < archive.cpio

The -d command line flag tells cpio to construct directories as necessary. The -v flag (verbose) lists file names as they are extracted. Any remaining command line arguments other than the option flags are shell-like globbing patterns; only files in the archive with matching names are copied from the archive. The following example extracts the file /etc/fstab from the archive:

$ cpio -i -d /etc/fstab < archive.cpio

List
The files contained in a cpio archive may be listed with this invocation:

$ cpio -t < archive.cpio

Listing may be useful since a cpio archive may contain absolute rather than relative paths (e.g., /bin/ls vs. bin/ls).

Copy
cpio supports a third type of operation, which copies files. It is initiated with the pass-through option flag (-p). This mode combines the copy-out and copy-in steps without actually creating any file archive. In this mode, cpio reads path names on standard input like the copy-out operation, but instead of creating an archive, it recreates the directories and files at a different location in the file system, as specified by the path given as a command line argument. This example copies the directory tree starting at the current directory to another path new-path in the file system, preserving file modification times (flag -m), creating directories as needed (-d), replacing any existing files unconditionally (-u), while producing a verbose listing on standard output (-v):

$ find . -depth -print | cpio -p -dumv new-path

POSIX standardization
The cpio utility is standardized in POSIX.1-1988, but was omitted from POSIX.1-2001 because of its file size (and other) limitations. For example, the GNU version offers various output format options, such as "bin" (the default, now obsolete) and "ustar" (POSIX tar), which have file size limits of 2,147,483,647 bytes (2 GB) and 8,589,934,591 bytes (8 GB), respectively. The cpio, ustar, and pax file formats are defined by POSIX.1-2001 for the pax utility, which remains in POSIX 1003.1-2008 and can therefore read and write cpio- and ustar-formatted archives.
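GNU cpio selects among these archive formats with its -H option. A minimal sketch, assuming GNU cpio and find (the archive path is illustrative):

$ find . -depth -print | cpio -o -H ustar > /path/archive.tar

The same option can be given on extraction, although GNU cpio normally detects the format of an archive it reads automatically.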
Implementations
Most Linux distributions provide the GNU version of cpio. FreeBSD and macOS use the BSD-licensed bsdcpio provided with libarchive.

See also
List of Unix commands
List of archive formats

References

1977 software Unix archivers and compression-related utilities Free backup software Archive formats File archivers GNU Project software
https://en.wikipedia.org/wiki/GBBS
GBBS
GBBS is a bulletin board system (BBS) program for the Apple II. Its first series, named GBBS, was written in Applesoft and used by boards such as Demon Roach Underground in Lubbock, Texas. Its successor, GBBS Pro, was ACOS-based. GBBS Pro was used by boards like ProBOARD II in Paso Robles, California, Scotland Yard GBBS/AE Pro in Cincinnati, Ohio, No Earthly Connection in Blue Ridge, Georgia, and Apple Elite II in Riverside, California. GBBS (literally: Greg's Bulletin Board System) was written by Greg Schaefer, who later authored the terminal emulation program ProTERM. The GBBS Pro system was based on the ACOS compiler and language. ACOS was a BASIC-like language in which modem-handling routines replaced some of the standard BASIC functions. Arrays, for instance, were absent from ACOS, so it was necessary to work around such limitations in other ways (files stood in for arrays). GBBS systems could be highly customized and modified. Mods were shared between sysops (system operators), and even ProTERM support was given via a GBBS system.

History
The Apple IIe ran the first GBBS systems, and later the Apple IIGS became the gaming platform and multi-line answer that GBBS sysops had long searched for. John P. Edwards (by then a good friend of Schaefer) had a small GBBS set up for a short time on an Apple IIc with only a 140 kB (5.25-inch) floppy disk (though the IIc had 128 kB of RAM compared to 64 kB for the IIe). The processor of an Apple IIe was 8-bit and ran at 1 MHz, and ProDOS, the operating system, took 32 kB of storage space. (Compare that to today's processors, operating systems, and memory.) A fairly extensive amount of source code was written for the ACOS compiler, and much of it is still available today. Several GBBS systems are still running today (for example, Lost Gonzo BBS: https://web.archive.org/web/20090303230939/http://qxiu.com/MN/491300-lost_gonzo_bbs.htm ). The GBBS software is still available for the Apple II and other systems that emulate the Apple II. GBBS was also ported to run on the IBM PC XT using BASICA or GWBASIC. MACOS was a popular takeoff of GBBS and later became METAL, which was very ACOS-like and even more powerful. Custom-modified GBBS systems featured email (with receipts) and message boards as well as online games and dating software; they were run by hobbyists and professionals for various reasons. Edwards once set up and modified a GBBS for an Air Force F-15 pilot stationed in Duran, Saudi Arabia, during the first Gulf War, logging in through an 800 number at 300 bit/s from his home on the central California coast in Paso Robles to customize the Duran GBBS to his standards, which was fairly unheard of in that day. That system in Duran became the unofficial means of communication between troops and loved ones back home. Asking only that his name be mentioned, Edwards donated the work and the software. The Duran GBBS became the model upon which other such troop-support boards were built. Sysops from Johannesburg, South Africa, and other places around the world called into the ProBOARD II for mods and support (in 1986 that was really something). The good deed did not go unrewarded as word spread. ("That put us on the map," Edwards says of the experience.) State-of-the-art GBBS systems had by 1989 adopted OggNET (by Paul Parkhurst), which effectively networked GBBS systems via a hub-and-spoke network; GBBS users on one system could email users on another GBBS as early as 1989. Messages posted to the bulletin boards on one GBBS would be sent to the other GBBS systems on OggNET, and users from various systems could post to the same threads and share information across the United States and several other countries. Gateways were programmed to interface with FidoNet and other networked BBS systems. The ProBOARD II GBBS (JpE's BBS), the worldwide support site for GBBS Pro circa 1986-1995, was sold to and is on display at The Computer Museum in Boston, Massachusetts. This and other GBBS systems were by 1990 so heavily modified as to be barely recognizable as GBBS systems (via PSE or ANSI emulation and graphics). John P. Edwards ("Surfer Joe", who ran the ProBOARD II GBBS sysop support site) was also co-designer and developer of ProTERM Mac with Greg Schaefer and was heavily involved in both GBBS and ProTERM for the Apple II in the early days of the telephone modem (300-1200 bit/s). Trying to figure out what would follow email, it was Edwards who first created something he called F-Mail, in which a file could be attached to a short note explaining what it was (i.e., attachments). F-Mail was then modified to work with OggNET, and users could send F-Mail attachments across the multi-system network of Apple II computers. Thus, file attachments were born. In 1990, a Y2K-like bug surfaced in the ACOS language which revealed that dates beyond 1989 were not supported: the year 1989 was followed by the year "198:", colon being the next character in ASCII after 9. Early attempts to reset the date to 1990 only resulted in a year of "199:". The problem was only cosmetic, but it stood out, revealing a system as not up to date and possibly running pirated code.
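The failure is plain character arithmetic, which can be illustrated outside ACOS. A minimal sketch in a POSIX shell (printf prints the code of a character when the argument begins with a quote):

$ printf '%d %d\n' "'9" "':"
57 58

Because ':' (code 58) immediately follows '9' (code 57) in ASCII, incrementing the final digit character of "1989" yields "198:" rather than carrying over to 1990.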
The official fix was to upgrade to the next version of GBBS Pro, which sent a GBBS Pro copyright statement to the user before hanging up the modem. This upset some operators of the software who ran their own source written for the ACOS interpreter rather than the stock GBBS Pro source files, as there was no way to suppress the copyright statement. Code was provided to detokenize programs back into text source so that lost source could be recovered, but the original version of this code had two errors: two different tokens were decoded into the same instruction, and syntactic whitespace was omitted from another. These errors prevented immediate retokenization and interpretation of the data until they were corrected. Comments and nonpublic labels were also not recovered, as they had not been encoded; nonpublic labels were replaced with generic label names.

References

External links
GBBS 1.6 manual

Apple II software Bulletin board system software
https://en.wikipedia.org/wiki/Katherine%20Johnson
Katherine Johnson
Katherine Johnson (née Coleman; August 26, 1918 – February 24, 2020) was an American mathematician whose calculations of orbital mechanics as a NASA employee were critical to the success of the first and subsequent U.S. crewed spaceflights. During her 33-year career at NASA and its predecessor, she earned a reputation for mastering complex manual calculations and helped pioneer the use of computers to perform the tasks. The space agency noted her "historical role as one of the first African-American women to work as a NASA scientist". Johnson's work included calculating trajectories, launch windows, and emergency return paths for Project Mercury spaceflights, including those for astronauts Alan Shepard, the first American in space, and John Glenn, the first American in orbit, as well as rendezvous paths for the Apollo Lunar Module and command module on flights to the Moon. Her calculations were also essential to the beginning of the Space Shuttle program, and she worked on plans for a mission to Mars. She was known as a "human computer" for her tremendous mathematical capability and her ability to work out space trajectories with the limited technology of the time, despite receiving little recognition. In 2015, President Barack Obama awarded Johnson the Presidential Medal of Freedom. In 2016, she was presented with the Silver Snoopy Award by NASA astronaut Leland D. Melvin and a NASA Group Achievement Award. She was portrayed by Taraji P. Henson as a lead character in the 2016 film Hidden Figures. In 2019, Johnson was awarded the Congressional Gold Medal by the United States Congress. In 2021, she was inducted into the National Women's Hall of Fame.

Early life
Katherine Johnson was born as Creola Katherine Coleman on August 26, 1918, in White Sulphur Springs, West Virginia, to Joylette Roberta (née Lowe) and Joshua McKinley Coleman. She was the youngest of four children. Her mother was a teacher, and her father was a lumberman, farmer, and handyman who worked at the Greenbrier Hotel. Johnson showed strong mathematical abilities from an early age. Because Greenbrier County did not offer public schooling for African-American students past the eighth grade, the Colemans arranged for their children to attend high school in Institute, West Virginia. This school was on the campus of West Virginia State College (WVSC). Johnson was enrolled when she was ten years old. The family split their time between Institute during the school year and White Sulphur Springs in the summer. After graduating from high school at 14, Johnson enrolled at West Virginia State, a historically black college. As a student, she took every math course offered by the college. Multiple professors mentored her, including the chemist and mathematician Angie Turner King, who had mentored her throughout high school, and W. W. Schieffelin Claytor, the third African-American to receive a Ph.D. in mathematics. Claytor added new mathematics courses just for Johnson. She graduated summa cum laude in 1937, at age 18, with degrees in mathematics and French. Johnson was a member of Alpha Kappa Alpha. She took a teaching job at a black public school in Marion, Virginia. In 1939, after marrying her first husband, James Goble, she left her teaching job and enrolled in a graduate math program; she quit one year later after becoming pregnant and chose to focus on her family life. She was the first African-American woman to attend graduate school at West Virginia University in Morgantown, West Virginia. Through WVSC's president, Dr. John W. Davis, she became one of three African-American students, and the only woman, selected to integrate the graduate school after the 1938 United States Supreme Court ruling Missouri ex rel. Gaines v. Canada.
The court ruled that states that provided public higher education to white students also had to provide it to black students, to be satisfied either by establishing black colleges and universities or by admitting black students to previously white-only universities.

Career
Johnson decided on a career as a research mathematician, although this was a difficult field for African Americans and women to enter. The first jobs she found were in teaching. At a family gathering in 1952, a relative mentioned that the National Advisory Committee for Aeronautics (NACA) was hiring mathematicians. At the Langley Memorial Aeronautical Laboratory, based in Hampton, Virginia, near Langley Field, NACA hired African-American mathematicians as well as whites for their Guidance and Navigation Department. Johnson accepted a job offer from the agency in June 1953. According to an oral history archived by the National Visionary Leadership Project:

At first she [Johnson] worked in a pool of women performing math calculations. Katherine has referred to the women in the pool as virtual "computers who wore skirts". Their main job was to read the data from the black boxes of planes and carry out other precise mathematical tasks. Then one day, Katherine (and a colleague) were temporarily assigned to help the all-male flight research team. Katherine's knowledge of analytic geometry helped make quick allies of male bosses and colleagues to the extent that, "they forgot to return me to the pool". While the racial and gender barriers were always there, Katherine says she ignored them. Katherine was assertive, asking to be included in editorial meetings (where no women had gone before). She simply told people she had done the work and that she belonged.

From 1953 to 1958, Johnson worked as a computer, analyzing topics such as gust alleviation for aircraft. Originally assigned to the West Area Computers section supervised by mathematician Dorothy Vaughan, Johnson was reassigned to the Guidance and Control Division of Langley's Flight Research Division. It was staffed by white male engineers. In keeping with state racial segregation laws, and federal workplace segregation introduced under President Woodrow Wilson in the early 20th century, Johnson and the other African-American women in the computing pool were required to work, eat, and use restrooms that were separate from those of their white peers. Their office was labeled as "Colored Computers". In an interview with WHRO-TV, Johnson stated that she "didn't feel the segregation at NASA, because everybody there was doing research. You had a mission and you worked on it, and it was important to you to do your job ... and play bridge at lunch." She added: "I didn't feel any segregation. I knew it was there, but I didn't feel it." NACA disbanded the colored computing pool in 1958 when the agency was superseded by NASA, which adopted digital computers. Although the installation was desegregated, forms of discrimination were still pervasive. Johnson recalled that era:

We needed to be assertive as women in those days – assertive and aggressive – and the degree to which we had to be that way depended on where you were. I had to be. In the early days of NASA women were not allowed to put their names on the reports – no woman in my division had had her name on a report.
I was working with Ted Skopinski and he wanted to leave and go to Houston ... but Henry Pearson, our supervisor – he was not a fan of women – kept pushing him to finish the report we were working on. Finally, Ted told him, "Katherine should finish the report, she's done most of the work anyway." So Ted left Pearson with no choice; I finished the report and my name went on it, and that was the first time a woman in our division had her name on something.

From 1958 until her retirement in 1986, Johnson worked as an aerospace technologist, moving during her career to the Spacecraft Controls Branch. She calculated the trajectory for the May 5, 1961 space flight of Alan Shepard, the first American in space. She also calculated the launch window for his 1961 Mercury mission. She plotted backup navigation charts for astronauts in case of electronic failures. When NASA used electronic computers for the first time to calculate John Glenn's orbit around Earth, officials called on Johnson to verify the computer's numbers; Glenn had asked for her specifically and had refused to fly unless Johnson verified the calculations. Biography.com states these were "far more difficult calculations, to account for the gravitational pulls of celestial bodies". Author Margot Lee Shetterly stated, "So the astronaut who became a hero, looked to this black woman in the still-segregated South at the time as one of the key parts of making sure his mission would be a success." She added that, in a time where computing was "women's work" and engineering was left to men, "it really does have to do with us over the course of time sort of not valuing that work that was done by women, however necessary, as much as we might. And it has taken history to get a perspective on that."

Johnson later worked directly with digital computers. Her ability and reputation for accuracy helped to establish confidence in the new technology. In 1961, her work helped to ensure that Alan Shepard's Freedom 7 Mercury capsule would be found quickly after landing, using the accurate trajectory that had been established. She also helped to calculate the trajectory for the 1969 Apollo 11 flight to the Moon. During the Moon landing, Johnson was at a meeting in the Pocono Mountains. She and a few others crowded around a small television screen watching the first steps on the Moon. In 1970, Johnson worked on the Apollo 13 Moon mission. When the mission was aborted, her work on backup procedures and charts helped set a safe path for the crew's return to Earth, creating a one-star observation system that would allow astronauts to determine their location with accuracy. In a 2010 interview, Johnson recalled, "Everybody was concerned about them getting there. We were concerned about them getting back." Later in her career, Johnson worked on the Space Shuttle program, the Earth Resources Satellite, and on plans for a mission to Mars. Johnson spent her later years encouraging students to enter the fields of science, technology, engineering, and mathematics (STEM).

Personal life and death
Katherine and James Francis Goble had three daughters: Constance, Joylette, and Katherine. The family lived in Newport News, Virginia, from 1953. James died of an inoperable brain tumor in 1956 and, three years later, Katherine married James A. "Jim" Johnson, a United States Army officer and veteran of the Korean War; the pair were married for 60 years until his death in March 2019 at the age of 93.
Johnson, who had six grandchildren and 11 great-grandchildren, lived in Hampton, Virginia. She encouraged her grandchildren and students to pursue careers in science and technology. She was a member of Carver Memorial Presbyterian Church for 50 years, where she sang as part of the choir. She was also a member of the Alpha Kappa Alpha Sorority. Johnson died at a retirement home in Newport News on February 24, 2020, at age 101. Following her death, Jim Bridenstine, NASA's administrator, described her as "an American hero" and stated that "her pioneering legacy will never be forgotten."

Legacy and honors
Johnson co-authored 26 scientific papers. Her social influence as a pioneer in space science and computing is demonstrated by the honors she received and her status as a role model for a life in science. Johnson was named West Virginia State College Outstanding Alumnus of the Year in 1999. President Barack Obama presented her with the Presidential Medal of Freedom, one of 17 Americans so honored on November 24, 2015. She was cited as a pioneering example of African-American women in STEM. President Obama said at the time, "Katherine G. Johnson refused to be limited by society's expectations of her gender and race while expanding the boundaries of humanity's reach." NASA noted her "historical role as one of the first African-American women to work as a NASA scientist."

Two NASA facilities have been named in her honor. On May 5, 2016, a new building was named the "Katherine G. Johnson Computational Research Facility" and formally dedicated at the agency's Langley Research Center in Hampton, Virginia. The facility officially opened its doors on September 22, 2017. Johnson attended this event, which also marked the 55th anniversary of astronaut Alan Shepard's historic rocket launch and splashdown, a success Johnson helped achieve. At the ceremony, deputy director Lewin said this about Johnson: "Millions of people around the world watched Shepard's flight, but what they didn't know at the time was that the calculations that got him into space and safely home were done by today's guest of honor, Katherine Johnson". During the event, Johnson also received a Silver Snoopy award; often called the astronaut's award, NASA stated it is given to those "who have made outstanding contributions to flight safety and mission success". NASA renamed the Independent Verification and Validation Facility, in Fairmont, West Virginia, to the Katherine Johnson Independent Verification and Validation Facility on February 22, 2019.

Johnson was included on the BBC's list of 100 Women of influence worldwide in 2016. In a 2016 video NASA stated, "Her calculations proved as critical to the success of the Apollo Moon landing program and the start of the Space Shuttle program, as they did to those first steps on the country's journey into space." Science writer Maia Weinstock developed a prototype Lego for Women of NASA in 2016 and included Johnson; she declined to have her likeness printed on the final product. On May 12, 2018, she was awarded an honorary doctorate by the College of William & Mary. In August 2018, West Virginia State University established a STEM scholarship in honor of Johnson and erected a life-size statue of her on campus. Mattel announced a Barbie doll in Johnson's likeness with a NASA identity badge in 2018. In 2019, Johnson was announced as one of the members of the inaugural class of the Government Executive Government Hall of Fame.
In June 2019, George Mason University named the largest building on its SciTech campus Katherine G. Johnson Hall. In 2020, the Bethel School District in Washington named its newest school Katherine G. Johnson Elementary. On November 2, 2020, Fairfax County Public Schools (the largest school division in the Commonwealth of Virginia and the 12th-largest in the United States) and the City of Fairfax, Virginia, announced that the latter's school board had voted to rename its middle school, previously named after Confederate soldier, poet, and musician Sidney Lanier, as Katherine Johnson Middle School (KJMS), after 85 percent of residents voiced their support. KJMS is located close to the City of Fairfax Historical District, a pivotal location in early American history. On November 6, 2020, a satellite named after her (ÑuSat 15 or "Katherine", COSPAR 2020-079G) was launched into space. In February 2021, Northrop Grumman named its Cygnus NG-15 spacecraft, which supplied the International Space Station, the SS Katherine Johnson in her honor. In 2021, the San Juan Unified School District in Sacramento, California, named its newest school Katherine Johnson Middle School.

Depiction in media
The highly acclaimed film Hidden Figures, released in December 2016, was based on the non-fiction book of the same title by Margot Lee Shetterly, which was published earlier that year. It follows Johnson and other female African-American mathematicians (Mary Jackson and Dorothy Vaughan) who worked at NASA. Taraji P. Henson plays Johnson in the film. Appearing alongside Henson at the 89th Academy Awards, Johnson received a standing ovation from the audience. In an earlier interview, Johnson offered the following comment about the movie: "It was well done. The three leading ladies did an excellent job portraying us." In a 2016 episode of the NBC series Timeless, titled "Space Race", the mathematician is portrayed by Nadine Ellis.

Awards
1971, 1980, 1984, 1985, 1986: NASA Langley Research Center Special Achievement award
1977: NASA Group Achievement Award, presented to the Lunar Spacecraft and Operations team for pioneering work in the field of navigation supporting the spacecraft that orbited and mapped the Moon in preparation for the Apollo program
1998: Honorary Doctor of Laws from SUNY Farmingdale
1999: West Virginia State College Outstanding Alumnus of the Year
2006: Honorary Doctor of Science from Capitol College, Laurel, Maryland
2010: Honorary Doctorate of Science from Old Dominion University, Norfolk, Virginia
2014: De Pizan Honor from the National Women's History Museum
2015: NCWIT Pioneer in Tech Award
2015: Presidential Medal of Freedom
2016: Silver Snoopy award from Leland Melvin
2016: Astronomical Society of the Pacific's Arthur B.C. Walker II Award
2016: Presidential Honorary Doctorate of Humane Letters from West Virginia University, Morgantown, West Virginia
December 1, 2016: Langley West Computing Unit NASA Group Achievement Award, received at a reception at the Virginia Air and Space Center; other awardees included her colleagues Dorothy Vaughan and Mary Jackson
2017: Daughters of the American Revolution (DAR) Medal of Honor
2017: Honorary Doctorate from Spelman College
May 12, 2018: Honorary Doctorate of Science from the College of William & Mary, Williamsburg, Virginia
April 29, 2019: Philosophiae Doctor Honoris Causa, conferred by the University of Johannesburg and its Faculty of Science for her pioneering role at NASA
November 8, 2019: Congressional Gold Medal
2021: Induction into the National Women's Hall of Fame

See also
Annie Easley, mathematician
List of African-American women in STEM fields
List of West Virginia University alumni
Mathematical Tables Project, pioneering human computer group
Timeline of women in science

References

Further reading
Beverly Golemba, Human Computers: The Women in Aeronautical Research, unpublished manuscript, 1994, NASA Langley Archives.
Brigham Narins, Notable Scientists: From 1900 to the Present, Gale Group, 2001.

External links
Katherine G. Johnson Video produced by Makers: Women Who Make America
What Matters; Katherine Johnson: NASA Pioneer and "Computer" WHRO, American Archive of Public Broadcasting (GBH and the Library of Congress), Boston, MA and Washington, DC

1918 births 2020 deaths African-American mathematicians African-American schoolteachers Schoolteachers from West Virginia American computer scientists Presbyterians from Virginia American women mathematicians 20th-century American mathematicians 21st-century American mathematicians NASA people West Area Computers People from Hampton, Virginia People from White Sulphur Springs, West Virginia Scientists from West Virginia West Virginia State University alumni West Virginia University alumni American women computer scientists African-American computer scientists Congressional Gold Medal recipients Presidential Medal of Freedom recipients American women physicists 20th-century American physicists 21st-century American physicists 20th-century American women scientists 21st-century American women scientists BBC 100 Women American centenarians African-American centenarians 20th-century women mathematicians 21st-century women mathematicians Mathematicians from Virginia Women centenarians 20th-century American educators 20th-century American women educators 21st-century African-American people 21st-century African-American women
https://en.wikipedia.org/wiki/Girl%20Geek%20Dinners
Girl Geek Dinners
Girl Geek Dinners is an informal organisation that promotes women in the information technology industry, with 64 established chapters in 23 countries. The organisation was founded in London, United Kingdom, by Sarah Lamb (née Blow), who realised how under-represented women were at information technology events after attending a Geek Dinner in 2005. Chapters organise local events featuring both female and male speakers with mostly female attendees. It is like a Geek Dinner but with one significant rule: men can only attend as invited guests of women, ensuring that women will never be outnumbered by men at events. A typical event is an informal dinner, followed by one or more presentations by featured speakers.

Chapters
Girl Geeks Scotland (GGD)
Bay Area Girl Geek Dinners (BAGGD)
Girl Geek Dinners Sydney
Reading Girl Geek Dinners (RGGD)
Girl Geekdinners Berlin (ggdb)
Manchester Girl Geeks (mgg)
Girl Geek Dinners Milano (GGD MI)
Girl Geek Dinners Nordest (GGDNE - Italy)
Bath Girl Geek Dinners (UK)
Bristol Girl Geek Dinners (UK)
Girl Geek Dinners Oslo (Norway)
Girl Geek Dinners Bergen (Norway)
Girl Geek Dinners Kristiansand (Norway)
Girl Geek Dinners (Boulder/Denver)
Boston Girl Geek Dinners
Zurich Girl Geek Dinners
Girl Geek Dinners Waterloo Region
Girl Geek Dinner NL (Amsterdam)
Austin Girl Geek Dinners
Girl Geek Dinners Cagliari
Belgian Girl Geeks
Seattle Geek Girl Dinners

See also
Women in computing

References

External links
Girl Geek Dinners
Short documentary about Girl Geekdinners

Information technology organisations based in the United Kingdom International women's organizations Nerd culture Organizations established in 2005 Organizations for women in science and technology Women's organisations based in the United Kingdom
https://en.wikipedia.org/wiki/Enemy%20Contact
Enemy Contact
Enemy Contact (stylized as Tom Clancy Enemy Contact, Tom Clancy: Enemy Contact, or Tom Clancy's Enemy Contact in the United Kingdom) is a techno-thriller novel written by Mike Maden and released on June 11, 2019. It is his third book in the Jack Ryan Jr. series, which is part of the overall Tom Clancy universe. The novel depicts a breach in the U.S. intelligence community that is connected to Ryan's mission in Poland. It debuted at number three on the New York Times bestseller list.

Plot summary
A mysterious hacker known as CHIBI has been offering classified information to Iranian and Russian intelligence, who then use it to stage a series of attacks on special operations units stationed in Argentina and Syria, respectively. Additionally, a German police officer working undercover in a drug case is killed in a mugging by Iron Syndicate operatives using information from the hacker. The demonstrated attacks then enable the three parties to participate in a secret silent auction to be held by CHIBI in London. Meanwhile, Lawrence Fung is a hacker working for the red team of the recently built IC Cloud, a cloud database storing classified information gathered by the U.S. intelligence community. His job is to exploit any vulnerabilities in the database and patch them. However, he is revealed to be working with CHIBI for ideological reasons and has been helping him gather actionable intelligence for the recent attacks.

U.S. senator Deborah Dixon withdraws her support for a foreign policy bill, which would have allowed the construction of a U.S. military base in Poland to counter Russian aggression. President Jack Ryan is convinced that the senator changed her mind because of her connections with the Chinese, who stand to benefit from the Belt and Road Initiative, an extensive trade route plan spanning Eurasia that has been bankrolling several interested Western businessmen, including Dixon's husband, billionaire Aaron Gage. He discreetly orders an investigation into Dixon's finances. Hendley Associates financial analyst Jack Ryan Jr. is tasked with the Dixon investigation. He finds out that Dixon's stepson, businessman Christopher Gage, does business with Chinese interests in Poland. He travels there, accompanied by Polish intelligence officer Liliana Pilecki, to look for illegal activities in Gage's businesses under the guise of researching investment opportunities in the country. Initially finding nothing incriminating, they later investigate a Gage-owned warehouse in the Gdańsk shipyards. They are then abducted and tortured for information by Mathieu Cluzet, a French drug smuggler working for the Iron Syndicate (an organization Ryan had tangled with in the previous novel, Line of Sight). Afterwards, Cluzet throws Ryan and Pilecki off his ship in the middle of the Baltic Sea. While Jack is later rescued, his partner drowns.

Meanwhile, in Angola, a local rebel force attacks an offshore oil rig under construction by the Chinese. Pressed by his superiors for retribution, Chinese intelligence officer Chen Xing turns to CHIBI for information about the rebel force's whereabouts. The hacker then orders Fung to pinpoint the location of the rebels. Using CHIBI's information, the PLA stages a surgical attack on the Angolan rebels, enabling Xing and Chinese intelligence to enter the auction. The attack on the rebels attracts the attention of Director of National Intelligence Mary Pat Foley.
She later finds out that the recent attacks in Argentina, Germany, Syria, and now Angola may have been the result of an intelligence breach within the IC Cloud. The cloud's head of security, Amanda Watson, shares her suspicions about her colleague Fung, which leads Foley to track him down. However, CHIBI finds out about the manhunt and orders Fung's death, disguised as a suicide. Foley's investigation into the breach ends prematurely.

After recuperating in Virginia, Jack travels to Peru to honor his recently deceased friend Cory Chase's wishes. Still distraught over Liliana's death, he later discovers an illegal mining operation led by Cluzet's unnamed brother. Along with former U.S. Army Ranger Rick Sands, whom he had met in the area, Jack dispatches the brother and his men and then rescues the miners taken against their will. Later reunited with his Campus colleagues, Jack tracks down the leader of the Iron Syndicate, known as the Czech, in the Czech Republic. The Czech instead reveals CHIBI's plan to auction an algorithmic key that would unlock the entire IC Cloud at a tech conference in London. The Campus, as well as Foley, scramble to London to track down CHIBI, who is revealed to be Watson. While the auction is successful, Foley devises a plan to penetrate the Chinese, Iranian, and Russian intelligence agencies; Watson is later sentenced to prison.

President Ryan informs Dixon of her stepson's dealings with the Chinese: smuggling drugs around Europe and laundering the dirty money through the senator's charities. Having been informed of her brief investigation into Hendley Associates, Ryan blackmails the senator into passing an anticorruption bill instead of being disgraced out of politics. Meanwhile, Foley works on dismantling the Iron Syndicate, now deemed a national security threat. Gage is found dead in the Gdańsk shipyards, while the Czech is killed by a sniper working for Polish intelligence. Jack tracks down Cluzet in Libya and kills him in revenge for Liliana's death.

Characters
The White House
Jack Ryan: President of the United States
Scott Adler: Secretary of state
Mary Pat Foley: Director of national intelligence
Robert Burgess: Secretary of defense
Arnold van Damm: President Ryan's chief of staff
The Campus
Gerry Hendley: Director of The Campus and Hendley Associates
John Clark: Director of operations
Domingo "Ding" Chavez: Senior operations officer
Jack Ryan, Jr.: Operations officer and senior analyst for Hendley Associates
Gavin Biery: Director of information technology
Lisanne Robertson: Director of transportation
Cloudserve, Inc.
Elias Dahm: CEO
Amanda Watson: Senior design engineer and head of security for the Intelligence Community Cloud
Lawrence Fung: Watson's number two and supervisor of the Red Team IC Cloud hacking group
Other characters
Liliana Pilecki: Agent with Poland's Agencja Bezpieczeństwa Wewnętrznego (ABW)
Senator Deborah Dixon (R): Chair, Senate Foreign Relations Committee
Aaron Gage: Husband of Deborah Dixon and CEO and founder of Gage Capital Partners
Christopher Gage: Stepson of Deborah Dixon and CEO of Gage Group International
Rick Sands: Former member, 75th Ranger Regiment

Development
Maden researched for the book by visiting Poland.

Reception
Commercial
Enemy Contact debuted at number three in both the Combined Print and E-Book Fiction and Hardcover Fiction categories of the New York Times bestseller list for the week of June 29, 2019.
It also debuted at number three on the USA Today Best Selling Books list for the week of June 20, 2019.

Critical
The book received positive reviews. Kirkus Reviews praised it as "another well-crafted and enjoyable escape from reality", saying that "Maden's style meshes perfectly with the classic Clancy yarns, with global action, struggle, suffering, and formidable foes who get what they deserve." New York Journal of Books stated: "Maden has definitely made this character and his supporting cast his own in this excellent third trip through the Tom Clancy universe." On the other hand, Publishers Weekly gave the book a mixed review, stating that it is "competent but ponderous".

References

2019 American novels American thriller novels Techno-thriller novels Ryanverse Novels set in Europe Novels set in South America Novels set in Angola Novels set in the Czech Republic Novels set in Peru Novels set in Poland Books about the Federal Security Service G. P. Putnam's Sons books
https://en.wikipedia.org/wiki/Total%20War%20Saga%3A%20Troy
Total War Saga: Troy
Total War Saga: Troy is a 2020 turn-based strategy video game developed by Creative Assembly Sofia and published by Sega. The game was released for Windows on 13 August 2020 as the second installment in the Total War Saga subseries, succeeding Thrones of Britannia (2018). The game received generally positive reviews upon release.

Gameplay
Like its predecessors, Total War Saga: Troy is a turn-based strategy game with real-time tactics elements. The game is set in the Bronze Age, during the Trojan War, though its scope also covers the surrounding Aegean civilizations. Real-time battles take place in large sandboxes, and players can command infantry, hero units, and mythical beasts as they battle opposing forces. There are a total of eight heroes representing the two factions (the Trojans and the Achaeans), and each hero has two unique abilities that can be used during battles. As players progress in the game, they can also build their relationships with the Greek gods; if their approval ratings with the gods are high enough, they gain gameplay benefits. Outside of battles, players also need to collect sufficient resources, such as wood, bronze, and food, in order to keep their armies running. Agents return in Troy, and players can send priests and spies to infiltrate hostile cities. Multiplayer, which supports up to eight players, was introduced on 26 November 2020.

Development
The game is the second installment in the Total War Saga subseries, following 2018's Total War Saga: Thrones of Britannia. Troy, similar to its predecessor, was designed to be a shorter but more focused game, and its scope was limited to a particular time period in history instead of being era-spanning. The lead developer is Creative Assembly Sofia, based in Bulgaria, which took approximately two years and nine months to develop the game. According to Maya Georgieva, the game's director, the Bronze Age was a very difficult setting to work on because of the lack of detailed historical records and sources. As a result, the team resorted to using the Iliad, an ancient Greek epic poem, to fill in the historical details that were missing. Despite the mythical elements, the team tried to make the game as historically grounded as possible. Georgieva called this approach "truth behind the myth", in which the team came up with the "most probable explanations for the myths and legends to complete the history". For instance, the Trojan horse might be an earthquake, a siege tower, or a massive wooden structure instead of a huge wooden horse.

Total War Saga: Troy was announced by publisher Sega on 19 September 2019. It was released on 13 August 2020 for Microsoft Windows via the Epic Games Store, with the exclusivity set to last for one year. Creative Assembly later confirmed that this was a one-off deal with Epic Games and added that it did not plan to make future Total War games exclusive to a storefront. As Creative Assembly hoped to expand the audience of the franchise, the game was made free to claim for the first 24 hours after its release; 7.5 million people claimed the game for free on that day, exceeding the developer's expectations. Creative Assembly also said it would work with Epic to incorporate mod support after the game's release. A downloadable content pack based on the Amazons was released on 24 September 2020.

Reception
The game received generally positive reviews upon release, according to review aggregator Metacritic.
Its free availability on the Epic Games Store during its first 24 hours resulted in 7.5 million claimed copies.

Explanatory notes

References

External links

2020 video games Ancient Greece in fiction Creative Assembly games Cultural depictions of the Trojan War Fiction set in the 12th century BC Fiction set in the 13th century BC Grand strategy video games Historical simulation games Multiplayer and single-player video games Sega video games Strategy video games Total War (video game series) Video games based on Greek mythology Video games based on works by Homer Video games developed in Bulgaria Video games set in Greece Windows games Works based on the Iliad
https://en.wikipedia.org/wiki/Politics%20of%20Maharashtra
Politics of Maharashtra
Maharashtra is a state in the western region of India and is India's third-largest state by area. It has over 112 million inhabitants, and its capital, Mumbai, has a population of approximately 18 million. Nagpur is Maharashtra's second, or winter, capital. Government in the state is organized on the parliamentary system. Power is devolved to large city councils, district councils (zilla parishads), sub-district (taluka) councils, and village councils (gram panchayats). The politics of the state are dominated by the numerically strong Maratha–Kunbi community. There are national and regional parties in the state, serving different demographics, such as those based on religion, caste, and urban or rural residence.

Government structure
State Government
The government of Maharashtra is conducted within a framework of parliamentary government, with a bicameral legislature consisting of the Maharashtra Legislative Assembly and the Maharashtra Legislative Council. The Legislative Assembly (Vidhan Sabha) is the lower chamber and consists of 288 members, who are elected for five-year terms. There are 25 and 29 seats reserved for the Scheduled Castes and the Scheduled Tribes, respectively. The Legislative Council (Vidhan Parishad) is the upper chamber and is a permanent body of 78 members. The government of Maharashtra is headed by the Chief Minister, who is chosen by the party or alliance with a majority of members in the Legislative Assembly. The Chief Minister, along with the council of ministers, drives the legislative agenda and exercises most of the executive powers. However, the constitutional and formal head of the state is the Governor, who is appointed for a five-year term by the President of India on the advice of the Union government.

Maharashtra in the Indian Parliament
Maharashtra elects members to both chambers of the Indian Parliament. Representatives to India's lower chamber, the Lok Sabha, are elected by universal adult suffrage, under a first-past-the-post system, to represent their respective constituencies. They hold their seats for five years or until the body is dissolved by the President on the advice of the council of ministers. Representatives to the upper chamber, the Rajya Sabha, are elected indirectly by the Vidhan Sabha members. Maharashtra elects 48 of the 543 elected members of the Lok Sabha and 19 of the 233 elected members of the Rajya Sabha.

Local government
The state has a long tradition of highly powerful planning bodies at district and local levels. Local self-governance institutions in rural areas include 34 zilla parishads (district councils), 355 taluka panchayat samitis (sub-district councils), and 27,993 gram panchayats (village councils). Urban areas in the state are governed by 27 municipal corporations, 222 municipal councils, four nagar panchayats, and seven cantonment boards. Although Maharashtra has had gram panchayats with elected members since 1961, the 73rd amendment to the Indian constitution, in 1993, put in place a statutory requirement that 33% of seats on the panchayats be reserved for women, the scheduled castes, and the scheduled tribes. In addition, 33% of sarpanch (panchayat chief) positions were also reserved for women. Although the amendment boosted the number of women leaders at the village level, there have been cases of harassment of the female members of these bodies by male members of the panchayat.
The administration in each district is headed by a District Collector, who belongs to the Indian Administrative Service and is assisted by a number of officers belonging to the Maharashtra state services. The Superintendent of Police, an officer belonging to the Indian Police Service and assisted by the officers of the Maharashtra Police Service, maintains law and order in addition to other related issues in each district. The Divisional Forest Officer, an officer belonging to the Indian Forest Service, manages the forests, environment, and wildlife of the district, assisted by the officers of the Maharashtra Forest Service and Maharashtra Forest Subordinate Service. Sectoral development in the districts is looked after by the district head of each development department, such as Public Works, Health, Education, Agriculture, and Animal Husbandry.

Political parties & alliances
Since the state's inception in 1960, as in predecessor states such as Bombay, the politics of Maharashtra has been dominated by the Indian National Congress party. Maharashtra became a bastion of Congress party stalwarts such as Yashwantrao Chavan, Vasantdada Patil, Vasantrao Naik, and Shankarrao Chavan. Sharad Pawar has been a significant personality in state and national politics for nearly forty years. During his career, he has split the Congress twice, with significant consequences for state politics. After his second parting from the Congress party in 1999, Sharad Pawar formed the Nationalist Congress Party (NCP) but joined a Congress-led coalition to form the state government after the 1999 Assembly elections. The Congress party enjoyed nearly unchallenged dominance of the state political landscape until 1995, when the coalition of Shiv Sena and the Bharatiya Janata Party (BJP) secured an overwhelming majority in the state, beginning a period of coalition governments. Shiv Sena was the larger party in the coalition. From 1999 until 2014, the NCP and the INC formed one coalition while Shiv Sena and the BJP formed another for three successive elections, all of which the INC–NCP alliance won. Prithviraj Chavan of the Congress party was the last Chief Minister of Maharashtra under the Congress–NCP alliance that ruled until 2014. For the 2014 assembly polls, the alliances between the NCP and the Congress and between the BJP and Shiv Sena broke down over seat allocations. In the election, the largest number of seats went to the BJP, with 122 seats. The BJP initially formed a minority government under Devendra Fadnavis; but in December 2014, Shiv Sena entered the government and provided a comfortable majority in the Maharashtra Vidhan Sabha to the Fadnavis-led government. In the 2019 Lok Sabha elections, the BJP and Shiv Sena fought under the NDA banner, whereas the Congress and the NCP were part of the UPA. The two alliances remained intact for the legislative assembly elections in October 2019. The BJP and Shiv Sena together gained the majority of seats in the assembly but could not form a government due to squabbles between the two parties. The BJP–Shiv Sena alliance came to an end in early November 2019, with Shiv Sena subsequently forming a new alliance with its longtime rivals, the NCP and the Congress, to form the new state government on 28 November 2019.
Other parties in the state include the All India Forward Bloc, the Maharashtra Navnirman Sena, the Communist Party of India, the Peasants and Workers Party, the All India Majlis-e Ittihad al-Muslimin, the Bahujan Vikas Aghadi, the Samajwadi Party, various factions of the Dalit-dominated Republican Party of India, the Bahujan Samaj Party, and the Socialist Party.

Dominant groups in Maharashtra politics
After the state of Maharashtra was formed on 1 May 1960, the INC was long without a major challenger. The party also enjoyed overwhelming support from the state's influential sugar co-operatives, as well as thousands of other cooperatives, such as rural agricultural cooperatives involved in the marketing of dairy and vegetable produce, credit unions, etc. For the better part of the late-colonial and early post-independence periods in Bombay state and its successor, Maharashtra state, the politics of the state has been dominated by the mainly rural Maratha–Kunbi caste, which accounts for 31% of the population of Maharashtra. They dominated the cooperative institutions and, with the resultant economic power, controlled politics from the village level to the state Assembly and the Lok Sabha. In 2016, of the 366 members of the two houses combined (288 in the Legislative Assembly and 78 in the Legislative Council), 169 (46%) were Marathas. Major past political figures of the Congress party from Maharashtra, such as Keshavrao Jedhe, Yashwantrao Chavan, Vasantdada Patil, Shankarrao Chavan, Vilasrao Deshmukh, and Sharad Pawar, have been from this group. Of the 19 Chief Ministers so far, as many as 10 have been Maratha. Since the 1980s, politicians from this group have also been active in setting up private educational institutions. Following disputes between Sharad Pawar and the INC president Sonia Gandhi, the state's political status quo was disturbed when Pawar defected from the INC, which was perceived as the vehicle of the Nehru–Gandhi dynasty, to form the Nationalist Congress Party. This offshoot of the Congress party is nevertheless dominated by the Maratha community.

Shiv Sena was formed in the 1960s by Balasaheb Thackeray, a cartoonist and journalist, to advocate and agitate for the interests of Marathi people in Mumbai. Over the following decades, Shiv Sena slowly expanded and took over the Mumbai Municipal Corporation in the 1980s. Although the original base of the party was among lower-middle and working-class Marathi people in Mumbai and the surrounding suburbs, the leadership of the party came from educated groups. However, since the 1990s there has been a shift in leadership, with many mid-level leaders creating personal fiefdoms for themselves and their families with the use of strong-arm tactics. Hansen has termed this the "dada-ization" of the party. Judging by the number of Marathas elected on the Shiv Sena ticket in the last few elections, the party is emerging as another Maratha party.

The BJP is closely related to the RSS and is part of the Sangh Parivar. The party originally derived its support from the urban upper castes, such as Brahmins, and from non-Maharashtrians. In recent years the party has been able to penetrate the Maratha community by fielding Maratha candidates in elections. The Shiv Sena–BJP coalition came to power at the state level in 1995, which was a blow to the Congress party. In 2006, a split within Shiv Sena emerged when Bal Thackeray anointed his son Uddhav Thackeray as his successor over his nephew Raj Thackeray.
Raj Thackeray then left the party and formed a new party called the Maharashtra Navnirman Sena (MNS). Raj Thackeray, like his uncle, has also tried to win support from the Marathi community by embracing anti-immigrant sentiment in Maharashtra, for instance against Biharis.

After the Maratha–Kunbi, the Mahars are numerically the second-largest community. Most of the Mahars are followers of Buddhism and fall under the scheduled caste (SC) group. Since the time of B. R. Ambedkar, the Mahar community has supported various factions of the Republican Party of India (RPI). There are 25 seats reserved for the SC. Parties such as the NCP, the BJP, and the Congress field candidates from other Hindu SC groups, such as the Mang and Chambhar, for the reserved seats in order to thwart the candidates of the RPI.

2014 Assembly Election
The 2014 assembly election followed a landslide national victory of the BJP in the 2014 Lok Sabha election, which brought Narendra Modi to power as prime minister. All major parties in the state (the BJP, Shiv Sena, the INC, and the NCP) contested the elections on their own, leading to a complex and much-contested election. The BJP put together an alliance of upper castes, the Other Backward Classes (OBC), and to some extent Dalits to fight the Maratha-led Congress and NCP. The results were significant in that the BJP received the highest number of seats, despite being historically smaller than Shiv Sena in the state. Although the BJP still required Shiv Sena's support to form a majority, it progressed from being a minor party in state politics to the party of the chief minister, Devendra Fadnavis, who held that position until November 2019.

2019 Lok Sabha elections
In April 2019, voting for the 48 Lok Sabha seats from Maharashtra was held in four phases. Despite their differences, the BJP and Shiv Sena once again contested the elections together under the National Democratic Alliance (NDA) banner. Similarly, the Congress and the NCP had their own seat-sharing arrangement. The breakaway party of Raj Thackeray, the Maharashtra Navnirman Sena, did not contest any seats and instead urged its supporters to vote for the NCP–Congress alliance, with Thackeray campaigning for candidates belonging to these parties. The results of the election on 23 May 2019 were another landslide victory for the NDA, with the BJP and Shiv Sena winning 23 and 18 seats, respectively, out of the state's 48 Lok Sabha seats. The Congress party won only one seat in the state, whereas the NCP won five seats from its stronghold of western Maharashtra.

2019 Vidhan Sabha elections
The BJP–Shiv Sena and NCP–Congress alliances remained intact for the Vidhan Sabha elections in October 2019. The BJP and Shiv Sena together gained the majority of seats in the assembly but could not form a government due to squabbles between the two parties. The BJP, with 105 seats, was far short of the 145 seats required for a majority and declined to form a minority government. At the same time, Shiv Sena started talks with the NCP and the Congress to form a government. On 23 November 2019, the BJP formed a government with support from the NCP, with Ajit Pawar as Deputy Chief Minister. This government collapsed three days later, with Chief Minister Devendra Fadnavis and Ajit Pawar resigning their respective positions. On 28 November 2019, the governor swore in Uddhav Thackeray, the Shiv Sena chief, as the new chief minister of Maharashtra. Thackeray's governing coalition includes Shiv Sena, the NCP, the INC, and a number of independent members of the legislative assembly.
See also 2009 Maharashtra Legislative Assembly election 2004 Maharashtra Legislative Assembly election List of Chief Ministers of Maharashtra Panchayat Samiti Gram panchayat References Further reading Marathi people
27573307
https://en.wikipedia.org/wiki/Sports%20in%20New%20York%27s%20Capital%20District
Sports in New York's Capital District
Sports in New York's Capital District are very popular, and there is a rich history of professional teams and college athletics. The "major league" sport of the region is thoroughbred horse racing at the Saratoga Race Course, which has been held annually since 1863 with only a few breaks. The Saratoga Race Course is the oldest racetrack in the US, and possibly the oldest sporting venue of any kind in the country. The Saratoga meet runs for 40 racing days beginning in July and ending on Labor Day, and includes fifteen Grade I stakes races. The Travers Stakes, America's "Midsummer Derby", is the highlight of the meet; winners include Man o' War, Whirlaway, Native Dancer, Sword Dancer, Alydar, and Birdstone. According to legend, the game of baseball was invented by Abner Doubleday of Ballston Spa. The Troy Trojans were a Major League Baseball team in the National League for four seasons from 1879 to 1882. In 1883 the New York Gothams, later the New York and San Francisco Giants, took the Trojans' place in the National League. Nearly half of the original Gotham players had been members of the Trojans. Many other Major League ballplayers have had their start at various levels in the Capital District, including former Tri-City ValleyCats Jose Altuve, Dallas Keuchel, George Springer, Ben Zobrist, and Hunter Pence. Others include Derek Jeter and Mariano Rivera of the New York Yankees, who once played for the Albany-Colonie Yankees. Phil Jackson, later an NBA head coach of the Los Angeles Lakers, won his first championship ring when he guided the Albany Patroons to the 1984 CBA championship. Three years later, the Patroons completed a 50–6 regular season, including winning all 28 of their home games; the Patroons' head coach at the time was George Karl, who went on to coach the Denver Nuggets. Future NBA stars Mario Elie and Vincent Askew were part of that season's squad. A third NBA head coach with roots in the Capital District is Pat Riley, most famous as the coach of the Los Angeles Lakers but also of the New York Knicks and Miami Heat. Riley played for Linton High School in Schenectady, where he was also a football star. He also played on the Schenectady Little League team that won the 1954 Little League Baseball World Series. Mike Tyson received his early training in the Capital District; his first professional fight was in Albany in 1985, and his first televised fight was in Troy in 1986. He fought professionally four times in Albany and twice each in Troy and Glens Falls between 1985 and 1986. Since 1973, the Albany Knickerbockers Rugby Football Club (AKRFC) has been promoting rugby and now includes a DII men's team, a DI women's team, and youth rugby all across the Capital Region. They play on Dick Green Field at 100 Frisbie Ave, Albany, named after the late Dick Green, who suffered a heart attack while practicing at Lincoln Park. Since 2002, the Tri-City ValleyCats have won three New York–Penn League titles and captured seven Stedler Division titles. Since 1988, the Siena College men's basketball team (the Siena Saints) has appeared in six NCAA Tournaments (1989, 1999, 2002, 2008, 2009, and 2010). Since 2005, the University at Albany Great Danes men's basketball team has appeared in five NCAA Tournaments (2006, 2007, 2013, 2014, and 2015). The University at Albany Great Danes women's basketball team has made six consecutive NCAA Tournaments (2012, 2013, 2014, 2015, 2016, and 2017). Roller derby leagues in the area include Albany's Albany All-Stars Roller Derby, Troy's Hellions of Troy Roller Derby, 
and Capital District Men's Roller Derby. Current teams Adirondack Thunder, an ECHL hockey team in Glens Falls that began play in 2015. Albany Empire, an indoor football team in the National Arena League. Albany FireWolves, a box lacrosse team in the National Lacrosse League. Albany Metro Mallers, a semiprofessional football team that has regularly been in the national semipro playoffs and won the national semipro title in 2008, 2013, and 2016. Albany Patroons, The Basketball League Schenectady Legends, International Basketball League Tri-City ValleyCats, Joseph L. Bruno Stadium in Troy; formerly a member of Minor League Baseball's Single-A short season New York–Penn League, joined the independent Frontier League in 2021. NCAA college athletic programs University at Albany Great Danes: currently play at the Division I level in all sports, though for most of the school's history it was a Division III program, with a brief stay at the Division II level in the late 1990s. The football team is a member of the Division I FCS Colonial Athletic Association, and the women's golf team plays in the Metro Atlantic Athletic Conference (MAAC); all other sports teams play as members of the America East Conference. In 2006, UAlbany became the first SUNY-affiliated school to send a team to the NCAA Division I Men's Basketball Tournament. The men's lacrosse team has also made multiple appearances in its sport's NCAA Division I Championship Tournament, and was the first University at Albany team to do so. The men's track & field team has produced All-American athletes such as Gered Burns, Joe Greene, Marc Pallozzi, and Luke Schoen. UAlbany has hosted the New York Giants summer training camp since 1996. Siena College's Saints basketball team plays in the Times Union Center in downtown Albany even though the college is located in the Albany suburb of Loudonville. The college's teams play at the Division I level in all sports, although it discontinued its Division I-AA football program in 2003 (three years before I-AA adopted its current designation of FCS). Siena is a member of the MAAC in all sports that the school sponsors. Union College Dutchmen: Union participates in the National Collegiate Athletic Association (NCAA), the Liberty League, and the ECAC. Men's and women's ice hockey compete at the NCAA Division I level; all other sports compete at the NCAA Division III level. Union won the NCAA Division I Men's Hockey National Championship in 2014. Rensselaer Polytechnic Institute (RPI) Engineers in Troy. RPI currently sponsors 23 sports, 21 of which compete at the NCAA Division III level in the Liberty League. Men's and women's ice hockey compete at the Division I level in ECAC Hockey. RPI won the NCAA Division I Men's Hockey National Championship in 1954 and 1985. The College of Saint Rose: The St. Rose Golden Knights play at the Division II level. St. Rose plays in the Northeast-10 Conference. Skidmore College Thoroughbreds: Skidmore in Saratoga Springs fields 19 varsity teams in NCAA Division III. Skidmore is a member of the Liberty League. Albany Dutchmen: Formerly the Bennington Bombers of Bennington, Vermont, the Dutchmen are a team in the Perfect Game Collegiate Baseball League (formerly the New York Collegiate Baseball League), an amateur league of collegiate players who remain unpaid in order to retain NCAA eligibility; whereas college baseball uses aluminum bats, this league uses wooden bats. They played at Bleecker Stadium until 2010, when they moved to the new Bellizzi Stadium. 
The Dutchman's Shoes trophy is awarded to the winner of the annual college football game between the RPI Engineers and the Union College Dutchmen, the oldest football rivalry in New York. Other sports Head of the Fish is a rowing race held on the last weekend of October each year on Fish Creek in Saratoga County. The 2013 competition featured over 2,000 entries representing 160 clubs. Freihofer's Run for Women is an annual five-kilometer road running competition for women usually held in late May or early June in Albany. The race holds IAAF Silver Label Road Race status and had 5,000 participants in 2011. The Saratoga Polo Association, established in 1898, competes against the best polo teams in the world on Whitney Field in Greenfield. The season roughly corresponds to that of the Saratoga Race Course. The Whitney Cup is awarded annually. Spectators are encouraged to tailgate. Defunct professional teams Adirondack Flames (AHL affiliate of the Calgary Flames) moved from Glens Falls in 2015 after one season. Adirondack Phantoms were an American Hockey League (AHL) affiliate of the Philadelphia Flyers. The Phantoms played in Glens Falls from 2009 to 2014 before moving to Allentown, Pennsylvania. Adirondack Red Wings (American Hockey League) (1979–1999); a planned move to Ohio for 2000 never materialized, and the franchise folded. Albany Alleycats were a professional soccer team that competed in the United Soccer Leagues from 1995 to 1999. Albany Attack entered the National Lacrosse League as an expansion team prior to the 1999–2000 season. The Attack played four years in Albany, by far the most successful being the 2001–2002 season, when they made the league championship game. However, due to attendance problems, the Attack moved to San Jose, California, after the following season and became the San Jose Stealth. Albany Capitals, an American Soccer League and later American Professional Soccer League team, played at Bleecker Stadium from 1989 to 1991. Albany Choppers (International Hockey League, 1990–91 season, folded February 1991) Albany-Colonie Diamond Dogs played at Heritage Park in nearby Colonie beginning in 1995. In 1999, they captured the Northern League title but folded after the 2002 season due to financial difficulties and competition from the newly formed Tri-City ValleyCats. Albany-Colonie Yankees (Eastern League baseball, AA affiliate of the New York Yankees from 1985 to 1994, playing host to several key players of the parent club's eventual late-1990s dominance.) Albany A's/Albany-Colonie A's (Eastern League affiliate of the Oakland Athletics in 1983 and 1984, superseded by the Albany-Colonie Yankees.) Albany Devils were the AHL affiliate of the New Jersey Devils from 2010 to 2017. Albany Eagles, an American Soccer League team formerly known as the New York Eagles, played at Bleecker Stadium from 1979 to 1981. Albany Empire were an Arena Football League team that played at the Times Union Center from 2018 to 2019. Albany Firebirds were an Arena Football League team in the Albany area that won the ArenaBowl in 1999, but moved to Indianapolis, Indiana, after the 2000 season. The Firebirds folded in late 2004. In 2008, the af2's Albany Conquest were rebranded as the Albany Firebirds. Albany Patroons/Capital Region Pontiacs (the original version, from 1982 to 1993, was a dominant team in the league and a starting point for notable NBA coaches Phil Jackson and George Karl. The franchise moved to Hartford, Connecticut, then folded before being revived in 2005, then folding again in 2009.) 
Albany River Rats (American Hockey League) (1990–2010) Sold to a Charlotte businessman, the team moved to Charlotte, North Carolina, and began play in the 2010–11 season as the Charlotte Checkers. Albany Senators (Eastern League baseball; a minor-league affiliate of the Boston Red Sox for a time in the 1950s.) Capital District Islanders (American Hockey League, forerunner to the Albany River Rats when affiliated with the New York Islanders.) New York Buzz, World Team Tennis. New York Kick (American Indoor Soccer Association) split its time between Albany and Glens Falls, New York, so the team chose to be named after the state. Sports facilities The Times Union Center is an indoor arena in downtown Albany that can seat from 6,000 to 17,500 people, with a maximum seating capacity of 15,500 for sporting events. The Joseph L. Bruno Stadium, colloquially called "The Joe," is a baseball stadium located on the campus of Hudson Valley Community College in Troy, New York. It is the home field of the Tri-City ValleyCats and of the Hudson Valley "Vikings". The stadium has 4,500 seats and 10 luxury suites. The Christian Plumeri Sports Complex includes Bellizzi Stadium, a ballpark that is home to the "Golden Knights" of The College of Saint Rose and the Albany Dutchmen of the Perfect Game Collegiate Baseball League. The Houston Field House is a multi-purpose facility on the campus of the Rensselaer Polytechnic Institute in Troy. Home ice for the ECAC RPI "Engineers" hockey team, the Field House seats 4,780 for hockey games, the largest capacity in the ECAC. Glens Falls Civic Center is a 4,794-seat multi-purpose arena, located in downtown Glens Falls, New York, that as of 2013 served as the home of the Adirondack Phantoms of the American Hockey League. The Achilles Rink, now the Frank L. Messa Rink at Achilles Center, is a 2,225-seat multi-purpose arena in Schenectady, New York. It is home to the Union College Dutchmen and Dutchwomen ECAC ice hockey teams. Bob Ford Field is an 8,500-seat football stadium on the uptown campus of the University at Albany. SEFCU Arena is a 4,538-seat multi-purpose arena on the uptown campus of the University at Albany which is home to the Albany Great Danes basketball teams and other sporting events. Bleecker Stadium, a 6,500-seat multi-purpose stadium in west Albany, has hosted a variety of teams in various sports, including a match between an American rugby team and the South African Springboks. Until 2012, Bleecker served as home field for the Albany Metro Mallers. Washington Avenue Armory, officially the Washington Avenue Armory Sports and Convention Arena, is a 3,600-seat multi-purpose arena that has been the home of several college and professional basketball teams. From 1982 to 1990, from 2005 to 2009, and from 2018 to the present, the Armory has hosted the Albany Patroons basketball team. Heritage Park was a 5,500-seat baseball stadium in Colonie that served as home field for the A's, the Yankees, and the Diamond Dogs. It was demolished in 2009. Albany-Saratoga Speedway, a dirt track for auto racing in Malta, New York. Sports figures Many sports figures have connections to the Capital District: Professional golfer Dottie Pepper was born in Saratoga Springs, where she still resides. Abner Doubleday, reputedly the inventor of baseball, was born in Ballston Spa. Jimmer Fredette, a former NBA player now playing in China, was born and raised in Glens Falls. Tim Stauffer, a pitcher for the San Diego Padres, grew up and attended school in Saratoga Springs. 
Basketball player and coach Pat Riley grew up in Schenectady. Jason Morris, a four-time Olympian in judo, resides in Glenville, where he runs a judo center. Scott Cherry, men's basketball coach at High Point University, is a native of Ballston Spa and a graduate of Saratoga Central Catholic High School. Hockey coach Ned Harkness coached RPI from 1950 to 1963, and Union College from 1975 to 1978. Hockey forward Adam Oates played for the RPI Engineers from 1982 to 1985. David Pietrusza, a baseball historian, was born and raised in Amsterdam, New York, and later resided in Scotia, New York. Bob Ford served as football coach of the Albany Great Danes from 1973 until his retirement in 2013. In 1999 he guided the Danes in their transition from Division III to Division I. Albany's current football stadium, inaugurated in 2013, is named for him. Cornelius Vanderbilt Whitney, a former part-time resident of Saratoga Springs, was a three-time winner of the U.S. Open polo title. See also Athletics in Upstate New York Sports in New York Sports in Syracuse References External links Times-Union Center official site Tri-City ValleyCats - Joseph L. Bruno Stadium Bleecker Stadium Albany Dutchmen official site RPI Houston Field House official site Capital District Men's Roller Derby official site Albany Knickerbockers Rugby Football Club official site
39533183
https://en.wikipedia.org/wiki/Emmabunt%C3%BCs
Emmabuntüs
Emmabuntüs is a Linux distribution derived from Ubuntu/Debian and designed to facilitate the refurbishing of computers donated to humanitarian organizations like the Emmaüs Communities. The name Emmabuntüs is a portmanteau of Emmaüs and Ubuntu. Features This Linux distribution can be installed, in its entirety, without an Internet connection, as all of the required packages are included within the disk image. The disk image includes packages for multiple languages as well as optional non-free codecs that the user can choose whether or not to install. One gigabyte of RAM is required for the distribution. An installation script automatically performs some installation steps (the user name and password are predefined). The script allows the user to choose whether or not to install non-free software and whether to uninstall unused languages to reduce the volume of updates. Emmabuntüs includes browser plug-ins for data privacy. Three docks, tailored to the type of user (children, beginners and "all"), are available to simplify access to the software. Desktop environment The desktop environment is Xfce with Cairo-Dock. LXDE is also included and can be optionally installed. Applications Multiple applications that perform the same task are installed in order to provide a choice for each user of the system. Here are some examples: Firefox web browser with some plug-ins and extensions: Flash Player, uBlock Origin, Disconnect, HTTPS Everywhere E-mail readers: Mozilla Thunderbird Instant messaging: Pidgin, Skype, Jitsi Transfer tools: FileZilla, Transmission Office: AbiWord, Gnumeric, HomeBank, LibreOffice, LibreOffice for schools, Kiwix, Calibre, Scribus Audio: Audacious Media Player, Audacity, Clementine, PulseAudio, Asunder Video: Kaffeine, VLC media player, guvcview, Kdenlive, HandBrake Photo: Nomacs, Picasa, GIMP, Inkscape Burning: Xfburn Games: PlayOnLinux, SuperTux, TuxGuitar Genealogy: Ancestris Education: GCompris, Stellarium, TuxPaint, TuxMath, Scratch Utilities: GParted, TeamViewer, Wine, CUPS Releases See also List of Linux distributions § Ubuntu-based References External links Emmabuntüs – A Distro Tailor-made For Refurbished Computers Linux Voice 2 : Linux for humanitarians Linux Format 216 : A Distro for All Seasons Full Circle 128 : Review Emmabuntüs DE2 - Stretch 1.00 LinuxInsider : Emmabuntüs Is a Hidden Linux Gem Debian Debian-based_distributions Ubuntu derivatives X86-64_Linux_distributions Operating system distributions bootable from read-only media Linux distributions
40978093
https://en.wikipedia.org/wiki/XCOM%3A%20Enemy%20Within
XCOM: Enemy Within
XCOM: Enemy Within is an expansion pack for the turn-based tactical video game XCOM: Enemy Unknown. The expansion pack primarily adds new gameplay elements to the base game, as well as introducing new themes of transhumanism via aggressive gene therapy. XCOM: Enemy Within was released for Microsoft Windows, PlayStation 3, and Xbox 360 in November 2013 and received generally favorable reviews from critics. In June 2014, Feral released both XCOM: Enemy Within and its base game XCOM: Enemy Unknown for Linux. The game also came to the iOS App Store and Google Play Store a year after the initial release, on November 12, 2014. On March 22, 2016, the game was released on the PlayStation Store for PlayStation Vita. The Windows, OS X and Linux editions require XCOM: Enemy Unknown to play; Enemy Within was released for Xbox 360 and PlayStation 3 as part of the Commander Edition bundle with Enemy Unknown. The iOS and Android versions were released as stand-alone apps not requiring the original mobile port of Enemy Unknown to play. The PlayStation Vita version is only available bundled with Enemy Unknown under the title XCOM: Enemy Unknown Plus. Gameplay The gameplay and plot of XCOM: Enemy Within largely remain the same as in XCOM: Enemy Unknown, with some additional features. The player manages the XCOM headquarters in near real time, but much of the gameplay consists of turn-based battles across the globe, against the invading aliens and against the rogue human faction EXALT. Similar to the numerous expansions of the Civilization series, the pack retains the core storyline but adds a broad variety of content. New features A new resource, called "Meld", is introduced. XCOM operatives are forced to advance swiftly in order to secure Meld canisters with delay-activated self-destruct systems; some of the aliens' more sophisticated cybernetic units, such as Mechtoids, also grant Meld when killed. Meld is a suspension made up of billions of cybernetic nanomachines. These nanites are each made up of organic and mechanical components. Meld is required for some of the newly introduced XCOM transhuman technologies. Alien robotic units have gained an increase in effectiveness, and two new units are introduced: The Seeker is an airborne squid-like machine with a cloaking device, armed with a plasma blast as its primary weapon. The cloaking ability does not last indefinitely. If a Seeker gets close to a lone XCOM operative while cloaked, it can use its Strangle ability. It usually appears in pairs and will cloak when discovered. Its AI avoids direct confrontation with XCOM operatives, preferring to stay cloaked until the soldiers engage another group of aliens. At that point the Seeker strikes, sneaking up on a lone unsuspecting human and activating the Strangle ability on the target, which deals increasing amounts of damage until a soldier can shoot it off the targeted squadmate. The Mechtoid is a Sectoid fitted into a large, heavily armed suit of powered armor bearing two plasma mini cannons. The Mechtoid can shoot twice in a single round at a single target. Sectoids can Mind Merge with the Mechtoid, providing it with additional psionically shielded health. Killing the Sectoid in this state does not kill the Mechtoid, as it does when two Sectoids merge, but merely removes the shielded health. Alien autopsies and Meld allow XCOM scientists to modify operatives of existing classes to give them superhuman abilities, e.g. 
to leap several stories, to sense nearby enemies, or to rely on a backup heart. Alternatively, they can be modified into the brand-new class "MEC Trooper", which is capable of wearing Mechanized Exoskeleton Cybersuits into battle; however, these troopers are rendered unable to participate in combat without a MEC, as the cybernetic modification includes amputation of all four limbs. MECs are large bipedal combat platforms with a cyborg pilot in their upper torso; while staggeringly expensive to deploy and upgrade and unable to take cover behind objects, they are more robust, better-armed and more mobile than conventional operatives. Additionally, XCOM's more basic S.H.I.V. (Super Heavy Infantry Vehicle) combat robots receive upgrades as well. EXALT, a new enemy faction in the form of a covert paramilitary human organization, is introduced. EXALT seeks to embrace the aliens' technologies and outlast their invasion in order to rule the world afterwards. The organization undermines XCOM's war effort with a variety of underhanded tactics carried out through covert cells located across the globe: stealing funds, creating more panic in Council nations and delaying research on important technology. The player is not obligated to actively engage EXALT, and it will not seek open combat either; to root out its cells around the world, the player has to perform Intel Scans and send soldiers on Covert Missions. EXALT's main base is located in one of the countries of the Council, and with each successful covert operation, hints are given about which country it is; the player has the option of launching an assault to take the base down for good. However, performing a raid in the wrong country will cause that country to withdraw from the Council. When forced to fight XCOM head-on, EXALT deploys "men in black"-style operatives that mirror XCOM troops, with the same combat roles, squad tactics and equipment. Enemy Within re-introduces the Base Defense mission. Without forewarning or a chance for the player to select and arm a squad, XCOM headquarters falls victim to a number of acts of sabotage caused by multiple instances of mind-controlled personnel, followed by an alien assault. Failure of this mission leads to immediate defeat. Operation Progeny, originally scheduled to be released as a separate DLC, is included. It gives EXALT an implied early appearance and includes three missions over the course of which XCOM recovers four talented psionics from alien captivity, including the one responsible for the base attack. Site Recon further details the effects of the alien invasion on Earth's ecosystem. XCOM operatives investigate a fishing village in Newfoundland, discovering a Chryssalid infestation using a whale as an incubation chamber. Thanks to collated localization voice files, XCOM soldiers can now be customized to speak one of several languages. 47 new maps have been added to the single-player game's existing 80, bringing the total to well over 120 maps. Eight new multiplayer maps have also been added to the existing five, bringing the total to 13 multiplayer maps. Development and release XCOM: Enemy Within was originally announced with a scheduled release date of November 15, 2013. XCOM: Enemy Within was released in stores on November 12, 2013 in the US, and November 15, 2013 internationally. The game was released for digital distribution via Steam on November 11, 2013. Mods Long War is a partial conversion mod originally developed for XCOM: Enemy Unknown. 
After the release of Enemy Within, development of the mod switched to Enemy Within. The mod makes changes to many of the game's existing features, adds entirely new features, and brings back concepts from X-COM: UFO Defense. Changes include the ability to send a larger number of soldiers into battle; additional soldier classes, psionic abilities, weapons, and items; and an expanded technology tree. In the mod, aliens and EXALT conduct their own research and get stronger over the course of the game, a process that speeds up or slows down based on the player's success in stopping missions launched by the hostile forces. The mod was heavily praised by Enemy Unknown lead designer Jake Solomon and producer Garth DeAngelis. Reception XCOM: Enemy Within received generally favorable reviews from critics. Ben Reeves of Game Informer lauded the game, advising that "anyone who loves an intense firefight should test their mettle on Enemy Within," although also noting that "Despite Firaxis' improvements, the developer wasn't able to fix the line-of-sight issues" of the main game and that "acquiring new squad members still feels unbalanced; since you can't assign your soldiers' roles, and they only learn their specialty once they've ranked up, it's easy to end up with holes in your squad". Eurogamer's Stace Harman echoed similar sentiments, stating that despite its flaws, "Enemy Within is an improvement on an already excellent game." Destructoid's Chris Carter proffered similar praise, headlining his review with "It almost feels like a sequel." Regarding all the new content, he thought that it was "a really weird way" to approach an expansion due to its blending of old and new content, but summarized his review by saying, "If you haven't played the newest XCOM yet, now is a perfect time to do so with the Enemy Within package." Matt Lees of VideoGamer.com similarly summarized his review with, "The best game of 2012 is back, and it might be the best game of 2013." IGN's Dan Stapleton criticized the late game, stating that all the new content and unlockables make the later half of the game too easy; however, Stapleton still ultimately awarded the game a 9/10, calling it "an amazing expansion to a brilliant tactical game" that "is best enjoyed in Iron Man mode on Classic difficulty to enhance the emotional highs and lows of victory and permanent defeat." Conversely, GamesRadar's Ryan Taljonick argued that "Enemy Within's new additions don't make the experience any easier [because] the added benefit of having access to gene mods and MEC Troopers is offset by new alien types and a whole new faction of fanatical humans. If anything, saving the world is harder than ever." Although GameSpot's Leif Johnson applauded the game, he was critical of the expansion pack's pricing disparity between PC and consoles: "All of [its content] is certainly enough for PC players to fork out the $20 for the upgrade, but unfortunately, console players face the more daunting task of buying Firaxis' new creation for almost the price of a new game." 
References External links 2013 video games Alien invasions in video games Android (operating system) games Business simulation games Construction and management simulation games Firaxis Games games Interactive Achievement Award winners IOS games Linux games Multiplayer and single-player video games PlayStation 3 games PlayStation Vita games Science fiction video games Tactical role-playing video games Take-Two Interactive games Terrorism in fiction Transhumanism in video games Turn-based tactics video games Unreal Engine games Video game expansion packs Video games developed in the United States Video games set in the 2010s Windows games XCOM Xbox 360 games
40049786
https://en.wikipedia.org/wiki/UCMDB
UCMDB
UCMDB is a software product from Micro Focus that generates and maintains a Configuration Management Database of information technology items. It includes a mechanism for automated discovery of IT infrastructure components, such as computers and network devices. UCMDB is included in several HP products and supports ITIL-based configuration management and change management processes. DDMA The DDMA component (Discovery and Dependency Mapping Advanced) of UCMDB works by scanning ranges of IP addresses within pre-set probe IP ranges, using ping (ICMP echo requests) and Nmap to locate live IP addresses and open TCP ports, and by harvesting IP addresses from the ARP caches of layer 3 network devices. The resulting live IP addresses are then translated into configuration items (CIs), which serve as input for deeper levels of discovery. The pyramid model is designed to ensure that only relevant sources are queried, resulting in lower network and node load. See also ITIL ITSM CMDB External links References HP software
1930025
https://en.wikipedia.org/wiki/GRIB
GRIB
GRIB (GRIdded Binary or General Regularly-distributed Information in Binary form) is a concise data format commonly used in meteorology to store historical and forecast weather data. It is standardized by the World Meteorological Organization's Commission for Basic Systems, known under number GRIB FM 92-IX, described in WMO Manual on Codes No. 306. Currently there are three versions of GRIB. Version 0 was used to a limited extent by projects such as TOGA, and is no longer in operational use. The first edition (current sub-version is 2) is used operationally worldwide by most meteorological centers for Numerical Weather Prediction (NWP) output. A newer generation has been introduced, known as GRIB second edition, and data is slowly migrating to this format. Some second-generation GRIB data is used for derived products distributed via EUMETCast of Meteosat Second Generation. Another example is the NAM (North American Mesoscale) model. Format GRIB files are a collection of self-contained records of 2D data, and the individual records stand alone as meaningful data, with no references to other records or to an overall schema, so collections of GRIB records can be appended to each other or the records separated. Each GRIB record has two components: the part that describes the record (the header), and the binary data itself. The data in GRIB-1 are typically converted to integers using scale and offset, and then bit-packed. GRIB-2 also has the possibility of compression. GRIB History GRIB superseded the Aeronautical Data Format (ADF). The World Meteorological Organization (WMO) Commission for Basic Systems (CBS) met in 1985 to create the GRIB (GRIdded Binary) format. The Working Group on Data Management (WGDM) in February 1994, after major changes, approved revision 1 of the GRIB format. The GRIB Edition 2 format was approved in 2003 at Geneva. Problems with GRIB There is no way in GRIB to describe a collection of GRIB records. Each record is independent, with no way to reference the GRIB writer's intended schema. There is no foolproof way to combine records into the multidimensional arrays from which they were derived. The format relies on external tables to describe the meaning of the data, yet there is no authoritative place for centers to publish their local tables, methods of versioning local tables have been inconsistent and incorrect, and there are no machine-readable versions of the WMO tables (now available for GRIB-2, but not GRIB-1). GRIB 1 Header There are two parts of the GRIB 1 header: one mandatory (the Product Definition Section, PDS) and one optional (the Grid Description Section, GDS). The PDS describes who created the data (the research or operations center), the numerical model or process involved (which can be NWP or GCM), the data that is actually stored (such as wind, temperature, ozone concentration etc.), the units of the data (meters, pressure etc.), the vertical system of the data (constant height, constant pressure, constant potential temperature), and the time stamp. If a description of the spatial organization of the data is needed, the GDS must be included as well. This information includes whether the data are spectral (harmonics of divergence and vorticity) or gridded (Gaussian, X-Y grid), the horizontal resolution, and the location of the origin. Software Applications A number of application software packages have been written which make use of GRIB files. These range from command line utilities to graphical visualisation packages. 
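Because every GRIB message begins with a fixed indicator section, octets 1–4 of which are the ASCII characters "GRIB" and octet 8 of which holds the edition number in both editions, a file can be sanity-checked with only a few lines of code before a full decoder is brought in. The following C sketch illustrates this; the input file name sample.grb is hypothetical:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char sec0[8];                 /* indicator section, octets 1-8 */
    FILE *f = fopen("sample.grb", "rb");   /* hypothetical input file */

    if (f == NULL || fread(sec0, 1, 8, f) != 8) {
        fprintf(stderr, "cannot read indicator section\n");
        return 1;
    }
    fclose(f);
    if (memcmp(sec0, "GRIB", 4) != 0) {    /* octets 1-4 must read "GRIB" */
        fprintf(stderr, "not a GRIB message\n");
        return 1;
    }
    printf("GRIB edition %d\n", (int)sec0[7]);  /* octet 8: 1 or 2 */
    return 0;
}

The packages listed below go well beyond such a check and decode the complete header and data sections.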
ATMOGRAPH ModelVis Commercial numerical weather model data visualization software capable of decoding and displaying both GRIB 1 and GRIB 2 data formats ArcGIS Market-leading GIS software Expedition – a sailing navigation and weather application; GRIB display and download from many sources is free. cfgrib is a GRIB parsing tool for Python, based on ecCodes from ECMWF. WGRIB Command-line program to manipulate, inventory and decode GRIB 1 files GrADS, a free command-line desktop application that directly handles GRIB 1 and GRIB 2 files Picogrib GRIB 1 C-language (FORTRAN callable) free decoding package compatible to some extent with the ECMWF GRIBEX routine NCEP codes free software (C and FORTRAN library) for decoding and encoding data in GRIB 1 format NCEP codes free software (C and FORTRAN library) for decoding and encoding data in GRIB 2 format (some templates only) JGrib - a free library for reading GRIB files in Java. Meteosatlib - a free software C++ library and set of tools to convert satellite images between various formats; it can read and write GRIB data, and its GRIB encoding/decoding library can be used standalone. Mathematica, a general mathematical, statistical, and presentation application, directly handles GRIB files and can map them with many projections The NCAR Command Language can be used to read, analyze and visualize GRIB data, as well as convert it to other gridded data formats. PyNIO is a Python programming language module that allows read and/or write access to a variety of data formats using an interface modelled on netCDF. degrib (AKA NDFD GRIB2 Decoder) is a reader for GRIB 1 and GRIB 2 files. wgrib2 is a reader for GRIB 2 files. GRIB API is an API developed at ECMWF to decode and encode GRIB edition 1 and 2 data. Note: this package has now been replaced by ecCodes, which is a superset of GRIB API. A useful set of command line tools is also included. ECMWF also offers the plotting package Magics and the Metview workstation/batch system to handle and visualise GRIB files. QGIS - graphical open-source software that can visualise GRIB files. Ugrib – a free graphical GRIB viewer designed for reading GRIB 1 files. The website GRIB.US also aims to provide education on the prudent and safe use of GRIB data for forecasting weather; the link is not working as of 20 May 2017. SmartMet - a Windows tool that reads, writes and visualises GRIB data. Xconv/Convsh – Xconv is a graphical tool for displaying and converting gridded data, and is available for most operating systems. Convsh is the command-line equivalent. The NetCDF-Java Common Data Model is a Java library that can read GRIB 1 and GRIB 2 files. zyGrib, a graphical program for Linux, Mac OS X and Windows (GPL3, Qt) to download and display GRIB 1 and GRIB 2 (since v8.0) files. XyGrib started as a fork of zyGrib 8.0.1 and is also multiplatform. GDAL, a popular open source reading and writing library for geospatial data PredictWind Offshore App A multi-platform app designed for boats heading offshore with a need to download forecast GRIB data over a satellite or SSB connection. LuckGrib an app available on macOS, iOS and iPadOS, designed for sailors and other weather enthusiasts. LuckGrib provides easy access to many GRIB weather models. In addition, several ocean current and wave models are provided. Data can be downloaded via the internet, satellite or email. 
PyGrib A Python extension module which allows one to read and write GRIB 1 and GRIB 2 formats. PolarView A navigation application that includes a GRIB viewer, supporting both GRIB 1 and GRIB 2. PolarView includes a GRIB download service for GFS (wind/atmospheric pressure), NWW3 (wave height/direction) and RTOFS (Atlantic currents) data from NOAA. Available for Linux, Mac and Windows. OpenCPN Open-source chart plotter / marine navigator, for day-to-day cruising or advance route planning. (Note: GRIB support has been available since version 1.3.5 beta) CDO (Climate Data Operators) is an analysis tool for geoscientific data with GRIB support IDV is a meteorologically oriented, platform-independent application for visualization and analysis of GRIB 1, GRIB 2 and NetCDF files. SoftwareOnBoard A marine navigation application for Windows that includes GRIB overlays on the chart. GribAE A freeware Windows interface for WGRIB. qtVlm free software for Linux, Windows, macOS, Android, Raspberry Pi and iOS, with an interface to GPS and routing functions (plus an interface to the virtual sailing game VLM) PyNDFD an open source Python module for retrieving real-time forecast data from the US National Weather Service. GRIB-formatted data is cached and parsed to give the developer access to dozens of up-to-date weather forecast variables. Data is available for the next 7 days for any coordinate within the United States. Weather4D This application processes GRIB files (about 35 weather/wave/current models available) to create weather forecasts which can be animated in 3D HD. The "Routing" version also provides weather routing capabilities based on selected models and polar data, and location management. The "Routing & Navigation" version adds navigation features such as an NMEA interface, AIS, nautical charts, instrument panels, and track recording. glgrib This application displays GRIB 2 fields with OpenGL: raster, contour, vector, colorbar, map scale, coastlines and borders, on lat/lon, Lambert and Gaussian grids. It is possible to look at fields interactively (move, zoom, etc.). High-resolution fields (2.5 km and 1.25 km global) have been displayed using glgrib. Mobile Apps iOS Several iOS apps support the GRIB format, including: iGrib PocketGrib WeatherTrack Weather4D PredictWind Offshore App LuckGrib qtVlm mazu Android Several Android apps support the GRIB format, including: mobileGRIB PocketGrib qtVlm SailGrib Weather4D PredictWind Offshore App See also Common Data Format (CDF) Hierarchical Data Format (HDF) NetCDF PP-format Global Forecast System GrADS References External links WMO manual on Codes No 306 Tables extracted from the Manual on Codes, Volume I.2 GRIB Edition 1 GRIB Edition 2 (01/2003) GRIB Edition 2 (binary only 11/2003) GRIB data of Environment Canada NCEP Office Note 388, GRIB1 What is GRIB API? What is ecCodes? On the suitability of BUFR and GRIB for archiving data Meteorological data and networks Earth sciences data formats
20560
https://en.wikipedia.org/wiki/Macro%20%28computer%20science%29
Macro (computer science)
A macro (short for "macro instruction", from Greek combining form "long, large") in computer science is a rule or pattern that specifies how a certain input should be mapped to a replacement output. Applying a macro to an input is macro expansion. The input and output may be a sequence of lexical tokens or characters, or a syntax tree. Character macros are supported in software applications to make it easy to invoke common command sequences. Token and tree macros are supported in some programming languages to enable code reuse or to extend the language, sometimes for domain-specific languages. Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. (Thus, they are called "macros" because a "big" block of code can be expanded from a "small" sequence of characters.) Macros often allow positional or keyword parameters that dictate what the conditional assembler program generates and have been used to create entire programs or program suites according to such variables as operating system, platform or other factors. The term derives from "macro instruction", and such expansions were originally used in generating assembly language code. Keyboard and mouse macros Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to transform into other, usually more time-consuming, sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders. During the 1980s, macro programs – originally SmartKey, then SuperKey, KeyWorks, Prokey – were very popular, first as a means to automatically format screenplays, then for a variety of user input tasks. These programs were based on the TSR (terminate and stay resident) mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications such as word processors and spreadsheets, making it possible to create application-sensitive keyboard macros. Keyboard macros can be used in massively multiplayer online role-playing games (MMORPGs) to perform repetitive, but lucrative tasks, thus accumulating resources. As this is done without human effort, it can skew the economy of the game. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and their administrators spend considerable effort to suppress them. Application macros and scripting Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros. They are created by carrying out the sequence once and letting the application record the actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application may also exist. The programmers' text editor, Emacs, (short for "editing macros") follows this idea to a conclusion. In effect, most of the editor is made of macros. Emacs was originally devised as a set of macros in the editing language TECO; it was later ported to dialects of Lisp. Another programmers' text editor, Vim (a descendant of vi), also has an implementation of keyboard macros. 
It can record into a register (macro) what a person types on the keyboard, and the recording can be replayed or edited, much like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript to create macros. Visual Basic for Applications (VBA) is a programming language included in Microsoft Office from Office 97 through Office 2019 (although it was available in some components of Office prior to Office 97). However, its function has evolved from, and replaced, the macro languages that were originally included in some of these applications. XEDIT, running on the Conversational Monitor System (CMS) component of VM, supports macros written in EXEC, EXEC2 and REXX, and some CMS commands were actually wrappers around XEDIT macros. The Hessling Editor (THE), a partial clone of XEDIT, supports Rexx macros using Regina and Open Object Rexx (ooRexx). Many common applications, and some on PCs, use Rexx as a scripting language. Macro virus VBA has access to most Microsoft Windows system calls and executes when documents are opened. This makes it relatively easy to write computer viruses in VBA, commonly known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. However, since the late 1990s Microsoft has been patching and updating its programs, and current anti-virus programs immediately counteract such attacks. Parameterized macro A parameterized macro is a macro that is able to insert given objects into its expansion. This gives the macro some of the power of a function. As a simple example, in the C programming language, this is a typical macro that is not a parameterized macro: #define PI 3.14159 This causes PI to always be replaced with 3.14159 wherever it occurs. An example of a parameterized macro, on the other hand, is this: #define pred(x) ((x)-1) What this macro expands to depends on what argument x is passed to it. Here are some possible expansions: pred(2) → ((2)-1) pred(y+2) → ((y+2)-1) pred(f(5)) → ((f(5))-1) Parameterized macros are a useful source-level mechanism for performing in-line expansion, but in languages such as C, where they use simple textual substitution, they have a number of severe disadvantages over other mechanisms for performing in-line expansion, such as inline functions (the sketch at the end of the section on syntactic macros below illustrates the problem). The parameterized macros used in languages such as Lisp, PL/I and Scheme, on the other hand, are much more powerful, able to make decisions about what code to produce based on their arguments; thus, they can effectively be used to perform run-time code generation. Text-substitution macros Languages such as C and some assembly languages have rudimentary macro systems, implemented as preprocessors to the compiler or assembler. C preprocessor macros work by simple textual substitution at the token level, rather than the character level. However, the macro facilities of more sophisticated assemblers, e.g., IBM High Level Assembler (HLASM), cannot be implemented with a preprocessor; the code for assembling instructions and data is interspersed with the code for assembling macro invocations. A classic use of macros is in the computer typesetting system TeX and its derivatives, where most of the functionality is based on macros. MacroML is an experimental system that seeks to reconcile static typing and macro systems. Nemerle has typed syntax macros, and one productive way to think of these syntax macros is as a multi-stage computation. Other examples: m4 is a sophisticated stand-alone macro processor. 
TRAC Macro Extension TAL, accompanying Template Attribute Language SMX: for web pages ML/1 (Macro Language One) The General Purpose Macroprocessor is a contextual pattern matching macro processor, which could be described as a combination of regular expressions, EBNF and AWK SAM76 troff and nroff: for typesetting and formatting Unix manpages. CMS EXEC: for command-line macros and application macros EXEC 2 in Conversational Monitor System (CMS): for command-line macros and application macros CLIST in IBM's Time Sharing Option (TSO): for command-line macros and application macros REXX: for command-line macros and application macros in, e.g., AmigaOS, CMS, OS/2, TSO SCRIPT: for formatting documents Various shells for, e.g., Linux Some major applications have been written as text macros invoked by other applications, e.g., by XEDIT in CMS. Embeddable languages Some languages, such as PHP, can be embedded in free-format text, or the source code of other languages. The mechanism by which the code fragments are recognised (for instance, being bracketed by <?php and ?>) is similar to a textual macro language, but they are much more powerful, fully featured languages. Procedural macros Macros in the PL/I language are written in a subset of PL/I itself: the compiler executes "preprocessor statements" at compilation time, and the output of this execution forms part of the code that is compiled. The ability to use a familiar procedural language as the macro language gives power much greater than that of text-substitution macros, at the expense of a larger and slower compiler. Frame technology's frame macros have their own command syntax but can also contain text in any language. Each frame is both a generic component in a hierarchy of nested subassemblies and a procedure for integrating itself with its subassembly frames (a recursive process that resolves integration conflicts in favor of higher-level subassemblies). The outputs are custom documents, typically compilable source modules. Frame technology can avoid the proliferation of similar but subtly different components, an issue that has plagued software development since the invention of macros and subroutines. Most assembly languages have less powerful procedural macro facilities, for example allowing a block of code to be repeated N times for loop unrolling; but these have a completely different syntax from the actual assembly language. Syntactic macros Macro systems, such as the C preprocessor described earlier, that work at the level of lexical tokens cannot preserve the lexical structure reliably. Syntactic macro systems work instead at the level of abstract syntax trees, and preserve the lexical structure of the original program. The most widely used implementations of syntactic macro systems are found in Lisp-like languages. These languages are especially suited for this style of macro due to their uniform, parenthesized syntax (known as S-expressions). In particular, uniform syntax makes it easier to determine the invocations of macros. Lisp macros transform the program structure itself, with the full language available to express such transformations. While syntactic macros are often found in Lisp-like languages, they are also available in other languages such as Prolog, Erlang, Dylan, Scala, Nemerle, Rust, Elixir, Nim, Haxe, and Julia. They are also available as third-party extensions to JavaScript, C# and Python. 
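The fragility of token-level substitution that syntactic macros are designed to avoid can be shown with a short C sketch (the macro names here are illustrative only): because the preprocessor pastes tokens without regard for operator precedence or side effects, even a carefully parenthesized macro can misbehave.

#include <stdio.h>

#define DOUBLE_BAD(x) x + x        /* unparenthesized textual substitution */
#define DOUBLE_OK(x)  ((x) + (x)) /* defensively parenthesized */

int main(void)
{
    /* 3 * DOUBLE_BAD(2) expands to 3 * 2 + 2, which is 8, not the
       intended 12, because substitution ignores operator precedence */
    printf("%d\n", 3 * DOUBLE_BAD(2)); /* prints 8  */
    printf("%d\n", 3 * DOUBLE_OK(2));  /* prints 12 */
    /* even DOUBLE_OK evaluates its argument twice: DOUBLE_OK(i++)
       would expand to ((i++) + (i++)), incrementing i twice with
       undefined behavior; a syntactic (tree-level) macro system can
       rule out both failure modes by construction */
    return 0;
}

A tree-based expander manipulates each argument as a single expression node, so neither precedence capture nor double evaluation can occur.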
Early Lisp macros Before Lisp had macros, it had so-called FEXPRs, function-like operators whose inputs were not the values computed by the arguments but rather the syntactic forms of the arguments, and whose outputs were values to be used in the computation. In other words, FEXPRs were implemented at the same level as EVAL, and provided a window into the meta-evaluation layer. This was generally found to be a difficult model to reason about effectively. In 1963, Timothy Hart proposed adding macros to Lisp 1.5 in AI Memo 57: MACRO Definitions for LISP. Anaphoric macros An anaphoric macro is a type of programming macro that deliberately captures some form supplied to the macro which may be referred to by an anaphor (an expression referring to another). Anaphoric macros first appeared in Paul Graham's On Lisp, and their name is a reference to linguistic anaphora: the use of words as a substitute for preceding words. Hygienic macros In the mid-1980s, a number of papers introduced the notion of hygienic macro expansion (syntax-rules), a pattern-based system in which the syntactic environments of the macro definition and the macro use are distinct, allowing macro definers and users not to worry about inadvertent variable capture (cf. referential transparency). Hygienic macros have been standardized for Scheme in the R5RS, R6RS, and R7RS standards. A number of competing implementations of hygienic macros exist, such as syntax-rules, syntax-case, explicit renaming, and syntactic closures. Both syntax-rules and syntax-case have been standardized in the Scheme standards. Recently, Racket has combined the notions of hygienic macros with a "tower of evaluators", so that the syntactic expansion time of one macro system is the ordinary runtime of another block of code, and showed how to apply interleaved expansion and parsing in a non-parenthesized language. A number of languages other than Scheme either implement hygienic macros or implement partially hygienic systems. Examples include Scala, Rust, Elixir, Julia, Dylan, Nim, and Nemerle. Applications Evaluation order Macro systems have a range of uses. Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros. For example, Scheme has both continuations and hygienic macros, which enable a programmer to design their own control abstractions, such as looping and early-exit constructs, without the need to build them into the language. Data sub-languages and domain-specific languages Next, macros make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient. Binding constructs Macros can also be used to introduce new binding constructs. The most well-known example is the transformation of let into the application of a function to a set of arguments. Felleisen conjectures that these three categories make up the primary legitimate uses of macros in such a system. Others have proposed alternative uses of macros, such as anaphoric macros in macro systems that are unhygienic or allow selective unhygienic transformation. The interaction of macros and other language features has been a productive area of research. 
For example, components and modules are useful for large-scale programming, but the interaction of macros and these other constructs must be defined for them to be used together. Module and component systems that can interact with macros have been proposed for Scheme and other languages with macros. For example, the Racket language extends the notion of a macro system to a syntactic tower, where macros can be written in languages including macros, using hygiene to ensure that syntactic layers are distinct and allowing modules to export macros to other modules. Macros for machine-independent software Macros are normally used to map a short string (macro invocation) to a longer sequence of instructions. Another, less common, use of macros is to do the reverse: to map a sequence of instructions to a macro string. This was the approach taken by the STAGE2 Mobile Programming System, which used a rudimentary macro compiler (called SIMCMP) to map the specific instruction set of a given computer into machine-independent macros. Applications (notably compilers) written in these machine-independent macros can then be run without change on any computer equipped with the rudimentary macro compiler. The first application run in such a context is a more sophisticated and powerful macro compiler, written in the machine-independent macro language. This macro compiler is applied to itself, in a bootstrap fashion, to produce a compiled and much more efficient version of itself. The advantage of this approach is that complex applications can be ported from one computer to a very different computer with very little effort (for each target machine architecture, just the writing of the rudimentary macro compiler). The advent of modern programming languages, notably C, for which compilers are available on virtually all computers, has rendered such an approach superfluous. This was, however, one of the first instances (if not the first) of compiler bootstrapping. Assembly language While macro instructions can be defined by a programmer for any set of native assembler program instructions, typically macros are associated with macro libraries delivered with the operating system, allowing access to operating system functions such as: peripheral access by access methods (including macros such as OPEN, CLOSE, READ and WRITE); and operating system functions such as ATTACH, WAIT and POST for subtask creation and synchronization. Typically such macros expand into executable code (e.g., for the EXIT macro instruction), into a list of define-constant instructions (e.g., for the DCB macro, or DTF (Define The File) for DOS), or into a combination of code and constants, with the details of the expansion depending on the parameters of the macro instruction (such as a reference to a file and a data area for a READ instruction); the executable code often terminated in either a branch-and-link-register instruction to call a routine, or a supervisor call instruction to call an operating system function directly. Macros are also used for generating a Stage 2 job stream for system generation in, e.g., OS/360. Unlike typical macros, sysgen stage 1 macros do not generate data or code to be loaded into storage, but rather use the PUNCH statement to output JCL and associated data. 
Assembly language While macro instructions can be defined by a programmer for any set of native assembler program instructions, typically macros are associated with macro libraries delivered with the operating system, allowing access to operating system functions such as peripheral access by access methods (including macros such as OPEN, CLOSE, READ and WRITE) and operating system services such as ATTACH, WAIT and POST for subtask creation and synchronization. Typically such macros expand into executable code (e.g., for the EXIT macroinstruction), a list of define constant instructions (e.g., for the DCB macro, or DTF (Define The File) for DOS), or a combination of code and constants, with the details of the expansion depending on the parameters of the macro instruction (such as a reference to a file and a data area for a READ instruction); the executable code often terminated in either a branch and link register instruction to call a routine, or a supervisor call instruction to call an operating system function directly. Macros were also used to generate the Stage 2 job stream for system generation in, e.g., OS/360. Unlike typical macros, sysgen stage 1 macros do not generate data or code to be loaded into storage, but rather use the PUNCH statement to output JCL and associated data. In older operating systems such as those used on IBM mainframes, full operating system functionality was only available to assembler language programs, not to high level language programs (unless assembly language subroutines were used, of course), as the standard macro instructions did not always have counterparts in routines available to high-level languages. History In the mid-1950s, when assembly language programming was commonly used to write programs for digital computers, the use of macro instructions was initiated for two main purposes: to reduce the amount of program coding that had to be written by generating several assembly language statements from one macro instruction, and to enforce program writing standards, e.g. specifying input/output commands in standard ways. Macro instructions were effectively a middle step between assembly language programming and the high-level programming languages that followed, such as FORTRAN and COBOL. Two of the earliest programming installations to develop "macro languages" for the IBM 705 computer were at Dow Chemical Corp. in Delaware and the Air Materiel Command, Ballistics Missile Logistics Office in California. A macro instruction written in the format of the target assembly language would be processed by a macro compiler, a pre-processor to the assembler, to generate one or more assembly language instructions, which the assembler program would then translate into machine language instructions. By the late 1950s the macro language was followed by the macro assembler: a single program that combined the functions of a macro pre-processor and an assembler in one package. In 1959, Douglas E. Eastwood and Douglas McIlroy of Bell Labs introduced conditional and recursive macros into the popular SAP assembler, creating what is known as Macro SAP. McIlroy's 1960 paper was seminal in the area of extending any programming language (including high-level ones) through macro processors. Macro assemblers allowed assembly language programmers to implement their own macro language and allowed limited portability of code between two machines running the same CPU but different operating systems, for example, early versions of MS-DOS and CP/M-86. The macro library would need to be written for each target machine but not the overall assembly language program. Note that more powerful macro assemblers allowed use of conditional assembly constructs in macro instructions that could generate different code on different machines or different operating systems, reducing the need for multiple libraries. In the 1980s and early 1990s, desktop PCs were only running at a few MHz and assembly language routines were commonly used to speed up programs written in C, Fortran, Pascal and others. These languages, at the time, used different calling conventions. Macros could be used to interface routines written in assembly language to the front end of applications written in almost any language. Again, the basic assembly language code remained the same; only the macro libraries needed to be written for each target language. In modern operating systems such as Unix and its derivatives, operating system access is provided through subroutines, usually supplied by dynamic libraries. High-level languages such as C offer comprehensive access to operating system functions, obviating the need for assembler language programs for such functionality.
See also Computer science and engineering Computer science References External links How to write Macro Instructions (Rochester Institute of Technology professor's PowerPoint) Programming constructs Source code Automation software
813176
https://en.wikipedia.org/wiki/AI%20takeover
AI takeover
An AI takeover is a hypothetical scenario in which an artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control. Types Automation of the economy The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis. Many small and medium-sized businesses may also be driven out of business if they cannot afford or license the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology. Technologies that may displace workers AI technologies have been widely adopted in recent years, and this trend will only continue to gain popularity given the digital transformation efforts from companies across the world. While these technologies have replaced many traditional workers, they also create new opportunities. Industries that are most susceptible to AI takeover include transportation, retail, and the military. AI military technologies, for example, allow soldiers to work remotely without any risk of injury. Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. Overall, it is safe to assume that AI will displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable. Computer-integrated manufacturing Computer-integrated manufacturing is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone with the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries. White-collar machines The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low-level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots. Autonomous cars An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry.
On March 18, 2018, the first human was killed by an autonomous vehicle in Tempe, Arizona, when an Uber self-driving car struck a pedestrian. Eradication Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains". Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips. In fiction AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see humans as a threat or otherwise have an active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals. The idea is seen in Karel Čapek's R. U. R., which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity. The word "robot" from R.U.R. comes from the Czech word robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt. HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture. Contributing factors Advantages of superhuman intelligence over humans Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive intelligence explosion where it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans: Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. Strategizing: A superintelligence might be able to simply outwit human opposition. Social manipulation: A superintelligence might be able to recruit human support, or covertly incite a war between humans.
Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the Artificial General Intelligence (AGI) to run a copy of itself on their systems. Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans. Sources of AI advantage According to Bostrom, a computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light. A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence". More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above humans in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms. Possibility of unfriendly AI preceding friendly AI Is strong AI inherently dangerous? A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo instrumental convergence in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would naturally have such a common-sense adaptation.
Odds of conflict Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans. The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and needed to compete over resources—would that create goals of self-preservation? An AI's goal of self-preservation could be in conflict with some goals of humans. Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence. In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off. Precautions The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI. Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligent AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.
Warnings Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race". Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today." See also Artificial intelligence arms race Autonomous robot Industrial robot Mobile robot Self-replicating machine Cyberocracy Effective altruism Existential risk from artificial general intelligence Future of Humanity Institute Global catastrophic risk (existential risk) Government by algorithm Human extinction Machine ethics Machine learning/Deep learning Nick Bostrom Outline of transhumanism Self-replication Technological singularity Intelligence explosion Superintelligence Superintelligence: Paths, Dangers, Strategies References External links Automation, not domination: How robots will take over our world (a positive outlook of robot and AI integration into society) Machine Intelligence Research Institute: official MIRI (formerly Singularity Institute for Artificial Intelligence) website Lifeboat Foundation AIShield (To protect against unfriendly AI) TED talk: Can we build AI without losing control over it? Doomsday scenarios Future problems Existential risk from artificial general intelligence
462365
https://en.wikipedia.org/wiki/Bullfrog%20Productions
Bullfrog Productions
Bullfrog Productions Limited was a British video game developer based in Guildford, England. Founded in 1987 by Peter Molyneux and Les Edgar, the company gained recognition in 1989 for their third release, Populous, and is also well known for titles such as Theme Park, Magic Carpet, Syndicate and Dungeon Keeper. Bullfrog's name was derived from an ornament in the offices of Edgar's and Molyneux's other enterprise, Taurus Impact Systems, Bullfrog's precursor where Molyneux and Edgar were developing business software. Bullfrog Productions was founded as a separate entity after Commodore mistook Taurus for a similarly named company. Electronic Arts, Bullfrog's publisher, acquired the studio in January 1995. Molyneux had become an Electronic Arts vice-president and consultant in 1994, after EA purchased a significant share of Bullfrog. Molyneux's last project with Bullfrog was Dungeon Keeper, and as a result of his dissatisfaction with the corporate aspects of his position, he left the company in July 1997 to found Lionhead Studios. Others would follow him to Lionhead, and some founded their own companies, such as Mucky Foot Productions. After Molyneux's departure, Electronic Arts' control over Bullfrog caused several projects to be cancelled. Bullfrog was merged into EA UK in 2001 and ceased to exist as a separate entity. Bullfrog titles have been looked upon as a standard for comparison and have spawned numerous spiritual sequels. History Background, founding, and early years (1982–1989) In 1982, entrepreneur Peter Molyneux met Les Edgar at an audio electronics shop called PJ Hi-Fi. When Molyneux left the company where he was working, Edgar suggested that they start a new one, which would later develop business software for the Commodore 64 as Taurus Impact Systems (also known as Taurus Software). The new company was named after Molyneux and Edgar's shared astrological sign, Taurus. At some point, Molyneux accepted a deal to export money systems to Switzerland and baked beans to the Middle East. One day, Taurus received a call from the head of Commodore Europe, wanting to discuss the future of the Amiga and the suitability of Taurus' software for the system. Molyneux was invited to Commodore Europe's headquarters, where he was offered several Amiga systems and a space at a show in Germany. When Molyneux was told that they were anticipating getting his network running on the Amiga, he realised that they had mistaken his company for one called Torus, a producer of networking systems. Molyneux wanted the Amiga systems, so he did not inform Commodore of this error. He received them and began writing a database program called Acquisition. Commodore kept asking about the database, and Molyneux gave them excuses because they were threatening to shut Taurus down. When Acquisition was finished, it was shown at the exhibition in Germany, and won product of the year. Some 2,000 copies were sold to a company in the United States, giving Molyneux and Edgar funds to sustain Taurus. Another program Taurus wrote was a computer-aided design (CAD) package called X-CAD. They knew the Amiga was becoming a gaming machine, and a friend of Molyneux's asked him to convert Druid II: Enlightenment from the Commodore 64 to the Amiga. According to Edgar, it was around this time that Bullfrog was founded in preparation for the day when Acquisition was no longer important and they could focus on games.
Bullfrog was originally a brand of Taurus; Molyneux explained that this was because they wanted to avoid confusion over business software and money-making opportunities. The name came from an ornament of a bullfrog located in the office: when asked by Joystick why the name "Bullfrog" was chosen, Molyneux stated that they wanted "an idiotic name" without having to find one, and there happened to be a sculpture of a colourful frog on a pedestal labelled "Bull Frog by Leonardo" on the table. Afterwards, Molyneux and Edgar were running out of money, and Edgar suggested they close the company down. It was at this point that Molyneux came up with the idea of Populous. The conversion of Druid II: Enlightenment, Populous, and a shoot 'em up game called Fusion were the first games developed under the Bullfrog brand. Early success (1989–1995) Populous was difficult to publish at first due to a lack of recognition—the god genre was, according to Bullfrog, "misunderstood by everyone". Despite this, Electronic Arts was willing to publish the game. Molyneux did not expect it to be successful, yet in 1989, the game received 10 awards, and another 12 the following year, with sales reaching one million copies. It ultimately sold four million copies. Edgar took note of the game's success and gave developers such as Imagineer licences to create ports for platforms such as the Super Nintendo Entertainment System (SNES) and Sega Mega Drive, which enabled the game to gain traction in Japan. After Populous, Bullfrog moved into the Surrey Research Park and had around 20 employees. Bullfrog was starting to gain a reputation, and people began to seek work at the company. Molyneux searched for staff himself, and employed artists and programmers. He travelled to universities, including Cambridge, where he offered computer scientists who might otherwise have gone to banks the chance to come to the gaming industry. Bullfrog's Powermonger was developed as a result of pressure from Electronic Arts for a follow-up to Populous, and was released in 1990. The game won multiple Best Strategy Game awards, including one from Computer Gaming World (as did Populous). The direct sequel to Populous, Populous II: Trials of the Olympian Gods, was released the following year and sold over a million copies. In late 1993, Bullfrog worked with researchers from the nearby University of Surrey to study the movement and behaviour of underwater life so Bullfrog could reproduce it in the game Creation. By the mid-1990s, Bullfrog had become well known for innovation and quality. A 1995 article in GamePro stated that "Bullfrog's work has been termed some of the most innovative by industry leaders, and it's pioneered different genres of software." The same year, Next Generation similarly asserted that "Bullfrog has earned a reputation as one of the most consistently innovative and imaginative development teams in the world." In July 1995, Edge stated that Bullfrog had "an unparalleled reputation for quality and innovation", and by that year, Bullfrog were "rightly considered one of the most innovative in the world", according to GamesTM. In 1994, three games were in development: Creation, Theme Park, and Magic Carpet. Bullfrog focused on implementing multiplayer in all three games; Molyneux believed that multiplayer was more important than the compact disc (CD) format.
Theme Park and Magic Carpet were released that year, the latter being the best-selling CD game that Christmas and winning Game of the Year awards in the United Kingdom and Germany. Theme Park proved popular in Japan and was a best-seller in Europe. During the development of Theme Park, artist Gary Carr left Bullfrog following a disagreement with Molyneux on the game: Molyneux wanted gaily coloured graphics that would appeal to the Japanese market, but Carr disapproved, believing it would not work. Carr joined The Bitmap Brothers, returning to Bullfrog in 1995 to work on Dungeon Keeper, although he ended up working as the lead artist on Theme Hospital instead. In November 1994, Bullfrog began development for Dungeon Keeper. By then, the company had been approached many times to develop games around film licences. McDonald's approached Bullfrog at some point for a joint game venture. By mid-1995, Bullfrog was focused on artificial intelligence (AI) and had a dedicated AI team working at its offices. Two AI techniques, Personality Mapping and Skeletal Mapping, were developed. Acquisition by Electronic Arts and Molyneux's departure (1995–1998) According to Edgar, Bullfrog began merger talks with Electronic Arts in 1993. To get the best deal, he believed Bullfrog should also talk with other companies such as Sony and Virgin. He explained that Electronic Arts was the obvious choice as Bullfrog already had a positive relationship with them. According to Molyneux, Bullfrog received numerous offers expressing interest in purchasing the company. The offers were not taken seriously until major companies, such as Electronic Arts and Philips, made contact; it was then thought that the acquisition by one of these companies would be inevitable. Bullfrog was bought by Electronic Arts in early January 1995. By this time, the studio's staff count had risen from 35 to 60 and the acquisition allowed it to grow to 150 people within months. Molyneux became a vice-president of Electronic Arts and head of their European branch. Edgar became the vice-president of the European branch and Bullfrog's chairman. He described Bullfrog becoming part of a multinational company as "a very big change" and worked for Electronic Arts to assist with the transition. Although Molyneux had said that Bullfrog's products would not suffer as a result of Electronic Arts' purchase, the number of games in development meant that there was less time to refine them (despite the company's growth rate), affecting their quality. After the release of Magic Carpet in 1994, seven games were in development: Magic Carpet 2, Theme Hospital, The Indestructibles, Syndicate Wars, Gene Wars, Creation, and Dungeon Keeper. After Electronic Arts' purchase, Molyneux was told to release a game, namely Magic Carpet 2 or Dungeon Keeper, within six weeks. Neither was near completion, so to appease Electronic Arts, Hi-Octane was created. It had a rushed development and no name by July 1995. Molyneux explained that Bullfrog's games were normally original, and they were not concerned about them being copied, but the project was "a little derivative", which was why it was kept secret—even Edgar was not informed of the project at first. Around this time, Bullfrog had a reputation for having largely ignored 16-bit game consoles, and Syndicate Wars was the company's first title originally developed for a console—the PlayStation. 
As Molyneux had been made vice-president of Electronic Arts, his corporate role and responsibility increased considerably and he began making frequent trips to San Francisco. Over time, he grew increasingly frustrated with the position and wished to return to game development. In July 1996, Molyneux decided to resign from Bullfrog to focus on game design, rather than become a mere employee. In response, Electronic Arts banned him from its offices, forcing him to move development of Dungeon Keeper to his house. Molyneux speculated that this was because Electronic Arts feared that he would take people with him. He decided to leave as soon as Dungeon Keeper was finished and commented: "My last day will be the day that this game goes into final test. I'm very, very, very sad, but also very relieved." He also said that Electronic Arts had been "unbelievably patient" and thanked vice-president Mark Lewis for campaigning for Dungeon Keeper's completion. Molyneux's planned departure was his motivation to make Dungeon Keeper good. He believed that he would enjoy being an executive but said that it was "an utter nightmare". Shortly after his departure, Molyneux said he still had feelings for Bullfrog and wished them success. Despite his dissatisfaction with the corporate aspects of being vice-president, Molyneux said that he had learned "an enormous amount". In 2017, he revealed that his resignation was the consequence of his, and technical director Tim Rance's, drunkenness. He said he would take his resignation email back if he could. Around this time, Electronic Arts increased its control over Bullfrog. Mark Healey (the lead artist for Dungeon Keeper) stated that the company "felt more like a chicken factory" after Electronic Arts' takeover and compared it to being assimilated by the Borg. Glenn Corpes (an artist for Fusion and Populous) stated that he was not surprised at Molyneux's departure. Another employee believed that working for Bullfrog had become "a job" and that the company had lost its innovation. In 2008, Electronic Arts' president John Riccitiello corroborated these sentiments by admitting that their "dictatorial managerial approach" had suppressed Bullfrog's creativity. Sean Cooper (the designer of Syndicate) said that if he could travel back in time, he would probably force Molyneux to refrain from selling Bullfrog to Electronic Arts. He described the period of resignations following Molyneux's departure as "such a horrible time". Molyneux believed that Electronic Arts had good intentions for Bullfrog, saying that "they just wanted to make it nicer" and putting the company's effects on Bullfrog down to "love abuse". When Dungeon Keeper was nearing its completion in 1997, Molyneux, Rance, and Mark Webley (the project leader for Theme Hospital) founded a new company, Lionhead Studios, that July. By the time the studio's first game, Black & White, was released, Bullfrog employees such as Healey, Andy Bass (an artist who had worked on Theme Hospital), Russell Shaw (the composer for various titles), James Leach (Bullfrog's script writer), Paul McLaughlin (who worked on Creation), and Jonty Barnes (a programmer who had worked on Dungeon Keeper) had joined Lionhead. Healey stated that, because of his dissatisfaction at Bullfrog, he was happy to follow Molyneux and became Lionhead's first artist.
Also in 1997, Mike Diskett (the project leader, lead programmer, and lead designer of Syndicate Wars), Fin McGechie (the lead artist for Magic Carpet), and Guy Simmons left to found Mucky Foot Productions, with Carr joining them the following year. Other notable people at Bullfrog around the mid-1990s include Simon Carter (the lead programmer for Dungeon Keeper), Richard Reed (the project leader for Gene Wars), Mike Man (the lead artist for Syndicate Wars), Alan Wright (the project leader and lead programmer for Magic Carpet 2), and Eoin Rogan (the lead artist for Magic Carpet 2). Post-Molyneux, final years, and closure (1998–2001) In 1998, two games were released: Theme Aquarium and Populous: The Beginning. Theme Aquarium was an attempt to "cross barriers" between the United Kingdom and Japan. Edgar explained that Bullfrog was more successful than most western game developers in Japan due to Populous and Theme Park, and wondered about the possibilities of having a game designed in the United Kingdom and implemented in Japan by Japanese development teams. A small group was set up to do this. Theme Aquarium was released as a Theme game in Japan only; western releases removed the Bullfrog branding. As of 2012, many ex-Bullfrog employees were unfamiliar with the game. Shortly before Molyneux's departure, Bullfrog announced that the games then in development might be the final ones released for MS-DOS. It was "quite likely" that all future games would be Windows-only. The reason for the change in platform focus was so Bullfrog could create games with Windows in mind and use "powerful features" (such as 3D acceleration), which were difficult to use with MS-DOS. In 1999, Theme Park World and Dungeon Keeper 2 were released. Most of Theme Park World's development team came from Mindscape—they were brought to Bullfrog wholesale. Bullfrog worked with its sister company Maxis to release Theme Park World in North America under their Sim brand as Sim Theme Park to further establish itself in the region. Theme Resort, a Theme game based around holiday islands, was cancelled and its team reallocated to Theme Park World. Dungeon Keeper 2 had a new development team led by Nick Goldsworthy, previously an assistant producer for Theme Park at Electronic Arts. During the development, Colin Robinson was interviewed for the role of Bullfrog's chief technical officer, and helped the project succeed. In 2016, Glenn Corpes speculated that Electronic Arts did not understand Molyneux's role at Bullfrog and thought he was in charge of everything, and that Electronic Arts' response to his departure would be to install managers. In fact, he focused on one game at a time, and let others carry out their work. In mid-1999, Edgar stepped down as chairman. He was succeeded as managing director by Bruce McMillan of Electronic Arts' Canadian studios. Corpes left to found the studio Lost Toys with Jeremy Longley (who had worked on Theme Hospital, Syndicate Wars, and Populous III) and Darran Thomas (who had worked on Dungeon Keeper and Magic Carpet 2, and was the lead artist on Theme Park World), which Edgar supported financially. Corpes stated that he was inspired by Mucky Foot Productions running its own affairs and that it was "quite embarrassing to still be working for the Borg". He also said that Lost Toys was partially his take on what Bullfrog was.
Alex Trowers (a designer who had worked on Syndicate and Powermonger) believed that Bullfrog had become too corporate after Electronic Arts' takeover and left for Lost Toys to return to "making games for the sake of making games", rather than to satisfy shareholders. In August 1999, Electronic Arts appointed Ernest Adams as the lead designer of the fourth instalment in the Populous series, Genesis: The Hand of God. Bullfrog's management had concerns about its similarity to Lionhead Studios' Black & White and cancelled the project. Adams then became the lead designer on Dungeon Keeper 3. As Dungeon Keeper 2 did not perform as well as hoped, the team were instructed to make the third game more accessible. Development began in November 1999, but Electronic Arts' focus was changing. It was in negotiation with J. K. Rowling and New Line Cinema for licences to Harry Potter and The Lord of the Rings, respectively. Electronic Arts saw a profitable opportunity and, in March 2000, cancelled Dungeon Keeper 3 in favour of those franchises, although its cancellation was not officially announced until August. Bullfrog moved to Chertsey in 2000 and went through "a quiet patch" for the remainder of the year. The final game under the Bullfrog brand, Theme Park Inc, was released in 2001. By the time the game was in development, most of the Bullfrog teams had become part of EA UK and much of the development was handled by another company. What remained of Bullfrog was then merged into EA UK. Molyneux stayed with Lionhead Studios until the formation of 22cans in 2012. Edgar remained involved with the gaming industry for some time after Bullfrog but eventually left for the automotive industry. In August 2009, Electronic Arts was considering reviving some of Bullfrog's games for modern-day systems. Legacy Several employees founded their own companies after leaving Bullfrog. These include: Lionhead Studios – Founded by Peter Molyneux, Mark Webley, and Tim Rance (as well as Steve Jackson, the co-founder of Games Workshop and co-author of the Fighting Fantasy books), Lionhead is best known for their Black & White and Fable series. The company was acquired by Microsoft and closed down on 29 April 2016. Mucky Foot Productions – Founded by Mike Diskett, Fin McGechie, and Guy Simmons. Gary Carr joined shortly afterwards. A deal with Eidos Interactive was signed and Mucky Foot Productions developed three games: Urban Chaos, Startopia, and Blade II. The company closed in 2003. Lost Toys – Founded by Glenn Corpes, Jeremy Longley, and Darran Thomas. The studio created two games—Ball Breakers/Moho and Battle Engine Aquila—before shutting down. Media Molecule – Best known for LittleBigPlanet, Media Molecule was established by Mark Healey, Alex Evans, Dave Smith, and Kareem Ettouney. Intrepid Computer Entertainment – This company was started by Joe Rider and Matt Chilton, and signed by Microsoft as a first-party developer. Intrepid closed in 2004, and its employees moved to Lionhead Studios. Big Blue Box Studios – Founded by Bullfrog programmers Simon and Dene Carter, and Ian Lovett (who worked on Magic Carpet and Dungeon Keeper), Big Blue Box Studios were "very close" to Lionhead Studios, and the two companies merged. 22cans – Founded in 2012 by Molyneux after he left Lionhead. 22cans is known for Godus, which took inspiration from Populous and Dungeon Keeper, as well as Lionhead's Black & White. Two Point Studios – Founded in 2016 by Gary Carr and Mark Webley, Two Point Studios signed a publishing deal with Sega in May 2017.
Several Bullfrog games have spawned spiritual successors or have been used as a base for comparison. Dungeon Keeper has influenced War for the Overworld and Mucky Foot's Startopia, the former being described as "a true spiritual successor to Dungeon Keeper". DR Studios' Hospital Tycoon has been compared to Theme Hospital. Satellite Reign (programmed by Mike Diskett) has been labelled a spiritual successor to the Syndicate series. Two Point Hospital, developed by Two Point Studios, is considered to be a spiritual successor to Theme Hospital. In October 2013, Jeff Skalski of Mythic Entertainment, which produced a free-to-play remake of Dungeon Keeper for mobile platforms, said he would like to remake other Bullfrog titles, and described the company as "unstoppable". Theme Park also received a freemium remake in December 2011. Games developed Cancelled projects Bullfrog cancelled several projects. According to Molyneux, the most common reason games were abandoned in the company's earlier days was that the game testers did not like them; that being the case, his theory was that customers would not like them either. Cancelled games include: Ember – Players would have piloted a speeder craft to repair a microprocessor chip, competing against a rival trying to undo the player's repairs. Colony – An arcade-adventure-puzzle game in which players would have attempted to save the passengers and crew of a cryogenic ship by repairing the cryogenic suspension system, using video cameras to monitor activity on the ship. Hell – A scrolling shoot 'em up set in the underworld and based on Joust. The Indestructibles – Described as "an action-beat-'em-up-strategy-everything game", The Indestructibles would have involved creating superhumans to defend cities from invaders. Creation – Set in the same reality as Syndicate, Creation would have had the player battling to transform an alien water world. Void Star – This was to be a 3D real-time strategy game set in space, but was cancelled because it was believed that there would be no interest in the concept. Theme Resort, Theme Prison, Theme Ski Resort, and Theme Airport – These were "talked about" after the release of Theme Hospital but never materialised due to Mark Webley and Gary Carr leaving for other companies. Theme Resort was in development (according to Webley, its team were trying to have a trip to Club Med for research), and its team joined Theme Park World after cancellation. Webley stated that Bullfrog intended to explore other possibilities for its Designer Series (of which Theme Park and Theme Hospital are part), but Electronic Arts had it shut down. Genesis: The Hand of God – Intended to be the next instalment in the Populous series but cancelled due to similarities to Lionhead's Black & White. Dungeon Keeper 3 – The project was cancelled in favour of film franchises such as Harry Potter and The Lord of the Rings. Bullfrog also decided to cease developing real-time strategy games. Theme Movie Studio – Did not make it past the concept stage. References External links British subsidiaries of foreign companies Electronic Arts Companies based in Guildford Defunct companies based in Surrey Video game companies established in 1987 Video game companies disestablished in 2001 British companies established in 1987 British companies disestablished in 2001 1995 mergers and acquisitions Defunct video game companies of the United Kingdom Video game development companies 1987 establishments in England 2001 disestablishments in England
9370543
https://en.wikipedia.org/wiki/SXAL/MBAL
SXAL/MBAL
In cryptography, SXAL (Substitution Xor ALgorithm, sometimes called SXAL8) is a block cipher designed in 1993 by Yokohama-based Laurel Intelligent Systems. It is normally used in a special mode of operation called MBAL (Multi Block ALgorithm). SXAL/MBAL has been used for encryption in a number of Japanese PC cards and smart cards. SXAL is an 8-round substitution–permutation network with block size and key size of 64 bits each. All operations are byte-oriented. The algorithm uses a single 8×8-bit S-box K, designed so that both K(X) and X XOR K(X) are injective functions. In each round, the bytes of the block are first permuted. Then each byte is XORed with a key byte and an earlier ciphertext byte, processed through the S-box, and XORed with the previous plaintext byte. The key schedule is rather complex, processing the key with SXAL itself, beginning with a null key and using permuted intermediate results as later keys. MBAL MBAL is an encryption algorithm built using SXAL that can be applied to messages any number of bytes in length (at least 8). It uses two 64-bit extended keys for key whitening on the first 64 bits. The algorithm consists of 9 steps: pre-whitening; Fm, an expanded version of SXAL applied to the entire message; SXAL applied to the block consisting of the first 4 and last 4 bytes; reversal of the byte order of the entire message; Fm; reversal again; SXAL applied to the ends; Fm; post-whitening. MBAL has been shown to be susceptible to both differential cryptanalysis and linear cryptanalysis.
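The per-round data flow described above can be sketched schematically. The following Rust fragment shows only the shape of a round (permute the bytes, XOR with key and feedback bytes, substitute, XOR again); the permutation table, S-box, feedback wiring, and all-zero key below are placeholders invented for illustration, not the real SXAL definitions, which are given in the register entry cited under External links.

// Placeholder S-box; the real 8x8-bit table K is additionally designed
// so that both K(x) and x XOR K(x) are injective.
fn sbox(x: u8) -> u8 {
    x.rotate_left(3) ^ 0xA5
}

// Placeholder byte permutation, not the real SXAL ordering.
const PERM: [usize; 8] = [3, 0, 5, 2, 7, 4, 1, 6];

fn round(block: &mut [u8; 8], round_key: &[u8; 8]) {
    // 1. Permute the eight bytes of the block.
    let permuted: [u8; 8] = std::array::from_fn(|i| block[PERM[i]]);
    // 2. Per byte: XOR with a key byte and a feedback byte, pass through
    // the S-box, then XOR with the previous (pre-substitution) byte.
    // The feedback wiring is simplified relative to the description above.
    let mut prev = 0u8;
    for i in 0..8 {
        let out = sbox(permuted[i] ^ round_key[i] ^ prev) ^ prev;
        prev = permuted[i];
        block[i] = out;
    }
}

fn main() {
    let mut block = *b"8 bytes!";
    let key = [0u8; 8]; // placeholder: the real key schedule derives round keys with SXAL itself
    for _ in 0..8 {
        round(&mut block, &key); // SXAL runs eight such rounds
    }
    println!("{:02x?}", block);
}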
References External links ISO/IEC 9979-0012 Register Entry (PDF), registered 23 October 1995. A patent on a communications system using SXAL/MBAL for encryption; includes a description of SXAL/MBAL. Broken block ciphers
880200
https://en.wikipedia.org/wiki/Florida%20Atlantic%20University
Florida Atlantic University
Florida Atlantic University (Florida Atlantic or FAU) is a public research university with its main campus in Boca Raton, Florida and satellite campuses in Dania Beach, Davie, Fort Lauderdale, Jupiter, and Fort Pierce. FAU belongs to the 12-campus State University System of Florida and serves South Florida, which is home to more than six million people and spans more than of coastline. Established as Florida's fifth public university in 1961, FAU has quickly grown to become the sixth-largest in the state by enrollment. Florida Atlantic University is classified among "R2: Doctoral Universities – High research activity". The university offers more than 180 undergraduate and graduate degree programs within its 10 colleges. Florida Atlantic opened in 1964 as the first public university in the Miami metro area, offering only upper-division and graduate level courses. Initial enrollment was only 867 students, increasing in 1984 when the university admitted its first lower-division undergraduate students. As of 2021, enrollment has grown to over 30,000 students representing 180 countries, 50 states, and the District of Columbia. The university has an annual budget of $900 million and an annual economic impact of $6.3 billion. Since 1964, Florida Atlantic University has awarded degrees to over 185,000 alumni. FAU's intercollegiate sports teams, the Florida Atlantic Owls, compete in National Collegiate Athletic Association (NCAA) Division I and Conference USA (C-USA). With 19 varsity athletic teams, the Owls have won numerous titles and championships within the conference and division. On October 21, 2021, Florida Atlantic accepted the invitation to join the American Athletic Conference (AAC) and will become a full member on July 1, 2023. History Establishment On July 15, 1961, to meet the burgeoning educational demands of South Florida, the state legislature passed an act authorizing the establishment of a new university in the city of Boca Raton. Florida Atlantic University was built on Boca Raton Army Airfield, a 1940s-era army airbase. During World War II, the airfield served as the Army Air Corps' sole radar training facility. The base was built on the existing Boca Raton Airport and on 5,860 acres (23.7 km2) of adjacent land. A majority of the land was acquired from Japanese-American farmers from the failing Yamato Colony. The land was seized through eminent domain, leaving many Japanese-Americans little recourse in the early days of World War II. The airbase was used for radar training, anti-submarine patrols along the coast, and as a stop-over point for planes being ferried to Africa and Europe via South America. The airfield was composed of four runways, still visible on the Boca Campus today and mainly used for parking. By early 1947, the military decided to transfer future radar training operations to Keesler Air Force Base in Mississippi. The departure of the air force in 1947 left Boca Raton Army Airfield essentially abandoned. Expansion and growth Florida Atlantic University opened on September 14, 1964, with an initial student body of 867 students in five colleges. The first degree awarded was an honorary doctorate given to President Lyndon B. Johnson on October 25, 1964, at the dedication and opening of the university. At the time of its opening, there were 350 employees, of whom 120 were faculty. On-campus housing for students was first added in September 1965, when Algonquin Hall opened.
Florida Atlantic's history is one of continuing expansion as the university's service population has grown. The university originally served only upper-division and graduate level students, because the state intended the institution "to complement the state's community college system, accepting students who had earned their associate degrees from those institutions." Florida Atlantic began its expansion beyond a one-campus university in 1971, when it opened its Commercial Boulevard campus in Fort Lauderdale. Due to a rapidly expanding population in South Florida, in 1984 Florida Atlantic opened its doors to lower-division undergraduate students. The following year, the university added its third campus in downtown Fort Lauderdale on Las Olas Boulevard. Recent history In 1989, the Florida Legislature recognized demands for higher education in South Florida by designating Florida Atlantic as the lead state university serving Broward County. To fill this role, the university established a campus in the City of Davie in western Broward County in 1990 and another campus in Dania Beach in 1997. Florida Atlantic later purchased 50 acres (20 ha) of land in Port St. Lucie in 1994 to establish a campus on the Treasure Coast. This would be the institution's fifth campus. The university continued its expansion in 1999 when it opened its Jupiter Campus, named for the late John D. MacArthur. This campus houses the university's honors college. Florida Atlantic University and the University of Miami's Leonard M. Miller School of Medicine established a medical training program within the Charles E. Schmidt College of Biomedical Science in 2004. Plans originally called for the construction of a new teaching hospital in coordination with Boca Raton Community Hospital on the main campus. Following successive budget deficits in 2007, the hospital delayed its participation indefinitely. However, Florida Atlantic later established its own College of Medicine in 2010. The Harbor Branch Oceanographic Institution (HBOI) also joined the university in 2007, creating Florida Atlantic's seventh campus. To bring HBOI into the university family, the Florida Legislature allocated $44 million to Florida Atlantic to acquire the institution. Florida Atlantic has changed dramatically since its opening in 1964. There are now more than 30,000 students attending classes on seven campuses spread across 120 miles (193 km). The university consists of ten colleges and employs more than 3,200 faculty and staff. As of 2020, the university's endowment has increased to over $240 million. Since its founding, the university has been led by seven presidents. The university's immediate past president is Mary Jane Saunders. She was named president on March 3, 2010, then resigned on May 15, 2013. Her appointment followed the resignation of Frank Brogan. Brogan, a former Lieutenant Governor of Florida, left the university in late 2009 to become Chancellor of the State University System of Florida. Past university presidents also included Anthony J. Catanese, Helen Popovich, Glenwood Creech, and Kenneth Rast Williams. On January 17, 2014, the Board of Trustees announced the selection of John W. Kelly, formerly a vice president of Clemson University, to be the seventh president of the university with a starting date of March 1, 2014. Academics As of 2021, the university's student body consists of 24,663 undergraduates, 3,380 graduate students, 440 doctoral students, and 254 medical students.
In 2021, the undergraduate student body consisted of 61% ethnic minorities and included students from more than 180 countries, 50 states, and the District of Columbia. For the incoming freshman class of fall 2021, the acceptance rate was 60%. The university has ten colleges which altogether offer over 180 different bachelor's, master's and doctoral degree programs: the Charles E. Schmidt College of Science, Charles E. Schmidt College of Medicine, Christine E. Lynn College of Nursing, College for Design and Social Inquiry, College of Business, College of Education, College of Engineering and Computer Science, Dorothy F. Schmidt College of Arts and Letters, Harriet L. Wilkes Honors College, and the Graduate College. The university offers two honors options: the Harriet L. Wilkes Honors College and a University Scholars Program. The Wilkes Honors College is located on the John D. MacArthur campus in Jupiter, Florida. It offers a liberal arts education within the setting of a public university, yet is comparable to a private liberal arts college. The Boca Raton campus houses the University Scholars Program, which offers special honors seminars, forums, courses, and advanced course substitution for freshmen. The fall 2021 incoming freshmen profile for the middle 50% was a 3.73–4.33 high school GPA, a 23–29 ACT composite score, and a 1100–1270 SAT total score. Additional admission requirements are needed for the Harriet L. Wilkes Honors College, the School of Architecture, the College of Engineering and Computer Science, and the College of Science. The average class size at FAU for undergraduates is 33 students, and for graduate classes, 12 students. The student-to-faculty ratio is 20:1. The top three undergraduate majors by enrollment are elementary education, accounting, and management, respectively. The top three graduate majors by enrollment are business administration, educational leadership, and accounting, respectively. The average age for first-year students is 18; however, the average age for all undergraduates is 24 and the average age for graduate students is 33. The average 4-year graduation rate in 2021 was 47.5%. Florida Atlantic has long ranked as the most racially, ethnically and culturally diverse institution in Florida's State University System. U.S. News & World Report has ranked FAU the 27th most diverse university in the nation. FAU students come from every county in Florida, all 50 states, and more than 180 countries. Enrichment opportunities include internships, hands-on research, study abroad experiences, and 310 clubs and campus organizations. The Lifelong Learning Society operates programs that serve the educational interests of more than 19,000 senior citizens by providing classes focusing on subjects of specific interest, and audit options for regular university classes. Under the university's Commercial Music Program, Hoot/Wisdom Recordings was created in 2002, enabling students to work in all creative and business aspects of the music industry. This program generated music that landed a Top 10 spot on Billboard's Hot R&B/Hip-Hop Singles Sales Chart during its first week of release. The university's two-story trading room simulator, located in the College of Business, provides hands-on financial education using 25 dual-monitor computers and can accommodate 50 people at one time. A second lab provides full audio/visual connectivity and 25 additional workstations. Florida Atlantic allows local financial businesses to use the Trading Room for training.
Rankings For 2021, U.S. News & World Report ranked Florida Atlantic University as the 140th best public university in the United States, and 277th overall among all national universities, public and private. The university was named one of the 143 "Best Southeastern Colleges" in the United States by the Princeton Review for 2022. In 2021, Florida Atlantic was ranked 105th in the nation by Washington Monthly in their 2021 National University Rankings. The university was also ranked 59th in the United States and ninth in Florida by The Hispanic Outlook in Higher Education magazine in their 2021 rankings of Top 100 Colleges And Universities For Hispanics. Research FAU is classified by the Carnegie Foundation for the Advancement of Teaching as a research university with high research activity. The university has established notable partnerships with major research institutions such as the Scripps Research Institute, the Torrey Pines Institute for Molecular Studies, and the Max Planck Society. The university is the home of two centers of excellence: the Center of Excellence in Biomedical and Marine Biotechnology and the Center for Ocean Energy Technology. These centers have been selected by Florida's Emerging Technology Commission to receive grants to continue and increase their operations. FAU beat out some of Florida's top research universities, including the University of Florida and Florida State University, for the initial funding from the state. Since receiving its startup funding, Florida Atlantic has secured additional funds from other sources, including federal and private research grants. As a result, both centers have engaged in academic and industry partnerships, combining expertise in ocean engineering, marine biotechnology, functional genomics, proteomics, and bioinformatics. Researchers, scientists, and students at the centers are designing technologies to explore the sea, harvest renewable energy, discover new medicines, and develop new therapeutics to combat agents of bioterrorism. As a result of this research, in 2007 the university and Lockheed Martin announced an exclusive licensing agreement to develop and produce a rapidly deployable and autonomous mooring buoy system for military and scientific uses. In 2010, the United States Department of Energy designated FAU as one of three national centers for ocean energy research and development. The Southeast National Marine Renewable Energy Center joins centers in the Pacific Northwest (University of Washington and Oregon State University) and in Hawaii (University of Hawaii). The Southeast National Marine Renewable Energy Center is undertaking research and development of technologies capable of generating renewable power from ocean currents and ocean thermal energy. The university houses both an Imaging Technology Center and a NASA Imaging Technology Space Center. Located in the College of Engineering and Computer Science, the centers specialize in digital imaging research and development for use in both government and commercial applications in the areas of medical technology, surveillance, communications, education, inspection, scientific observation, manufacturing, visual recognition and identification, and motion picture and digital video. The Florida Atlantic Imaging Technology Center is developing a curriculum for digital imaging and processing, thereby establishing Florida Atlantic as the only university in the nation to offer this technical concentration. 
The NASA Imaging Technology Center is one of 12 NASA Research Partnership Centers throughout the nation that conduct dual-use research and development with the participation of NASA and related industries in the US. The center occupies two sets of laboratories and administrative offices, one on Florida Atlantic's main campus in Boca Raton and the other at the Fort Lauderdale campus. FAU is affiliated with the Research Park at Florida Atlantic University, with properties in Deerfield Beach and Boca Raton. The Research Park provides research facilities for outside companies, enabling them to interact with the university community and its facilities, resources, and expertise. The Research Park operates the Technology Business Incubator, which works to foster the start-up and growth of technology-based businesses, seeking to scale them and build relationships for them with the university. The Boca Raton campus is also home to the Center for Complex Systems and Brain Sciences.

Campus

Florida Atlantic University is spread across six campuses in Palm Beach, Broward, and St. Lucie counties, a region that is home to more than three million people. The university's main campus is located in the City of Boca Raton in Palm Beach County. The county is also home to the John D. MacArthur Campus, located in the City of Jupiter. In addition to its campuses in Palm Beach County, the university operates three campuses in the Broward County cities of Dania Beach, Davie, and Fort Lauderdale. Florida Atlantic University also operates a campus in the St. Lucie County city of Fort Pierce. In addition to students who attend classes on the university's campuses, there are 1,612 distance learning students who conduct their studies over the internet or through other means. These students account for 6% of the university's student body. FAU is a signatory of the American College & University Presidents' Climate Commitment, an initiative supported by the Association for the Advancement of Sustainability in Higher Education. This commits the institution to ensuring all new construction projects meet the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED) Silver standards. In 2011, the College of Engineering and Computer Science Building was LEED Platinum certified.

Palm Beach County campuses

Boca Raton

FAU's main campus in Boca Raton was established on the remnants of a World War II American Army airbase in 1964. Spanning 850 acres (3.5 km2), the site is located between the cities of Palm Beach and Fort Lauderdale. The campus was designated a burrowing owl sanctuary in 1971 by the Audubon Society. The owls find the campus appealing because there are few predators, due to the university's proximity to the Boca Raton Airport, and because the campus was originally cleared of vegetation when it operated as an airbase during World War II. "The feisty bird, traditionally associated with wisdom and determination, serves as the university's mascot." The Boca Raton campus is home to a wide variety of university programs and facilities, including labs and classrooms, student housing, a 6,000-gallon shark tank for aquatic research, a movie theater, athletic and recreational facilities, and the student-run record label Hoot/Wisdom Recordings. In addition to academic and cultural programs, the campus also houses Florida Atlantic's Division I athletics program.
The main campus serves approximately 19,077 students, or 70% of the university's student body, offering a number of academic programs, activities, and services. The Boca Raton campus also houses a number of other institutions, including the A. D. Henderson University School, FAU High School, one of Florida Atlantic University's Research Parks, and the Lifelong Learning Society.

Jupiter–John D. MacArthur Campus

In addition to the Boca Raton campus in southern Palm Beach County, FAU operates a campus in northern Palm Beach County, in Jupiter. The John D. MacArthur Campus, named after businessman and philanthropist John D. MacArthur, was established in 1999 to serve residents of central and northern Palm Beach and southern Martin counties. The MacArthur Campus occupies 45 acres (0.18 km2), upon which are eight classroom and office buildings, a library, a 500-seat auditorium, two residence halls, a dining hall, a museum building, and a utility plant. The MacArthur Campus also houses the Harriet L. Wilkes Honors College, Scripps Florida, the FAU Brain Institute, and the Max Planck Florida Institute for Neuroscience. The campus serves approximately 1,262 students, or 4% of the university's student body.

Broward County campuses

Dania Beach–SeaTech

The Dania Beach Campus, also known as SeaTech, was founded in 1997 as a state-funded Type II research center. The institute is part of FAU's Department of Ocean Engineering, which was founded in 1965 as the first ocean engineering undergraduate program in the nation. The campus is located on 8 acres (0.03 km2) of land between the Atlantic Ocean and the Intracoastal Waterway. SeaTech is home to university faculty and students engaged in sponsored ocean engineering research and development in the areas of acoustics, marine vehicles, hydrodynamics and physical oceanography, and marine materials and nanocomposites. The Dania Beach Campus serves approximately 70 students, roughly 1% of the university's total student body.

Davie

The Davie Campus of Florida Atlantic University was established in 1990 on 38 acres (0.15 km2) of land in western Broward County. The campus serves approximately 3,488 students, or 13% of the FAU student body, making it the university's second largest campus by enrollment. The campus features a multi-story student union with offices for student government and student organizations, a multipurpose area and student lounge, a bookstore, and a cafeteria. The union also contains a student health center that provides medical services and health counseling. Davie is also the home of "environmental research initiatives focused on Everglades restoration." FAU colleges offering courses at the Davie campus include Design and Social Inquiry; Arts and Letters; Business; Education; Nursing; and Science. The campus is located on Broward College's Central Campus, and students may enter BC as freshmen and graduate from FAU with undergraduate degrees in over 14 disciplines. More than 315,000 square feet of classrooms, laboratories, and faculty, staff, and student offices are located on this campus, along with a shared-use, 112,000-square-foot FAU/BC library. Other support facilities include a shared Childcare Center, a student Wellness Center, and a multi-service Student Union. The campus also offers a varied program of student activities provided by the Division of Student Affairs.
Students have access to all of the services they require, including career counseling, wellness programs, testing and evaluation, tutoring, health services, student government, and financial aid. Like a small college within a large university, the Davie Campus is seen as a "model" branch campus for the state of Florida and the nation.

Fort Lauderdale

The university has two buildings in downtown Fort Lauderdale, both of which are considered part of a single Fort Lauderdale campus: the Askew Tower (AT) and the Higher Education Complex (HEC) on Las Olas Boulevard. The campus offers courses in communication, graphic design, architecture, and urban and regional planning. The campus is home to approximately 900 students, or 3.2% of the university's student body.

St. Lucie County campuses

Fort Pierce–Harbor Branch Oceanographic Institution

In addition to the Treasure Coast Campus, FAU operates a campus in Fort Pierce at the Harbor Branch Oceanographic Institution. Harbor Branch merged with the university in 2007 to become HBOI at FAU. The Florida Legislature allocated $44 million for the university to acquire the institution and its 600-acre (2.4 km2) campus.

Former campuses

Port St. Lucie–Treasure Coast Campus

The Treasure Coast Campus of Florida Atlantic University operated through a partnership with Indian River State College (IRSC). Florida Atlantic purchased 50 acres (0.2 km2) of land in Port St. Lucie in 1994. At the end of the spring 2012 term, Florida Atlantic University stopped offering classes at the Port St. Lucie campus.

Athletics

Florida Atlantic's 19 varsity sports teams, the Owls, compete in the NCAA's Division I. The Owls joined Conference USA for the 2013–14 season. The university's athletics program began in 1979, when Florida Atlantic first started sponsoring intercollegiate teams. Since then, the university has worked to expand the quality of its intercollegiate program by attracting coaches such as Howard Schnellenberger, Matt Doherty, Rex Walters, Lane Kiffin, Mike Jarvis, Dusty May, and Willie Taggart. The university's colors are FAU Blue, FAU Red, and FAU Silver. On October 21, 2021, Florida Atlantic accepted the invitation to join the American Athletic Conference (AAC) and will become a full member on July 1, 2023.

Traditions

FAU is home to a number of sports-related traditions and school spirit organizations. Every fall before the first football game of the season, FAU's Student Government Association sponsors the annual football "Bonfire," where the opposing team's mascot is burned in effigy. This event typically includes a concert and a speech by the university's head football coach. Also in football, Florida Atlantic challenges its rival Florida International (FIU) in the annual Shula Bowl. The game is named after legendary coach Don Shula because, at the time of its inception, both head coaches, Florida Atlantic's Howard Schnellenberger and Florida International's Don Strock, had worked under Shula at some point during their careers. Even though both universities have since moved on to new head coaches, the Shula Bowl is still played. As a home game, the competition takes place at the university's own stadium; as an away game, the bowl is played at FIU Stadium in Miami.
For basketball, Florida Atlantic celebrates the "Red Hot Madness and Stroll Off" pep rally, which introduces fans to the team and coaches, hosts a number of basketball-related contests such as 3-point shoot-outs and slam dunk competitions, and features step performances by the school's National Pan-Hellenic Council fraternities and sororities. During the regular season, the "Bury the Burrow in Red" event calls for Florida Atlantic students to wear as much red as possible and fill the Burrow, the university's multi-purpose arena, during the annual basketball rivalry game between Florida Atlantic and Florida International University. The official spirit group supporting Florida Atlantic athletics is the "prOWLers". The group began in February 2002 to support the men's basketball program during the team's run for the Atlantic Sun Conference Championship. The group is funded by the Student Alumni Association and can now be found at most sporting events cheering for Florida Atlantic. The prOWLers are joined by the Owl Rangers, a fan group whose members paint their bodies in the Florida Atlantic school colors. The hOWLetts are a student club that attends gameday events and assists in recruiting athletes. Since 2002, Florida Atlantic students have been using Owl Fingers (the "OK" hand sign) to show school pride and wish the athletic teams luck during football point after attempts (PATs) and basketball free throws.

Student life

Residential life

Residential housing at FAU is available on the Boca Raton and John D. MacArthur campuses. "All full-time freshmen are required to reside in university housing"; however, "exemptions from this policy are made for students who: are 21 or older by the first day of class, reside with parent(s) or legal guardian(s) within a radius of the Boca Raton campus, or are married." As of 2021, over 4,000 students live on campus in Boca Raton. The Wilkes Honors College on the MacArthur Campus requires that all students live on campus in its two residence halls; however, exceptions are made for students who are 26 years of age, married, or have dependent children. Boca Raton's on-campus housing facilities are: Indian River Towers (opened 2001), Heritage Park Towers (opened 2004), Glades Park Towers (opened 2007), Parliament Hall (opened 2013), University Village Apartments (UVA), and Innovation Village Apartments (IVA) (opened 2011). Heritage Park and Glades Park Towers each offer 602 beds with 96 single rooms. UVA and IVA exclusively serve upperclassmen, while the other residence halls exclusively serve freshmen. The university also offers upper-division undergraduate and graduate student housing in the Business and Professional Women's Scholarship House for women with a strong academic background. One of the newest residences on the Boca Raton campus is the Innovation Village Apartments (IVA), consisting of two buildings: IVA North and IVA South. It is a 1,200-bed apartment-style housing facility for upperclassmen, graduate, and medical students, offering amenities that one would find in a high-rise apartment complex: lounges, retail dining, fitness centers, a pool and cabana, a volleyball court, common areas, and more. The facility opened in fall 2011. FAU's newest residence hall is Parliament Hall, a lakeside freshman housing facility offering 614 beds, a fitness center, lounges, retail dining, and views of the nearby Atlantic Ocean from its top floors.
Within its existing residential life programs, FAU offers a number of Learning Communities for freshmen and students with similar interests and concentrations. Participants meet people with similar interests, live on the same floor, and take courses with others in their community, while receiving additional guidance related to those interests. The university's Learning Community programs are divided into two categories, Freshman Learning Communities and Living Learning Communities. The freshman program offers 16 different concentrations, including business, nursing, and education. The Living program offers six concentrations for students residing in the Glades Park Towers dormitory, including engineering, computer science, and a Women's Leadership program. The university's Department of Housing and Residential Life and the university's fraternities and sororities sponsor a program for freshmen and other students returning to Florida Atlantic in the fall semester. This program, called the "Weeks of Welcome", spans 11 days and all campuses, and works to acclimate students to university life and to build a good on-campus community. On each day, a number of different events are scheduled, including Hall Wars (athletic competitions between dormitories), luaus, and a number of other events. The Weeks of Welcome is the second largest campus-wide event held by Florida Atlantic.

Campus organizations and activities

FAU has approximately 300 registered student organizations. Among the groups are academic organizations, honor societies, spiritual/religious organizations, diversity-appreciation organizations, service organizations, personal interest organizations, sports clubs, and student government agencies. These clubs and organizations run the gamut from sailing to Ultimate Frisbee, from varsity and club sports and a jazz group to a pottery guild, from political organizations to chess and video game clubs. These organizations are funded by student tuition, from which $12.32 per credit hour goes toward an activities and service fee fund. This generates approximately $10 million per year, which is then given to student government for allocation to student clubs and organizations. The student government also finances other student life programs, including career fairs, the University Press, OWL TV and Owl Radio, and Homecoming. Florida Atlantic's homecoming, also known as the "Owl Prowl," is celebrated annually in the fall semester. Events occur mainly on the Boca Raton Campus, but a number of other campuses host their own events as well. In the past, homecoming has featured kickoff parties, costumed dances, bonfires, comedy shows, alumni events and dinners, a golf cart parade, and tailgating. Florida Atlantic students have an organized football tailgating area known as the Rat's Mouth, a rough English translation of the Spanish name Boca Raton. FAU completed an $18.6 million Recreation and Wellness Center in spring 2010. The facility houses an outdoor leisure and lap pool, a cardio-equipment and free-weight room, two multipurpose rooms, three indoor courts, and health club-style locker rooms. In 2011, the facility won the NIRSA Outstanding Sports Facilities Award. Other recreation facilities include a $4.2 million track and field complex with synthetic turf (opened January 2007), a ropes challenge course, and the 6.5-acre Henderson Fields, utilized most often by the FAU Intramural Sports and Club Sports programs.
Greek life

FAU is home to approximately 28 chapters of national fraternities and sororities, encompassing approximately 1,077 members, or 5% of the undergraduate population. The high point of Greek life at Florida Atlantic is "Greek Week." This event is held annually during the spring semester and showcases a number of themed competitions between the university's Greek organizations. There are currently no on-campus Greek houses; however, a Greek Life Housing task force has been formed to explore various housing models, including the cost of construction, "and make recommendations on how to improve the overall quality of the Greek housing...."

Alumni

References

External links
Florida Atlantic Athletics website
University Press – student newspaper

1961 establishments in Florida
Buildings and structures in Boca Raton, Florida
Educational institutions established in 1961
Public universities and colleges in Florida
Universities and colleges accredited by the Southern Association of Colleges and Schools
Universities and colleges in Broward County, Florida
Universities and colleges in Palm Beach County, Florida
966733
https://en.wikipedia.org/wiki/Superhuman%20Samurai%20Syber-Squad
Superhuman Samurai Syber-Squad
Superhuman Samurai Syber-Squad (or SSSS for short) is an American television series. It was produced by Tsuburaya Productions, Ultracom Inc. and DIC Productions, L.P., with distribution by All American Television, and ran for 53 episodes from September 12, 1994, to April 11, 1995, in syndication as well as on ABC. It was an adaptation of the Japanese tokusatsu series Gridman the Hyper Agent.

Plot

High school student Sam Collins, the head of a band known as Team Samurai, is zapped by a power surge during a recording session and disappears, only to return seconds later with a strange device attached to his wrist that, at first, cannot be removed. Later, after his friends Amp, Sydney, and Tanker leave, one of his video game programs, dubbed Servo, is hit by another power surge and zaps Sam again just after he has remarked "Cool battle armor!"; this time he is pulled into the digital world and transformed into his creation. As Servo, he roams the digital world and fights monsters dubbed Mega-Viruses, which are capable of attacking any device on the electrical grid (including the grid itself), the Internet, or the telephone network, usually with real-life consequences far beyond what any standard computer virus could achieve. Meanwhile, Malcolm Frink, another student from Sam's school, is designing monsters on his home computer when Kilokahn, an escaped military artificial-intelligence program that was presumed destroyed in the power surge, visits him via his computer screen and strikes a Faustian deal with him, transforming his digital monster into a Mega-Virus. Sam, now as Servo, must enter the digital world and stop Malcolm's and Kilokahn's Mega-Viruses. Sometimes, when Servo is unable to handle a virus by himself, he enlists the help of his friends through his Arsenal Programs, which can fight the viruses on their own, combine with one another, and attach to Servo as armor. Since Team Samurai consists of only three people at any one time while Sam is transformed, only three vehicles are available for use at once. When Servo combines with these Programs as armor, he becomes either Phormo, when combined with Drago, or Synchro, when combined with Zenon.

Characters

Team Samurai
Sam Collins / Servo (portrayed by Matthew Lawrence) - The frontman and guitarist of his band, Team Samurai. He is always willing to help anyone in need or be their friend and often vies for the attention of cheerleader Jennifer Doyle, having been in and out of relationships with her, sometimes even competing with Malcolm Frink for her affections. He even goes so far as to try to be friends with Malcolm, although Malcolm never returns the favor. He is clever and easygoing. He also loves his (unseen) little sister, Elizabeth, though he often feels pestered by her shenanigans.
Tanker (portrayed by Kevin Castro) - Sam's best friend, the band's drummer, and a somewhat stereotypical athlete. He is particularly adept at school sports, especially football. He has a crush on fellow Team Samurai member Sydney, especially admiring her for her intelligence. He always seems to have a large appetite, and as evidenced in the episode "A Break in the Food Chain", he goes crazy if he goes without eating for long. He also holds a particularly strong dislike for Malcolm Frink. In Syberspace, Tanker's uniform was a black biker suit with a black helmet and a see-through visor.
Sydney "Syd" Forrester (portrayed by Robin Mary Florence) - The band's keyboard player and the brains of the group. She is also a good singer, as shown in "His Master's Voice". She is one of North Valley High's brightest students and often displays a caring personality. She is the object of Tanker's affection, and the two enjoy being together. Sydney's Syberspace uniform was a pink biker suit with a gold helmet and a see-through visor.
Amp Ere (portrayed by Troy Slaten) - The team's so-called "space cadet" and the band's bass player. He becomes the band's bass player after revealing to them that his brother (who was originally intended to join them instead of Amp) was going back to college. His intelligence is curious, as he is either oblivious to his surroundings or displays unusual intellect. He has an unorthodox way of performing tasks, such as writing in a notebook using his toes or studying by eating book pages with milk and sugar. To enter Syberspace, he always uses a different phrase for humorous effect. Amp's uniform consisted of a helicopter helmet and leather jacket. It is later revealed that Amp is really an alien, and he returns to his own planet with his parents off-screen.
Lucky London (portrayed by Rembrandt Sabelis) - A surfer and Amp's replacement in Team Samurai. His attitude was often laid-back, sometimes to the dismay of Principal Pratchert. In Syberspace, Lucky's uniform is a red and white jet ski helmet with a black visor and a life jacket.

Supporting characters
Jennifer "Jen" Doyle (portrayed by Jayme Betcher) - Sam's on-again/off-again girlfriend and a cheerleader at North Valley High School. Malcolm tries to compete for her affections. In the alternate-universe episode, Jen is a genius who is friends with Malcolm and helps Sam get back to his own universe.
Principal Pratchert (portrayed by John Wesley) - The school principal, who is usually strict, particularly when dealing with antics caused by Sam and/or Malcolm. When his daughter loses the student council presidency to Malcolm in "The President's a Frink", he initially refuses to recount the votes after Malcolm suggests that doing so would make the Education Board suspect he was playing favorites, agreeing only on the condition that Malcolm could use his parking space should he remain the victor. Pratchert reconsiders when Mrs. Starkey wisely suggests he recount the votes, not only for the sake of his job but also for his daughter Yolanda. He does, and it is revealed that Malcolm rigged the votes using Skorn so he could win; Pratchert punishes him with detention and by making him move his car out of the parking space. In "Pratchert's Radical Departure", it is revealed that Pratchert used to be a hippie when he was younger; Malcolm uses this to his advantage by creating a Mega-Virus monster that makes him think he is a hippie again, much to the delight of Lucky London (at least until the Mega-Virus's destruction by Servo).
Mrs. Rimba "Cha-Cha" Starkey (portrayed by Diana Bellamy) - The cafeteria lunch lady, who often cracks jokes about the poor quality of the food she serves. She enjoys riding her motorcycle and has been married multiple times. She also seems to have an affinity for Dennis Quaid, as mentioned in a few episodes. She is the only faculty member at North Valley High School who doesn't like Malcolm.
When Pratchert is depressed over Yolanda refusing to talk to him, Starkey wisely suggests he do the right thing and recount the votes, admitting her suspicion that the voting was rigged. After the recount, although Mrs. Starkey pretends to forget suggesting it, she is touched that Pratchert does value her opinion.
Yolanda "Yoli" Pratchert (portrayed by Kelli Kirkland) - The principal's daughter and Jennifer's closest friend. She is North Valley High School's student council president, a position she temporarily loses in "The President's a Frink" when Malcolm cheats his way into office with the help of a virus, but regains when Principal Pratchert recounts the votes after Servo defeats the virus sent to change the results. In "What Rad Universe!", an alternate-dimension version of Yolanda finds companionship in Kilokahn, making her that dimension's version of Malcolm, while the Malcolm of that dimension is a good person, much like the real Yoli and Sam.
Elizabeth "Liz" Collins (voiced by Kath Soucie) - Sam's "unseen" younger sister, who communicates with her brother off-screen through a laundry chute connected from the upstairs. She always plays pranks on her older brother, usually dropping things on top of him through the chute. Liz has shown she does care for Sam, once dropping a ton of cookies to share with him.

Villains
Kilokahn (voiced by Tim Curry) - Short for "Kilometric Knowledge-base Animate Human Nullity," Kilokahn is a military artificial intelligence program who unleashes computer viruses to attack major computer systems. He derisively refers to humans as "meat-things." He considers himself the ruler of the digital world and also wishes to take over the real world (Earth), starting with its computer network.
Malcolm Frink (portrayed by Glen Beaudin) - A loner who dresses in black garb and also attends North Valley High School. He only finds companionship in Kilokahn. Using a special program, he designs the Mega-Viruses, which are brought to life by Kilokahn and sent into a specific electronic object. Most of the Mega-Viruses are either designed on his computer or have their drawings image-scanned into his computer so that Kilokahn can bring them to life. Sam sees that Malcolm is alone and tries to strike up a friendship with him, but Malcolm rejects his offers, stating that he likes being alone. The only exceptions to that rule are in "His Master's Voice", when Malcolm is touched by Sydney's apology, and in "Kilo is Coming to Town", when he finally realizes that his selfish nature nearly cost Sam his life. Malcolm has a very strong dislike of Tanker and is intimidated by Mrs. Starkey. He derives enjoyment out of hurting others with the computer viruses he creates and Kilokahn brings to life. In spite of Kilokahn's regular betrayals, and lacking any other "friends", Malcolm always comes back to him. He even purposely restores Kilokahn to his sociopathic self after he is temporarily rendered harmless. In the alternate-universe version, Malcolm is a generous and caring person who likes to help people, unlike the alternate Yoli, who takes pleasure in harming others.

Mega-Virus Monsters
The Mega-Virus Monsters are kaiju-style computer viruses designed by Malcolm and given sentience by Kilokahn, who lacks the ability to create them himself. Only a few Mega-Viruses have the power of speech.
Kathod - The very first virus that Servo ever fought, after Malcolm first met Kilokahn.
Kathod was a long-tailed turtle-style creature who mostly crawled on four legs (though once during the battle, he was up on two) and had two volcano holes built into his shell, from which he could erupt powerful fireballs. He could also breathe fireballs at will. Kathod was brought to life by Kilokahn in "To Protect and Servo" to tamper with phone communications at Malcolm's request so that Sam could not call Jennifer. Around the same time, a freak accident with Sam's computer turned him into Servo for the first time, and he was transported to where Kathod was causing havoc. After a fierce battle, Kathod was destroyed by Servo's Grid Power Punch. In "Just Brown and Servo," Kathod went into every car engine to tamper with it until he was destroyed by Servo.
Blink - An armored cycloptic virus with skilled fighting techniques. He was sent into the police files in "Samurize" in order to create false arrests, including one on Jennifer during her date with Sam, as the police were looking for Sam. He wielded dual metal combat sticks with pointy ends. When he proved to be too much for Servo, Tanker boarded Drago and went into the system to aid Servo; by combining Servo with Drago, they formed Servo's upgrade known as Phormo. During this part of the battle, Blink took his two sticks and merged them into one long, double-edged combat stick. It did him no good in the end, as he was taken out by Phormo, who used a special Grid Power Fire Stream that erupted from the gem on Phormo's chest.
Trembulor - A large black spike-covered virus who, as his name suggests, could shake wildly, causing tremors in Syberspace. He was sent into the government's defense satellites in "Samurize, Guys!" to create an electrical barrier around Sam's house, trapping Sam and the rest of Team Samurai inside when they were to appear for a band performance at North Valley High School that night. As Servo, Sam went in to battle the virus. This monster demonstrated the ability to suck up Servo's main power right through the gem on Servo's chest. Servo lost 80% of his power after this move, so Sydney tried to program some help for him. She, Tanker, and Amp were thrust into their vehicles (Borr, Tracto, and Vitor, respectively) for the very first time. They used their vehicles to aid Servo in battle, and soon merged with him to create the Synchro program. Synchro's formation turned the battle back in Team Samurai's favor, and he proceeded to destroy the virus by launching his Shoulder Drill Missiles directly at him. Servo faced Trembulor twice later in the series, and both times he used the Synchro program to win the battles. In "Foreign Languages," Trembulor is upgraded with armor and sent to make sure that nobody will understand Servo when he calls for help.
Thorned Virus - A pinkish-colored plant/dinosauric monster with leaf veins patterned throughout its body. It shot white powder from its mouth. Servo and Tanker in Drago battled it in "Out of Sight, Out of Time." It was formally introduced in "My Virus Ate My Homework," where Kilokahn wanted to send it into the nuclear missiles. When Malcolm objected to this plot, Kilokahn brought it to life before Malcolm could delete it. Servo entered the internet through Sydney's laptop, and with Tanker controlling Drago to help him, Servo was able to destroy the Thorned Virus, and the missile launch was averted.
Krono - A diamond-backed dinosauric Mega-Virus monster. Krono can shoot a blue beam from its mouth.
In its first appearance in "Out of Sight, Out of Time," Malcolm created Krono and had Kilokahn send it into an atomic clock in England to render time in the world meaningless. Upon learning this, Sam became Servo to fight Krono. Sydney joined the battle in Vitor and was hit by Krono's attack. Tanker and Amp joined the battle and helped to form Synchro, and Krono was destroyed by Synchro's Grid Power Punch. In "To Sleep, Perchance to Scream," he was sent into the tracking satellite that the Navy sent up. Servo (as Synchro) destroyed him again, but it turned out that this was only the first in a series of nightmares Sam would have throughout the episode. In "Syber-Dunk," Krono creates a barrier that prevents Team Samurai and basketball star Charles "High Jump" Johnson from leaving the school when the latter has a basketball game to play that night. With the unlikely help of High Jump (who was forced to board Borr), Synchro destroyed Krono again with his Grid Power.
Plexton - A dinosauric fire virus who appeared several different times. He first appeared in "Some Like it Scalding," where Kilokahn sent him into the school's thermostat to make the temperature scorching hot. Servo fought him in a heated battle until Sydney created the Samurai Sword and a shield for Servo to use. After Servo severed Plexton's head and tail, Malcolm prepared a backup disk so Kilokahn could withdraw Plexton from battle. He later appeared in "The Cold Shoulder" alongside his sister Gramm and battled Servo. Servo tricked the two monsters into attacking each other, and Plexton seemed to have the upper hand in their sibling rivalry; while the two monsters were distracted, they were taken down by Servo's sword, which was powered up by his Grid Power. In "Rock 'n' Roll Virucide," Servo fought Plexton and Gramm again and defeated them with the same tactic. Plexton then appeared with an armor upgrade and went inside Sam's blow dryer in the episode "Hair I Stand, Hand to Hand." With the aid of Sydney and Tanker (who were in the vehicles Jamb and Torb), Servo was able to knock him out this time with his main Grid Power Punch. Servo met him once more in "Hide and Servo," when Kilokahn chose him from among Malcolm's drawings to combat Servo, who was being uncontrollably thrust throughout cyberspace. After the battle, Plexton once again retreated, seemingly losing a portion of his tail in the process.
Chronic - A virus made from chromium alloy with a bladed left arm and a pincerlike right arm. On its chest was a red jewel that unleashed a barrier when attacked by beam weapons. First appearing in "Money For Nothin' and Bits for Free," Chronic tapped into the city's bank accounts, making Malcolm rich. Servo then engaged Chronic; after Tanker went in, helped free Servo from Chronic's pin, and destroyed its red jewel, Servo destroyed it. In "Romeo and Joule-Watt," Malcolm revamped Chronic and sent him into the school's stage lights during the school's play of Romeo and Juliet. In "It's Magic," Chronic returned to disrupt Sam and Lucky London's magic show.
Skorn (voiced by Neil Ross) - A ninja-style virus, deemed perhaps the smartest of the viruses, second-in-command to Kilokahn, and one of the few who could actually speak. His combat skills were similar in technique to both a samurai and a ninja and were up to par with Servo's, and he wielded different weapons in battle, such as a sword and a pair of nunchaku (which he could also turn into a bo).
Skorn also had the ability to create multiple clones of himself. Malcolm created him in "His Master's Voice" to send into the keyboard synthesizer to swap Sydney's voice with Tanker's. Sam transformed into Servo and fought Skorn, and Tanker piloted Drago to help; they formed Phormo and used its Lightning Grid Power to destroy Skorn. In "An Unhelping Hand," Malcolm revived Skorn and had Kilokahn send him into Sydney's wristwatch to control her hand, which Skorn used to hinder the attempts of Servo's friends to help him. Eventually, with Tanker holding back Sydney's hand and Amp typing in the computer for Sydney, they managed to send Servo his shield and sword programs to help him when Skorn cloned himself. Skorn was destroyed when Servo launched his sword and shield directly through him. In "The President's a Frink," Malcolm uses him to fix the school presidential election; here, he is destroyed again by Servo's shield and sword combination. In "Loose Lips Sink Microchips," Malcolm revives Skorn and sends him into Sam's school radio show to reveal embarrassing secrets about people; this time, he is destroyed by Phormo.
Kord - A giant reptilian monster with a camera-style eye, blinding floodlights built into his back, two tails, and a covering of magnetic plates. Kord had the ability to shoot fireballs of up to 10,000 degrees Celsius (as measured by Sydney) and could also make his shell super-heated, making physical contact with him dangerous. He could also restrain his opponents, leaving them vulnerable to his fireball attacks. He was responsible for trapping Sam inside a video camera in "Lights, Camera, Action." Servo had the initiative early on but got burned when touching Kord's high-temperature camera eye. Kord proceeded to have Servo shackled, leaving him exposed to Kord's fireballs. Eventually, with the help of Zenon, who smashed the virus's camera eye, Servo was able to defeat Kord with his Grid Power slicer, which he launched at the virus, slicing him up before he turned orange and disintegrated. In "Little Ditch, Big Glitch," Kord was given some upgrades by Malcolm: this time, he had new black armor, no camera-style eye, and the ability to suck out Servo's main power. Servo was able to defeat him again with the Grid Power Punch after being aided in battle by Zenon. In "Give 'Til It Megahertz," Malcolm resurrected the Kord virus in its upgraded form and used him to turn the people of Earth, including Team Samurai, into overly generous givers. In "Lucky's Unlucky Adventure," Kord is sent into Mrs. Starkey's cash register to make the money come up short enough to frame Lucky for the missing funds. With help from Zenon, Servo destroys Kord with his Grid Power slicer.
Gramm - A dinosauric ice virus and the sister of Plexton. She was sent into Sam's air conditioner in "The Cold Shoulder," causing Sydney, Tanker, and Amp to turn cold and against one another. She was destroyed when Servo thrust his sword through the gem on her chest, just moments after Servo wiped out Plexton in the fight. She later teamed up with her brother against Servo once again, but the two were again tricked into attacking each other, and Servo took the opportunity to destroy them with his sword. In "Rock 'n' Roll Virucide," Servo fought Plexton and Gramm again and defeated them with the same tactic.
Sucker Virus - A dinosauric virus with a large mouth (with coils built into it).
In "Amp Loves You, Yeah, Yeah, Yeah!," the Sucker Virus was sent to suck in all the power from every nuclear power plant. In battle, he tried to suck Servo inside of him until Sydney rushed in with Borr and bailed him out. Sydney took some damage, but Tanker and Amp turned the tide by coming in on Tracto and Vitor, respectively. They all merged with Servo to create Synchro, who then proceeded to wipe out the virus with a Grid Power Punch.
Skeleton Virus - A ceratopsian-headed dinosauric Mega-Virus monster. It had armor that made it resemble a skeleton, and it had a loud howl. Although this virus first showed up in "Lights, Camera, Action," he had no formal introduction there, as the team fought him at the beginning of the episode in footage that was mirror-imaged. Formally appearing in "Amp Loves You, Yeah, Yeah, Yeah!," he was sent by Kilokahn to damage all the communication networks; the damage shorted machinery circuits everywhere, jammed TV signals, and created an earthquake at the high school. Sam went into the digital realm as Servo to take this menace on. Servo had his way early on but was sent reeling by fireballs that the virus shot from his mouth. Sydney sent Servo his Battle Shield to protect him from the fiery projectiles, but the virus had another trick up his sleeve: arrows built right into his armor, which he proceeded to shoot at Servo. Then, following another fireball attack, Servo's damage levels overloaded to the point that he was ejected from Syberspace, and the next second Sam was back in his basement. This time, Sam and Sydney, as well as Tanker and Amp, all went in to confront the virus again, and using Tracto, Borr, and Vitor, they quickly merged with Servo to make the Synchro program. Synchro gave the virus a good thrashing, and neither the virus's fireballs nor his arrows could faze Synchro. Synchro then used his Shoulder Drill Missile attack on the virus and afterwards (in a move similar to the finisher that Phormo executed on Blink) finished off the virus with the Grid Power Fire Stream from the gem on his chest. Servo later encountered this virus in the episode "Forget You!", where it was the cause of widespread amnesia in the real world, and again came out the victor by using his Synchro powers.
Unnamed Virus #1 - This draconic virus was sent in "Que Sera Servo" into a fortune-telling game that would turn everyone into the opposite of what their future held. While fighting a cowardly Servo (Sam having become a coward because of the spell), he used a spell that turned Servo evil. When Zenon came in to attack the virus, it had the upper hand until Kilokahn intervened, using his powers to throw Team Samurai back out of cyberspace. Amp (who had become a genius because of the effects of the virus) took one of Sydney's chains and smashed the game; the resulting shock in Syberspace rocked Servo and turned him good once again, turning the tide of the battle back in his favor. He then used a powerful Grid Power kick that went right through the virus, ultimately destroying it.
Hock - One of the deadliest viruses drawn by Malcolm and brought to life by Kilokahn. This green aardvark/dinosauric virus had tough armor and was armed with a pair of sword gauntlets on his arms. First appearing in "A Break in the Food Chain," he was sent by Malcolm and Kilokahn into the food factories to stop shipments of food to the parts of the world that needed it.
In battle, he was actually able to block Servo's Grid Power punch, but as the battle turned back in Servo's favor (after Servo was able to strike him with his sword and Grid Power combination), the virus retreated. Servo took him on again in "Sweet and Sour Kilokahn," when Sydney used a program that made Kilokahn good, causing him to refuse to bring Hock to life for Malcolm. Once the program was removed from Kilokahn, Servo fought Hock, and the virus again retreated. He later reappeared in "The Taunt Heard Round the World" with upgraded armor and was redubbed "The Hockinator". He had a targeting system in one eye and threw powerful boomerangs at Servo that would knock out at least 12% of his power per hit. After Sydney sent him the Dragon Cannon, Servo used it to turn the tide of the battle back in his favor, and soon afterwards Hock was finally eliminated by Servo's Grid Power. The normal Hock did appear again in the episode "Truant False", inside the school's computer, but was destroyed by an anti-virus system that appeared similar to Manfu. Then, in the episode "What Rad Universe!", Sam, who was in an alternate dimension, became Servo with the help of Malcolm (who in that dimension was a nice guy and a computer whiz) to fight the Hockinator once again (brought to life this time by Yolanda Pratchert instead of Malcolm Frink), and defeated him again with the Grid Power.
Sybo (voiced by Glen Beaudin in the first appearance) - When Kilokahn took over Malcolm's body in "Mal-Khan-Tent," he stuck Malcolm inside this orange dinosauric virus (who still had Malcolm's voice) with six long spikelike appendages. The virus tried to break out of Syberspace but was stopped by Servo and destroyed by his Grid Power. After that, Malcolm was back in his own body. Some episodes later, Servo fought and defeated this virus again (this time without Malcolm attached to him). In "Just Brown & Servo," this virus was redesigned by Malcolm with upgraded armor and a mace ball on both his tail and his left hand; this time, he was destroyed by the Grid Power of Synchro. In "Beep My, Beep My Baby," Sybo is sent into Jennifer's pager to give off a hypnotic beep to make her love Malcolm and not Sam. When Sybo assumes his upgraded form, Synchro defeats him in the same manner.
Troid - A blue Pterodactylus-style virus who had the ability to fly and attack from the air. Troid also had the ability to turn invisible while attacking, initially leaving Servo helpless to fight back until he summoned his computer's art program to help, covering the virus in paint and leaving it visible. He first appeared inside Jennifer's pom-poms in "Ashes to Ashes, Disk to Disk," causing whoever held them to be trapped inside Syberspace, where they were stored on floppy disks. After a battle with Servo, he retreated. Servo fought him again at the beginning of "My Virus Ate My Homework", and it was very much the same battle; only this time, Servo destroyed him with his Grid Power Punch.
Unnamed Virus #2 - A robotic dinosauric virus used to knock out the world's electricity in "Hello Darkness, My Old Friend." After an argument with Malcolm about who created the Internet, Kilokahn went behind Malcolm's back and brought this virus to life in order to have the world's electricity flow to him. Malcolm did not like this plan when he found out about it. Because of the power outage, Amp had to operate the exercise bike to provide power so that Sam could become Servo and fight this virus.
With help from Tanker and Drago, Servo was able to defeat this virus with the Grid Power Punch and restore power to the world. In "Portrait of the Artist as a Young Virus," Malcolm recreates this virus to alter the high school schedules so that he would be put in the same class as Jennifer; like before, Servo and Drago defeated it. In "Pratchert's Radical Departure," this virus is sent into Principal Pratchert's bullhorn to regress him back to his hippie days.
Sydney's Virus - When Kilokahn decided he no longer wanted Malcolm creating viruses for him in "Pride Goeth Before a Brawl," he enlisted the aid of another computer user: Sydney. He was able to coerce her into making a virus, which looked like nothing more than a peach-colored, timid creature (with a head actually found on her abdomen) who was not exactly evil and had a voice similar to Sydney's. Servo went in to face her, but the confused virus was not in a fighting mood and apologized for the rampage. Later, Kilokahn showed Malcolm what Sydney had created, and Malcolm made the virus truly evil this time (giving her an upgrade in the form of a black, rhinocerotiform face), and Servo struggled against her. Sydney, feeling responsible, offered to pilot Drago in Tanker's place (to which Tanker agreed) and combined Drago with Servo to bring in Phormo. Towards the end of the battle the virus tried to apologize again, but to no avail; she was eliminated by Phormo's Lightning Grid Power Punch.
Nightmare Virus - This stag beetle/dinosauric virus had a large shell, a mouth with several jagged teeth, and two long, powerful tentacles for arms. It briefly fought Servo in "An Unhelping Hand," with Sydney and Tanker boarding Tracto and Borr to help. In "To Sleep, Perchance to Scream", the Nightmare Virus was sent into Sam's digital clock to give him numerous nightmares. When morning seemingly came and Sam's friends were able to find out where the virus was, it was destroyed by Servo with his Grid Power Punch, with help from Jamb and Torb. After that, Sam woke up; it had all seemingly been a dream.
Rock n' Roll Virus (voiced by Jess Harnell) - A musical delinquent-themed dinosauric virus responsible for turning Mrs. Starkey into a heavy rock and roll maniac in "Rock n' Roll Virucide."
Smog Virus - This tall, black dinosauric virus had two tails from which he could emit deadly red smog. In "Born with a Jealous Mind," after a new floppy drive ate Malcolm's disk and nearly ate his hand (leaving him able to type with only one hand), he used this virus to get revenge on the people of Tokyo, where the floppy drive was presumably made. The smog from the virus caused the residents of Tokyo to collapse everywhere in the streets. Sam told Tanker of this danger as he tried to convince Tanker not to be so jealous of the fact that Sydney was on a date with superstar Chad Williams. Sam then became Servo and went in to confront the virus. He had the upper hand early on, but once the Smog Virus became too powerful for him, especially when he started using his smog against Servo, Tanker boarded Drago and went in to help his buddy. After he had Drago breathe fire at the virus, Tanker used Drago to merge with Servo and create the Phormo program. Phormo manhandled the virus and soon put him out with the Lightning Grid Power Punch. The Smog Virus was later seen in the episode "Do Not Reboot 'Til Christmas", where he was used to make all battery-powered toys explode at midnight on Christmas Eve.
With the help of Phormo (piloted by Tanker), Servo was able to destroy him again, a split second before midnight.
Stupid Virus (voiced by Neil Ross) - A black dinosauric Mega-Virus, similar to the Skeleton Virus, who tampered with the national test scores at the high school in "Cheater, Cheater, Megabyte Eater." Its key power and weakness was its third eye, which was taken out by Servo after the virus was pinned down by Zenon. Earlier in the battle, when Zenon fired its missile fists, the third eye was able to redirect the fists to hit Servo instead. In "Over the River and Through the Grid," the Stupid Virus was sent to cut Mrs. Starkey's utilities, making her unable to attend the annual Thanksgiving motorcycle trip with her friends, as it was instructed to immobilize any electronic device within a certain radius. Servo and Zenon defeated it with the same tactic.
Manfu - A dark green hunchbacked Mega-Virus whose head is embedded in his abdomen; on that head he had two horns from which he could emit blasts of electricity. This was one of Malcolm's deadliest viruses, and Servo encountered him three different times. He was first sent into the school's water fountain in "Stiff as a Motherboard," and when Sam tried to drink from it, he became completely paralyzed, unable to move or speak. With the help of his friends, Sam was able to go into the computer and become Servo. In battle, Manfu trapped Servo (encasing him in computer chips from the waist down), and while Servo managed to hold back Manfu for some time with his shield and sword, it was not enough, prompting the others to form Zenon to help Servo. However, Manfu proved to be too much for Zenon, first shooting off its arms and then finishing it off with a second blast, causing Sydney, Tanker, and Amp to be thrust back out of cyberspace and into the real world. Servo then escaped by reflecting the energy back at him, broke off Manfu's horns, and tossed him high in the air; while the virus was still spinning in the air, Servo eliminated him with his Grid Power Punch. In "Syberteria Combat," Manfu created a barrier around the school at night. The battle went similarly, though this time Manfu took out Zenon in one blast, but Servo emerged victorious again. In "Truant False," the school computer sent an anti-virus after Hock, which Malcolm used to try to alter his attendance records so he could skip summer school. After the anti-virus destroyed Hock, Malcolm decided to corrupt the anti-virus, modifying its look to make it look exactly like Manfu, and Servo went to battle it. Sydney reminded Servo that he could not destroy it, since it was actually an anti-virus and destroying it would crash the computer system. The anti-virus very nearly destroyed Servo, but Sydney was able to reprogram it just in time, turning it back into the anti-virus and making it good again.
Nixtor - A Hercules beetle/dinosauric Mega-Virus used by Malcolm in "Water You Doing?" to turn the city's water supply into hydrochloric acid. He was brown in color, crawled on four legs, and had two pincerlike horns sticking out of his head. When Servo first encountered him, Nixtor made the ground below Servo melt, causing him to fall one level down to where Nixtor was actually hiding. He then had Servo locked up in chains and sprayed deadly gas at him. The gang prepared to aid Servo, but Tanker could not go in the end because he was sick to his stomach. Sydney and Amp boarded Borr and Vitor to aid Servo in battle; they freed him and helped turn the tide of the battle back in his favor.
In the end, Servo managed to win by wiping out the virus with his Grid Power Punch. This virus was used again in "Tanks For the Memories," where he was sent into Tanker's Walkman, completely messing with Tanker's personality by distracting him and making him say obscure quotes. The battle was the same as before, with Tanker's impairment keeping him from joining it this time.
Raedon - A Mega-Virus that resembles an armor-upgraded version of Troid. Its wings can emit deadly lasers or be used to fly. In "Starkey in Syberspace," Kilokahn and Malcolm realized that their monsters were typically stronger than Servo, who managed to win only because the others provided back-up in their vehicles. Kilokahn created an Input/Output barrier so that the next time Servo went into Syberspace, to battle Raedon, he would be cut off from the help of Team Samurai. However, Mrs. Starkey had accidentally transported herself into the digital world before the barrier was up, when she mistook a picture of the Drago Jet on Sydney's laptop for a video game. Per Kilokahn's expectations, this virus proved to be super strong and almost had Servo beaten. However, Mrs. Starkey (who was unaware of Servo's origins or identity) helped Servo out against the monster. After being fired at a few times by the Starkey-piloted Drago Jet, Raedon was finally eliminated by Servo's Grid Power Punch.
Unnamed Virus #3 - This virus was not officially named on the show, but he very much resembled a non-armored version of Krono, with a light bulb-like structure on his head. Upon reluctantly agreeing to save Malcolm from Mrs. Starkey (who was under the love spell that came from the music box the virus was placed into) in "Love Me Don't," Sam went in as Servo and made short work of the monster, tossing him about, breaking the light bulb on his head, and destroying him with his Grid Power. This was also the virus featured in "Take a Hike," where he was modified to knock out all the electricity in the world. As Sam was out of town, Tanker had to become Servo to take this virus on. He did, and he manhandled the virus (Sydney remarked that Tanker fights viruses like he plays football) before destroying him with the Grid Power Punch.

Arsenal programs

Zenon program
Zenon: A powerful humanoid created when the Vitor, Tracto, and Borr programs combine. Its fists can execute a ranged rocket punch, as in the battle against Kord, which it almost defeats on its own.
Borr: An orange and black twin-drilled tank that can fly or burrow underground. Its driver is Sydney. Also, a famous basketball player, Charles "High Jump" Johnson, is dragged along to fight a Mega-Virus monster as a temporary driver for Borr when the band use the vehicles to escape the locked high school (in "Syber-Dunk"). Borr forms either Synchro's gauntlets and shoulder armor or Zenon's lower torso and upper legs.
Tracto: A blue laser-equipped mini-tank. Its driver is Tanker. Tracto forms either Synchro's boots or Zenon's legs.
Vitor: A red fighter jet armed with lasers, missiles, and a rig to restrain monsters. Its pilot is Amp and later Lucky. On one occasion it is piloted by Sydney, who has trouble flying it. Vitor forms either Synchro's helmet and body armor or Zenon's head, arms, and upper torso.
In an odd occurrence, Zenon fights Servo in "Que Sera Servo," when a Mega-Virus places Servo under a spell that makes him obey only Kilokahn, until Amp is able to break the virus's hold by using Syd's belt to reboot him. When Borr, Tracto, and Vitor combine with Servo, they form Servo's upgrade known as Synchro, which is armed with a pair of shoulder drill missiles.

Drago program
Drago: A wingless dragon assembled when Jamb and Torb combine. It is piloted almost always by Tanker and occasionally by Sydney. Later in the series, Jamb and Torb appear as a single jet fighter, piloted by either Sydney or Tanker, that eventually just transforms into Drago. Neither Amp nor Lucky pilots Drago during the series' run. While Tanker and Syd are the default pilots for the "Drago Jet", Mrs. Starkey did act as the pilot in "Starkey in Syberspace".
Jamb: A dragon-head-shaped mini-jet which also serves as the Dragon Cannon, a bazooka-like flamethrower used by Servo. Its main pilot is Sydney.
Torb: A jet with various weaponry. Its main pilot is Tanker.
When Drago combines with Servo, they form Servo's second upgrade, known as Phormo, which is armed with a pair of laser gauntlets.

Production

Superhuman Samurai Syber-Squad was originally created by Tsuburaya Productions, Ultracom Inc. and DIC Productions, L.P., and was originally going to be named PowerBoy, but was renamed during production to avoid confusion with Saban Entertainment's American tokusatsu series Mighty Morphin Power Rangers. The series was made to capitalize on the upsurge in popularity of imported Japanese monster-robot shows, which could be adapted with new, regionalized live-action footage. The series' development mirrored the creative construct established earlier with the Teenage Mutant Ninja Turtles. The master toy licensee, Playmates Toys, funded the series, shaped its American development via toy licensing rights, and did a commercial buy-in on the Fox network, where Haim Saban had established a kids' block with programs such as Mighty Morphin Power Rangers and the 1992 X-Men cartoon. Playmates called upon the development team at DIC, which, coincidentally, was working with Pangea Corporation, the company that had assisted in the development of DIC's New Kids on the Block and Playmates's earlier hit, Teenage Mutant Ninja Turtles. DIC, Pangea, and Playmates's marketing group created an ensemble of character names, traits, and profiles, which were spun into a series offering. Under a product placement deal, Compaq computers were prominently featured in the series and were used to generate the show's computer-generated graphics. Elements of this series are used in SSSS.Gridman, the anime series adaptation of Gridman the Hyper Agent; the "SSSS" abbreviation in its title references Superhuman Samurai Syber-Squad.

Episodes

Home media release

In 1995, Buena Vista Home Video (under the DIC Toon Time Video label) released the series on three two-episode VHS cassettes. On February 19, 2013, Mill Creek Entertainment released the series' first DVD volume in Region 1 for the very first time; the three-disc set features the first 28 episodes of the series. On October 1 of that year, Mill Creek released the second DVD volume, which features the remaining 25 episodes.

Online distribution

For a time, five episodes at once (new episodes were added and old episodes removed on Wednesdays) were available on Jaroo, an online video site then operated by Cookie Jar Entertainment, with which DIC had merged.
Cookie Jar was later taken over by DHX Media, and the Jaroo site closed as a result; DHX Media stated that it planned to relocate the site and its shows for online distribution. As of February 2016, the series could be streamed through the Pluto TV app on the "After School Cartoons" channel 370. See also
Gridman the Hyper Agent – the Japanese tokusatsu series from which this series was adapted
Mighty Morphin Power Rangers – a 1990s series also adapted from a Japanese tokusatsu
SSSS.Gridman – the anime series adaptation of Gridman the Hyper Agent
SSSS.Dynazenon – the sequel to SSSS.Gridman
References External links Superhuman Samurai Syber-Squad at DHX Media Ltd. 1990s American children's television series 1990s American comic science fiction television series 1990s American high school television series 1994 American television series debuts 1995 American television series endings Television series by DIC Entertainment Television series by Fremantle (company) Teen superhero television series American Broadcasting Company original programming American children's action television series American children's adventure television series American children's fantasy television series American comic science fiction television series American television series based on Japanese television series English-language television shows Cyberpunk television series Martial arts television series Playmates Toys Tsuburaya Productions Television shows about virtual reality Ultra television series Television series about teenagers Japan in non-Japanese culture Television series created by Jymn Magon
16476456
https://en.wikipedia.org/wiki/3317%20Paris
3317 Paris
3317 Paris is a large Jupiter trojan from the Trojan camp, approximately 116 kilometers in diameter. It was discovered on 26 May 1984 by the American astronomer couple Carolyn and Eugene Shoemaker at Palomar Observatory in California, United States. The unusual and likely spherical T-type asteroid is one of the largest Jupiter trojans and has a rotation period of 7.1 hours. It was named after the Trojan prince Paris from Greek mythology. Orbit and classification Paris is located in Jupiter's trailing Lagrangian point (L5), 60° behind Jupiter in the so-called Trojan camp. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.6–5.9 AU once every 11 years and 11 months (4,359 days; semi-major axis of 5.22 AU). Its orbit has an eccentricity of 0.13 and an inclination of 28° with respect to the ecliptic. The body's observation arc begins at Goethe Link Observatory in August 1963, more than 20 years prior to its official discovery observation at Palomar. Physical characteristics In the SMASS classification, Paris is a rare T-type asteroid, while in the Bus–DeMeo classification it is a dark D-type asteroid, the most common type among the Jupiter trojans. Its V–I color index of 0.95 is typical for D-type asteroids. Rotation period Several rotational lightcurves have been obtained since November 1990, when the first photometric observations of Paris – made by Italian astronomer Stefano Mottola, using the ESO 1-metre telescope at La Silla Observatory in Chile – gave a rotation period of about 7.1 hours with a low brightness variation. In July 1998, Mottola measured an identical period with an amplitude of 0.10 magnitude at Calar Alto Observatory in Spain. Follow-up observations by Robert Stephens at the Center for Solar System Studies during 2016–2017 measured periods of 7.048 and 7.091 hours, each with an amplitude of 0.11 magnitude, superseding a period of 7.08 hours reported by René Roy and Federico Manzini in 2008 and 2009, respectively. The low brightness variation measured in all photometric observations is also indicative of a spherical, rather than elongated, shape. Diameter and albedo An occultation of a star by Paris was observed on 17 August 2010; it yielded estimates of the major and minor occultation axes, though with a poor fit to the data. According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Paris measures between 116.26 and 120.45 kilometers in diameter and its surface has an albedo between 0.055 and 0.0626. The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0625 and adopts a diameter of 116.26 kilometers from IRAS, based on an absolute magnitude of 8.3. In the catalogs of the three surveys mentioned above, Paris is the 6th, 10th and 11th largest Jupiter trojan, respectively. Naming This minor planet was named from Greek mythology, after prince Paris, one of the many sons of King Priam of Troy. His abduction of Helen of Troy, wife of Menelaus, gave cause to the Trojan War. The official naming citation was published by the Minor Planet Center on 27 December 1985. In culture In the webcomic Schlock Mercenary, several characters are held hostage by the mob in corporate offices located on 3317 Paris.
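The orbital period quoted above follows directly from the semi-major axis through Kepler's third law. The short Python check below uses only figures already given in the orbit section; the small difference from the quoted 4,359 days comes from rounding the semi-major axis to two decimal places.

# Consistency check using Kepler's third law for a body orbiting
# the Sun: P^2 = a^3, with P in years and a in astronomical units.
a_au = 5.22                  # semi-major axis from the article
period_years = a_au ** 1.5   # P = a^(3/2)
period_days = period_years * 365.25

print(f"period: {period_years:.2f} years")  # about 11.9 years
print(f"period: {period_days:.0f} days")    # about 4,356 days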
Notes References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Discoveries by Eugene Merle Shoemaker Discoveries by Carolyn S. Shoemaker Minor planets named from Greek mythology Named minor planets
60723450
https://en.wikipedia.org/wiki/Evan%20Mobley
Evan Mobley
Evan Mobley (born June 18, 2001) is an American professional basketball player for the Cleveland Cavaliers of the National Basketball Association (NBA). He played college basketball for the USC Trojans and was selected third overall by the Cleveland Cavaliers in the 2021 NBA draft. Early life and high school career Mobley, along with his older brother, Isaiah, began playing basketball from an early age under the guidance of their father, Eric, a former basketball player. Evan was initially reluctant to play basketball but became more interested in the sport in eighth grade, when he stood 6'4". Mobley began playing high school basketball as a freshman at Rancho Christian School in Temecula, California. In his first three years, he was teammates with Isaiah, a five-star recruit in the 2019 class. As a junior at Rancho Christian, Mobley averaged 19.2 points, 10.4 rebounds, and 4.7 blocks per game. He was named California Gatorade Player of the Year and The Press-Enterprise player of the year. In his senior season, Mobley averaged 20.5 points, 12.2 rebounds, 5.2 blocks, and 4.6 assists per game, leading Rancho Christian to a 22–8 record. He repeated as California Gatorade Player of the Year, joining Jrue Holiday as the award's only two-time winners. Mobley was named Morgan Wootten National Player of the Year. He was also selected to play in the McDonald's All-American Game, Jordan Brand Classic, and Nike Hoop Summit, but all three games were canceled due to the COVID-19 pandemic. Recruiting Mobley was considered a consensus five-star recruit and one of the top three players in the 2020 recruiting class, and was at one point ranked ahead of Cade Cunningham. On August 5, 2019, he committed to play college basketball for USC over offers from UCLA and Washington, among other major NCAA Division I programs. Mobley became one of the highest-ranked players to join the program. College career In his college debut for USC on November 25, 2020, Mobley scored 21 points and had nine rebounds in a 95–87 overtime win against California Baptist. On March 11, 2021, in the Pac-12 Tournament quarterfinals, he posted a career-high 26 points, nine rebounds and five blocks in a 91–85 double overtime victory over Utah. In a 72–70 semifinal loss to Colorado, Mobley scored 26 points for a second time, while recording nine rebounds and five blocks. As a freshman, he averaged 16.4 points, 8.7 rebounds, 2.8 blocks and 2.4 assists per game. Mobley was named the Pac-12 Player of the Year, Defensive Player of the Year and Freshman of the Year. He became the second player from a major conference to win the trio of awards, joining Anthony Davis of the Southeastern Conference in 2012. On April 16, 2021, Mobley declared for the 2021 NBA draft, forgoing his remaining college eligibility. Mobley was seen by many as the second-best prospect in the 2021 NBA draft behind Cade Cunningham. Professional career Cleveland Cavaliers (2021–present) Mobley was selected third overall in the 2021 NBA draft by the Cleveland Cavaliers. On August 3, 2021, Mobley signed with the Cavaliers. On August 8, 2021, he made his summer league debut in an 84–76 loss to the Houston Rockets, in which he posted 12 points, five rebounds, and three blocks in 28 minutes. On October 20, Mobley made his NBA debut, putting up 17 points, nine rebounds, and six assists in a 132–121 loss to the Memphis Grizzlies. On November 15, Mobley suffered a sprained right elbow in a 98–92 loss to the Boston Celtics.
Mobley was named the NBA Eastern Conference Rookie of the Month for games played in October/November. On December 8, Mobley became the Cavaliers' first rookie since LeBron James in March 2004 to record five blocks in an NBA game. National team career Mobley played for the United States at the 2018 FIBA Under-17 World Cup in Argentina. In seven games, he averaged 9.3 points, 5.6 rebounds and 2.6 assists per game, helping his team win the gold medal. Mobley joined the United States for the 2019 FIBA Under-19 World Cup in Heraklion, Greece, but he was limited to playing two games and a total of seven minutes in the tournament due to back spasms. His team won the gold medal despite his absence. Career statistics College

Season   Team  GP   GS   MPG   FG%   3P%   FT%   RPG  APG  SPG  BPG  PPG
2020–21  USC   33*  33*  33.9  .578  .300  .694  8.7  2.4  .8   2.9  16.4

Personal life Mobley's father, Eric, played college basketball for Cal Poly Pomona and Portland and played professionally in China, Indonesia, Mexico and Portugal. He later coached Amateur Athletic Union (AAU) basketball for 11 years. In 2018, he was hired as an assistant basketball coach for USC. Mobley's older brother, Isaiah, also plays for USC. His mother, Nicol, is an elementary school teacher. Mobley grew up with three foster siblings, including a Chinese exchange student named Johnny. References External links USC Trojans bio USA Basketball bio 2001 births Living people All-American college men's basketball players American men's basketball players Basketball players from San Diego Centers (basketball) Cleveland Cavaliers draft picks Cleveland Cavaliers players McDonald's High School All-Americans Power forwards (basketball) USC Trojans men's basketball players
33417844
https://en.wikipedia.org/wiki/FUJITSU%20Cloud%20IaaS%20Trusted%20Public%20S5
FUJITSU Cloud IaaS Trusted Public S5
FUJITSU Cloud IaaS Trusted Public S5 is a Fujitsu cloud computing platform that aims to deliver standardized enterprise-class public cloud services globally. It offers Infrastructure-as-a-Service (IaaS) from Fujitsu's data centers to provide computing resources that can be employed on-demand and suited to customers' needs. The service ensures a high level of reliability that is sufficient for deployment in mission-critical systems. In Japan, the service was offered as the On-Demand Virtual System Service (OViSS) and was then launched globally as Fujitsu Global Cloud Platform/S5 (FGCP/S5). Since July 2013 the service has been called IaaS Trusted Public S5. Globally, the service is operated from Fujitsu data centers located in Australia, Singapore, the United States, the United Kingdom and Germany. Fujitsu has also launched a Windows Azure powered Global Cloud Platform in a partnership with Microsoft. This is a Platform-as-a-Service (PaaS) offering that was known as FGCP/A5 in Japan but has since been renamed FUJITSU Cloud PaaS A5 for Windows Azure. It is operated from a Fujitsu data center in Japan. It offers a set of application development frameworks, such as Microsoft .NET, Java and PHP, and data storage capabilities consistent with the Windows Azure platform provided by Microsoft. The basic service consists of compute, storage, Microsoft SQL Azure, and Windows Azure AppFabric technologies such as Service Bus and Access Control Service, with options for interoperating services covering implementation and migration of applications, system building, systems operation, and support. In 2015, Fujitsu launched its next generation Cloud Service K5, which was deployed globally. In October 2018, Fujitsu announced that it was discontinuing K5 in all regions except Japan. On October 16, 2018, the company stated that it would hire 10,000 employees and train them to use Microsoft Azure in order to "address what we see as an industry-wide shortage in cloud related skills, so that we can help clients address their execution gap in the provision of services which support operational efficiency, digital co-creation and multi-cloud management." History Fujitsu launched its global cloud strategy in April 2010. Provision of services from this platform was offered on a trial basis to 200 companies in Japan from the following month. Fujitsu announced general availability of the IaaS service in Japan, under the name On-Demand Virtual System Service (OViSS), starting on 1 October 2010. As part of the service's global rollout, it was launched in Australia under the name Fujitsu Global Cloud Platform (FGCP) in February 2011. This was followed by launches in March 2011 in Singapore and in May 2011 in the United Kingdom, Germany and the United States of America. In July 2012, Fujitsu added a center in western Japan to bring the total number to seven. In July 2013, Fujitsu announced the FUJITSU Cloud Initiative globally, which also introduced the service's new name, FUJITSU Cloud IaaS Trusted Public S5. In October 2018, Fujitsu announced that it was discontinuing K5 in all regions except Japan. Features Virtual system The basic component in the IaaS Trusted Public S5 is called a Virtual System. It consists of a firewall and one or multiple network segments in which customers can host their virtual machines. Once signed up, customers can deploy multiple systems within their environment, much like a virtual data center. Templates IaaS Trusted Public S5 employs templates to allow customers to quickly deploy virtual systems.
A template consists of a firewall, network segment definitions, and virtual servers with OS/middleware installed and set up. This allows, for example, quick selection of a system with three network zones, with a pre-installed web server in the DMZ, an application server in one secure zone, and a database in another secure zone. A virtual system includes a firewall to control access between the network segments and from and to the Internet and intranet. The intranet connection is for communication between the virtual servers and customers' existing servers hosted in the same Fujitsu data center. Virtual servers Virtual servers are chosen from a list of pre-defined images, which have either only an operating system installed or additional software as well. Servers with different computational resources are offered. Operating systems Operating systems offered globally include Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux, CentOS and Ubuntu. Microsoft Office Fujitsu has claimed that licensing is the only issue that prevents delivery of Microsoft Office as a cloud service to web-based devices, in a manner similar to Google Docs. Storage A virtual server has a system disk of a pre-defined size. Additional disks, with sizes configurable from 10 GB to 10 TB, can be attached to a server when it is not running. Network Customers can provision global IP addresses and assign them to virtual servers. This is similar to Amazon's Elastic IP Address feature. Service portal Users provision and manage their virtual systems through a self-service portal. It can also be used to initiate a VPN connection to a virtual zone. APIs All operations offered through the service portal are also offered through a cloud API. The API is XML-RPC based, using SSL to encrypt its messages and certificates for authentication. Operations are also possible through the multi-cloud API Apache Deltacloud. Fujitsu has submitted its cloud API specification to the Distributed Management Task Force's Open Cloud Standards Incubator to promote open standards for cloud interoperability, and contributed to the DMTF's Cloud Infrastructure Management Interface (CIMI) standard. In July 2013, Fujitsu demonstrated a client using CIMI to manage a system in IaaS Trusted Public S5 at a Management Developers Conference. Server locations Servers are physically located in Fujitsu's Tier III data centers in Japan, Australia, Singapore, the UK, the US and Germany. References External links FUJITSU Cloud IaaS Trusted Public S5 General Information FUJITSU Cloud IaaS Trusted Public S5 General Documents and How-To Guides IaaS Trusted Public S5 Service Portal for East Japan IaaS Trusted Public S5 Service Portal for West Japan IaaS Trusted Public S5 Service Portal for Australia and New Zealand IaaS Trusted Public S5 Service Portal for Singapore, Malaysia, Indonesia, Thailand and Vietnam IaaS Trusted Public S5 Service Portal for the UK and Ireland IaaS Trusted Public S5 Service Portal for the Americas IaaS Trusted Public S5 Service Portal for Central Europe (CEMEA (Central Europe, Middle East, Eastern Europe, Africa) & India) API Design for IaaS Cloud Computing Service - IaaS Trusted Public S5's Cloud API Specification Fujitsu products Cloud platforms Cloud infrastructure Cloud computing providers
18388
https://en.wikipedia.org/wiki/Laoco%C3%B6n
Laocoön
Laocoön (Greek: Λαοκόων, gen.: Λαοκόοντος), the son of Acoetes, is a figure in Greek and Roman mythology and the Epic Cycle ("Laocoon, son of Acoetes, brother of Anchises, and priest of Apollo…"; Hyginus, Fabula 135). He was a Trojan priest who was attacked, with his two sons, by giant serpents sent by the gods. The story of Laocoön has been the subject of works by numerous artists, both in ancient and in more modern times. Death The most detailed description of Laocoön's grisly fate was provided by Quintus Smyrnaeus in Posthomerica, a later, literary version of events following the Iliad. According to Quintus, Laocoön begged the Trojans to set fire to the Trojan horse to ensure it was not a trick. Athena, angry with him and the Trojans, shook the ground around Laocoön's feet and painfully blinded him. The Trojans, watching this unfold, assumed Laocoön was punished for the Trojans' mutilating and doubting Sinon, the undercover Greek soldier sent to convince the Trojans to let him and the horse inside their city walls. Thus, the Trojans wheeled the great wooden horse in. Laocoön did not give up trying to convince the Trojans to burn the horse, and Athena made him pay even further. She sent two giant sea serpents to strangle and kill him and his two sons. In another version of the story, it was said that Poseidon sent the sea serpents to strangle and kill Laocoön and his two sons. According to Apollodorus, it was Apollo who sent the two sea serpents. Laocoön had insulted Apollo by sleeping with his own wife in front of the "divine image". Virgil used the story in the Aeneid. According to Virgil, Laocoön advised the Trojans not to receive the horse from the Greeks. They disregarded Laocoön's advice and were taken in by the deceitful testimony of Sinon. The enraged Laocoön threw his spear at the Horse in response. Minerva then sent sea serpents to strangle Laocoön and his two sons, Antiphantes and Thymbraeus, for his actions. "Laocoön, ostensibly sacrificing a bull to Neptune on behalf of the city (lines 201ff.), becomes himself the tragic victim, as the simile (lines 223–24) makes clear. In some sense, his death must be symbolic of the city as a whole," S. V. Tracy notes. According to the Hellenistic poet Euphorion of Chalcis, Laocoön is in fact punished for procreating upon holy ground sacred to Poseidon; only unlucky timing caused the Trojans to misinterpret his death as punishment for striking the horse, which they bring into the city with disastrous consequences. The episode furnished the subject of Sophocles' lost tragedy, Laocoön.
In the Aeneid, Virgil describes the circumstances of Laocoön's death:

From the Aeneid:
Ille simul manibus tendit divellere nodos
perfusus sanie vittas atroque veneno,
clamores simul horrendos ad sidera tollit:
qualis mugitus, fugit cum saucius aram
taurus et incertam excussit cervice securim.

Literal English translation:
At the same time he stretched forth to tear the knots with his hands,
his fillets soaked with saliva and black venom,
at the same time he lifted to heaven horrendous cries:
like the bellowing when a wounded bull has fled from the altar
and has shaken the ill-aimed axe from its neck.

John Dryden's translation:
With both his hands he labors at the knots;
His holy fillets the blue venom blots;
His roaring fills the flitting air around.
Thus, when an ox receives a glancing wound,
He breaks his bands, the fatal altar flies,
And with loud bellowings breaks the yielding skies.

Classical descriptions The story of Laocoön is not mentioned by Homer, but it had been the subject of a tragedy, now lost, by Sophocles and was mentioned by other Greek writers, though the events around the attack by the serpents vary considerably. The most famous account of these is now in Virgil's Aeneid where Laocoön was a priest of Neptune (Poseidon), who was killed with both his sons after attempting to expose the ruse of the Trojan Horse by striking it with a spear. Virgil gives Laocoön the famous line "Equō nē crēdite, Teucrī / Quidquid id est, timeō Danaōs et dōna ferentēs", or "Do not trust the Horse, Trojans / Whatever it is, I fear the Greeks even bearing gifts." This line is the source of the saying: "Beware of Greeks bearing gifts." In Sophocles, however, he was a priest of Apollo who should have been celibate but had married. The serpents killed only the two sons, leaving Laocoön himself alive to suffer. In other versions, he was killed for having committed an impiety by making love with his wife in the presence of a cult image in a sanctuary, or simply making a sacrifice in the temple with his wife present. In this second group of versions, the snakes were sent by Poseidon and in the first by Poseidon and Athena, or Apollo, and the deaths were interpreted by the Trojans as proof that the horse was a sacred object. The two versions have rather different morals: Laocoön was either punished for doing wrong, or for being right. Later depictions The death of Laocoön was famously depicted in a much-admired marble Laocoön and His Sons, attributed by Pliny the Elder to the Rhodian sculptors Agesander, Athenodoros, and Polydorus, which stands in the Vatican Museums, Rome. Copies have been executed by various artists, notably Baccio Bandinelli. These show the complete sculpture (with conjectural reconstructions of the missing pieces) and are located in Rhodes, at the Palace of the Grand Master of the Knights of Rhodes, Rome, the Uffizi Gallery in Florence and in front of the Archaeological Museum, Odessa, Ukraine, amongst others. Alexander Calder also designed a stabile which he called Laocoön in 1947; it is part of the Eli and Edythe Broad collection in Los Angeles. The marble Laocoön provided the central image for Lessing's Laocoön, 1766, an aesthetic polemic directed against Winckelmann and the comte de Caylus. Daniel Albright reengages the role of the figure of Laocoön in aesthetic thought in his book Untwisting the Serpent: Modernism in Literature, Music, and Other Arts. In addition to other literary references, John Barth employs a bust of Laocoön in his novella, The End of the Road. The R.E.M.
song "Laughing" references Laocoön, rendering him female ("Laocoön and her two sons"), they also reference Laocoön in the song "Harborcoat". The marble's pose is parodied in the comic book Asterix and the Laurel Wreath. American author Joyce Carol Oates also references Laocoön in her 1989 novel American Appetites. In Stave V of A Christmas Carol, by Charles Dickens (1843), Scrooge awakes on Christmas morning, "making a perfect Laocoon of himself with his stockings". Barbara Tuchman's The March of Folly begins with an extensive analysis of the Laocoön story. The American feminist poet and author Marge Piercy includes a poem titled, "Laocoön is the name of the figure", in her collection Stone, Paper, Knife (1983), relating love lost and beginning. John Steinbeck references Laocoön in his American literary classic East of Eden, referring to a picture of “Laocoön completely wrapped in snakes” when describing artwork hanging in classrooms at the Salinas schoolhouse. In Hector Berlioz's opera Les Troyens, the death of Laocoön is a pivotal moment of the first act after Aeneas' entrance, sung by eight singers and a double choir ("ottetto et double chœur"). It begins with the verse "Châtiment effroyable" ("frightful punishment"). Namesake 3240 Laocoon, an asteroid named after Laocoön Notes References Sources Boardman, John ed., The Oxford History of Classical Art, 1993, OUP, Gall, Dorothee and Anja Wolkenhauer (hg). Laokoon in Literatur und Kunst: Schriften des Symposions "Laokoon in Literatur und Kunst" vom 30.11.2006, Universität Bonn (Berlin; New York: Walter de Gruyter, 2009) (Beiträge zur Altertumskunde, 254). Smith, R.R.R., Hellenistic Sculpture, a handbook, Thames & Hudson, 1991, Classical sources Compiled by Tracy, 1987:452 note 3, which also mentions a fragmentary line possibly by Nicander. Arctinus, OCT Homer 5.107.23 Dionysius of Halicarnassus, Roman Antiquities 1.48.2 Hyginus, Fabula 135 Petronius 89; Servius on Aeneid 2.201 pseudo-Apollodorus, Epitome 5.18 Quintus Smyrnaeus, Posthomerica 12.445ff John Tzetzes, Ad Lycophron'' 347 External links Laocoon in the Digital Sculpture Project Towards a Newest Laocoön Mythological Greek seers Characters in the Aeneid Trojans Characters in Greek mythology Deeds of Apollo Deeds of Poseidon
6533945
https://en.wikipedia.org/wiki/Leaky%20abstraction
Leaky abstraction
In software development, a leaky abstraction is an abstraction that leaks details that it is supposed to abstract away. As coined by Joel Spolsky, the Law of Leaky Abstractions states: "All non-trivial abstractions, to some degree, are leaky." This statement highlights a particularly problematic cause of software defects: the reliance of the software developer on an abstraction's infallibility. Spolsky's article gives examples of an abstraction that works most of the time, but where a detail of the underlying complexity cannot be ignored, thus leaking complexity out of the abstraction back into the software that uses the abstraction. History The term "leaky abstraction" was popularized in 2002 by Joel Spolsky. An earlier paper by Gregor Kiczales describes some of the issues with imperfect abstractions and presents a potential solution to the problem by allowing for the customization of the abstraction itself. Effect on software development As systems become more complex, software developers must rely upon more abstractions. Each abstraction tries to hide complexity, letting a developer write software that "handles" the many variations of modern computing. However, this law claims that developers of reliable software must learn the abstraction's underlying details anyway. Examples Spolsky's article cites many examples of leaky abstractions that create problems for software development: The TCP/IP protocol stack is the combination of TCP, which tries to provide reliable delivery of information, running on top of IP, which provides only 'best-effort' service. When IP loses a packet, TCP has to retransmit it, which takes additional time. Thus TCP provides the abstraction of a reliable connection, but the implementation details leak through in the form of potentially variable performance (throughput and latency both suffer when data has to be retransmitted). Iterating over a large two-dimensional array can have radically different performance if done horizontally rather than vertically, depending on the order in which elements are stored in memory. One direction may vastly increase cache misses and page faults, both of which greatly delay access to memory.
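This cache-locality leak can be made concrete with a short sketch. The Python example below is a minimal illustration assuming NumPy is available and using an arbitrary array size; it sums a large array first along contiguous rows and then along strided columns, and on most machines the column-wise loop is noticeably slower even though the two loops are logically equivalent.

# A minimal sketch of the cache-locality leak described above.
# Absolute timings vary by machine; only the relative difference matters.
import time

import numpy as np

a = np.zeros((4000, 4000))  # stored row-major (C order) by default

start = time.perf_counter()
for i in range(a.shape[0]):
    a[i, :].sum()           # rows are contiguous in memory
row_time = time.perf_counter() - start

start = time.perf_counter()
for j in range(a.shape[1]):
    a[:, j].sum()           # columns are strided: poor cache locality
col_time = time.perf_counter() - start

print(f"row-wise:    {row_time:.3f} s")
print(f"column-wise: {col_time:.3f} s")  # typically the slower of the two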
The SQL language abstracts away the procedural steps for querying a database, allowing one to merely define what one wants. But certain SQL queries are thousands of times slower than other logically equivalent queries. On an even higher level of abstraction, ORM systems, which isolate object-oriented code from the implementation of object persistence using a relational database, still force the programmer to think in terms of databases, tables, and native SQL queries as soon as performance of ORM-generated queries becomes a concern. Although network file systems like NFS and SMB let one treat files on remote machines as if they were local, the connection to the remote machine may slow down or break, and the file stops acting as if it were local. The ASP.NET web forms programming platform, not to be confused with ASP.NET MVC, abstracts away the difference between HTML code to handle clicking on a hyperlink (<a>) and code to handle clicking on a button. However, ASP.NET needs to hide the fact that in HTML there is no way to submit a form from a hyperlink. It does this by generating a few lines of JavaScript and attaching an onclick handler to the hyperlink. However, if the end user has JavaScript disabled, the ASP.NET application malfunctions. Furthermore, one cannot naively think of event handlers in ASP.NET in the same way as in a desktop GUI framework such as Windows Forms; due to the asynchronous nature of the Web, processing event handlers in ASP.NET requires exchanging data with the server and reloading the form. Git's interface is another example of a leaky abstraction. See also Abstraction inversion Dependency inversion principle Essential complexity Modular programming Separation of concerns References Abstraction
1868140
https://en.wikipedia.org/wiki/Unreal%20Tournament%203
Unreal Tournament 3
Unreal Tournament 3 (UT3) is a first-person arena shooter video game developed by Epic Games and published by Midway Games. Part of the Unreal franchise, it is the fourth game in the Unreal Tournament series, and the eighth game overall; its name reflects the fact that it is the first game in the franchise to use Unreal Engine 3. It was released on November 19, 2007, for Microsoft Windows, December 11 for the PlayStation 3, and on July 3, 2008, for the Xbox 360. OS X and Linux ports were planned, but they were eventually cancelled. Similar to its predecessors, Unreal Tournament 3 is primarily an online multiplayer title. There are eight modes, including Deathmatch and Capture the Flag, as well as modes like Duel, Warfare, Betrayal and Greed. In vehicle maps, the player is equipped with a hoverboard, which allows players to quickly traverse large maps and grapple onto other teammates' vehicles. The game's single-player campaign does not follow a plot based around the eponymous tournament, but rather a Necris attack that occurs on a colony on an unknown planet, releasing the armed Krall, a warlike race of aliens, on the humans. The game received positive reviews from critics, and sold more than 1 million copies worldwide. Gameplay Similar to the prior entries of the series, the game is primarily an online multiplayer title offering several game modes, including large-scale Warfare, Capture the Flag, and Deathmatch. It also includes an extensive offline multiplayer game with an in-depth story, beginning with a simple tournament ladder and including team members with unique personalities. The following game modes are included:
Deathmatch
Team Deathmatch
Capture the Flag
Duel – A one-versus-one game mode. It uses a queuing system: the winner stays, and the loser goes back to the end of the queue. A typical match lasts fifteen minutes, with the winner being the player with the most kills.
Warfare – A mix of the Onslaught and Assault game modes. While the basic game rules are equal to those of Onslaught, Warfare adds countdown nodes (which, after being captured and defended for a certain period of time, create a vehicle or trigger an event helpful to the capturing team) as well as the orb, which can be used to instantly capture and defend nodes.
Vehicle Capture the Flag – Capture the Flag, with vehicles as part of the map; this game mode is distinct from the standard Capture the Flag mode. Also, players are given a hoverboard rather than a translocator.
Betrayal – This game type places freelance players on teams, and when the members of each team kill enemies, the pot for that team grows. Anybody on a team with a pot can betray the rest of the team by shooting them, thus taking the pot, but they must defend themselves from the betrayed teammates for 30 seconds after that, or the teammates receive extra points.
Greed – A game that (like the UT2004 mod of the same name) focuses on collecting skulls dropped from dead players and capturing them in the opposing team's base. For Greed, the game uses all Capture the Flag and Vehicle Capture the Flag maps.
Modes not returning from the prior Unreal Tournament games include Invasion, Mutant (later partially replaced by the Titan mutator in the UT3 Titan Pack), Onslaught (replaced by Warfare), Bombing Run, Last Man Standing, Domination, and Double Domination. Assault was removed from the game during production. In this installment of Unreal Tournament, the vehicles are split into two factions, the Axon vehicles and the Necris vehicles.
The Axon vehicles are the same vehicles from Unreal Tournament 2004, but several have significant gameplay changes. In addition, on vehicle maps every player is equipped with a personal hoverboard, a skateboard-like device that allows players to quickly traverse large maps and grapple onto other teammates' vehicles. The hoverboard is very vulnerable to attack, and any hit will knock the player off the board and disable him or her for several seconds, leaving the player exposed and vulnerable. The player cannot use any weapons while on the board. Plot Unlike the prior Unreal Tournament games, the single-player campaign does not follow a plot based around the Tournament Grand Championship, and therefore several of the teams within Unreal Tournament 3 are not Tournament competitors. The five playable factions are: Iron Guard, a team of human mercenaries led by former Tournament champion Malcolm; the Ronin, a band of four survivors of a Skaarj attack on a human colony; Liandri, a series of advanced humanoid robots custom-built or retrofitted for combat; the Krall, a warlike race of aliens formerly under the leadership of the Skaarj, returning from their initial appearance in the original Unreal; and the Necris, warriors who have undergone the process of the same name, making them stronger at the expense of replacing their biological processes with "Nanoblack", effectively turning them into undead soldiers (hence the name, Necris). In the Campaign, players control members of the Ronin, and the Necris serve as the chief antagonists. In the game's story, a Necris attack occurs on a colony on an unknown planet, releasing armed Krall on the humans. The colony is defenseless, but a group of Ronin arrives on the scene, defending the survivors. Reaper, the group's leader, advises his second-in-command warrior Othello and his sister Jester to destroy the orbital Necris blockade with a fighter, and orders the team's sniper expert, Bishop, to provide cover as he moves in to save the colony. Suddenly, he is caught in the explosion of an incoming rocket and passes out, but not before seeing an unknown Necris woman shooting a soldier next to him. Reaper is rescued by Othello and Jester and wakes up in the base of the Izanagi, a guerrilla force that fights against the Necris and Axon, and he meets with the leader, revealed to be Malcolm, who also leads the Iron Guard as the Izanagi's army. He explains that the Necris attack was masterminded by Liandri, who also turn some of the Krall into Necris, controlled undead soldiers. The unknown woman whom Reaper saw turns out to be Akasha, the Necris operative who destroyed the colony and who also leads the Necris forces. Reaper wants to kill her, but Malcolm tells him that he needs to prove himself first. Development and release The game was announced on May 9, 2005, as Unreal Tournament 2007 for a 2006 release. In August 2006, the game was delayed until the first half of 2007. The game was renamed to Unreal Tournament 3. The original Unreal Tournament uses the first Unreal Engine, while UT2003 and UT2004 use Unreal Engine 2. Since UT2004 incorporates all of the content from UT2003, they are regarded as part of the same generation. UT3 is the third generation, as it runs on Unreal Engine 3 and does not reuse any content. The game also uses motion blur effects. Windows version A limited collector's edition of the game features an exclusive collector's edition tin and a hardcover art book.
A bonus DVD is also included, featuring more than twenty hours of Unreal Engine 3 tool kit video tutorials, the history of the Unreal Tournament series, and behind-the-scenes footage of the making of Unreal Tournament 3. The Limited Collector's Edition was sold in the United States, Canada, Latin America, Europe, South Africa, Australia and most other territories. PlayStation 3 version The PlayStation 3 version supports community-made mods that can be uploaded and downloaded to the PS3's HDD or external media, as well as mouse and keyboard inputs. The 1.1 patch was released on March 21, 2008. It adds the ability for players using the North American and European versions to play together, fixes problems with some USB headsets, and displays the lowest-pinging servers at the top of the server list. Some updates are only applied on the North American version, since the PAL version released in March 2008 was already partially updated. The 2.0 patch was released on March 5, 2009, and adds better PC mod support, split screen, smarter AI, forty-eight obtainable Trophies, server-side improvements, an improved map vote, local multiplayer, and a new user interface. Online and LAN multiplayer for this version was terminated in July 2014, following the shutdown of all GameSpy servers. Xbox 360 version Upon release, the Xbox 360 version had five exclusive maps, two exclusive characters, a two-player split screen mode, and all the downloadable content released by Epic already on the disc. With the release of the PS3 and PC "Titan Upgrade" patch on March 5, these versions offered the formerly exclusive Xbox 360 content, as well as other content. The Xbox 360 version does not support user-generated mods, as additional content has to be verified by Microsoft before being released. It is also the only version that supports controller input exclusively. Cancelled Linux and Mac OS X versions The Linux and Mac OS X versions of the game were planned to be released as downloadable installers that work with the retail disc. Ryan C. Gordon uploaded screenshots of the game, dating from September 2008, running on both platforms. On May 22, 2009, Gordon stated that the UT3 port for Linux was still in progress, but in December 2010, Steve Polge announced that the Linux port would never be released, making it the first Unreal Tournament game not to be released on Linux. Soundtrack Unreal Tournament 3: The Soundtrack is primarily based on the original Unreal Tournament score, which was composed by Straylight Productions and Michiel van den Bos. Jesper Kyd and Rom Di Prisco remixed many of UT99's tracks and composed several other original tracks, which were released on November 20, 2007, by Sumthing Else. Sandhya Sanjana was featured as a guest vocalist. Kevin Riepl also contributed to the game's music, scoring the cutscenes as well as a few in-game tracks. Titan Pack and Black Edition A free update titled Titan Pack was released for the PC in March 2009; the PS3 version of the pack was released on March 19. The pack includes five maps and two characters that were formerly exclusive to the Xbox 360 version, along with eleven brand-new maps, two new game modes ("Greed" and "Betrayal"), and the Titan Mutator. The Titan Mutator causes a player to grow in size as they do better, while carrying alternative weapons and power-ups. The expansion also includes a new power-up, a new vehicle, two new deployables, and the addition of stinger turrets.
A new patch was also released in conjunction with the Titan Pack, which brought various AI improvements (especially in vehicle modes) and networking performance upgrades, and added support for Steam Achievements (PC) and Trophies (PS3). It also adds a two-player split screen mode (formerly exclusive to the 360 version) and mod browsing for the PS3 version. The Black Edition is a complete Unreal Tournament 3 package—included is the complete UT3 (with patch 2.0) as well as the Titan Pack. The Titan Pack gives players a substantial amount of enhanced features and new content, including many original environments, new gametypes, the namesake Titan mutator, powerful deployables and weapons, new characters, and the Stealthbender vehicle. Reception Unreal Tournament 3 received positive reviews from critics. Xbox Magazine rated it 8.5 out of 10. PlayStation: the Official Magazine gave it 5 stars out of 5 in its February 2008 issue and stated, "UT3 looks great, but it's every bit the stunner under the surface". In March 2008, Midway announced that UT3 had sold over a million copies worldwide. References External links Unreal Tournament 3 at MobyGames 2007 video games 2008 video games Cancelled macOS games Cancelled Linux games Epic Games games Esports games First-person shooters Arena shooters Multiplayer and single-player video games Multiplayer online games PlayStation 3 games Split-screen multiplayer games Unreal (video game series) Unreal Engine games Video game sequels Video games about death games Video games scored by Jesper Kyd Video games scored by Rom Di Prisco Video games developed in the United States Video games set in the 24th century Video games about revenge Video games using PhysX Video games with expansion packs Video games with user-generated gameplay content Windows games Xbox 360 games
47973082
https://en.wikipedia.org/wiki/Jane%20Veeder
Jane Veeder
Jane Veeder (born 1944) is an American digital artist, filmmaker and educator. She is a professor at San Francisco State University in the Department of Design and Industry, at which she held the position of chair between 2012 and 2015. Veeder is best known for her pioneering work in early computer graphics; however, she has also worked extensively with traditional art forms such as painting, ceramics, theatre, and photography. Veeder moved away from traditional art making and began her work in the digital arts in 1976 after her enrollment in the graduate program at the School of the Art Institute of Chicago (SAIC), where she first discovered video as an artistic medium. In 1982, her video Montana became the first computer graphics piece to be featured in the video collection of the Museum of Modern Art in New York. Her video work typically involves working with a computer to create the images, rather than a video recorder, to achieve a more direct relationship between the artist and the piece. Many pieces are meant to involve participation between the viewer and the work itself. Veeder's work marks some of the first significant steps that brought digital technology into the fine arts. Early life and education Both of Jane Veeder's parents were artists: her mother was a painter and her father a photographer. From 1967 until 1969, Veeder studied ceramic sculpture and photography at California College of Arts and Crafts (now known as the California College of the Arts) and graduated with a BFA degree. In the early 1970s, Veeder moved from California to the neighborhood of Pilsen in Chicago, Illinois. From 1975 until 1977, Veeder pursued her MFA degree at the School of the Art Institute of Chicago, where she studied video and filmmaking. While studying at SAIC in 1976, she first met Phil Morton, the founder of the Video Department at SAIC. Soon after meeting, their individual art practices became heavily influenced by each other. New technologies and artistic communities were emerging at this time. Their collaboration resulted in them creating a number of programs from scratch. After enrolling in the School of the Art Institute of Chicago's MFA program, Veeder began taking film classes. By the end of her first year at SAIC, Veeder had discovered video as an artistic medium and switched entirely from studying ceramic sculpture to studying video and film. Early career in computer graphics Veeder's knowledge of photography led her to experiment with video art, eventually working across multiple program platforms. These included the Bally Home Computer/Arcade and the ZGRASS computer language, which was eventually combined with the Sandin Image Processor. The burgeoning professional video game industry in Chicago gave Veeder an outlet to put her theories into practice. Veeder collaborated with Phil Morton to create the video art pieces "Program #7" and "Program #9" in 1978. Collaborative work Program #7 Between 1976 and 1982, Jane Veeder traveled the western mountains of the United States with Phil Morton. On these road trips the two would shoot video of their surroundings using a portable video recorder. Some of these video recordings of the western mountain terrain were used to produce the televised video piece known as Program #7. Program #7 was produced as a part of a larger group of videos known as The Electronic Visualization Center: A Television Research Satellite to the School of the Art Institute of Chicago.
Program #7 was televised on Chicago Public Television as a part of a program which ran work by independent video creators. Program #7 was created using a Sandin Image Processor and a Bally Home Computer. Graphics generated using the Bally Home Computer would be overlaid on top of the video recorded by Veeder and Morton using the Sandin Image Processor. The Sandin Image Processor would also be used to add varying patterns to the image. The Paint Problem Veeder coauthored the article titled The Paint Problem with Copper Giloth in 1985. The article, published in IEEE Computer Graphics and Applications, analyzed the ways in which computer art programs were emulating real-world processes digitally rather than making use of the unique capabilities that computers had to offer. Veeder and Giloth argued that computer graphics was not just a tool to make existing processes faster but rather an entirely new set of tools with an entirely new set of capabilities that had yet to be taken advantage of. Solo work Over the span of her career, Jane Veeder has worked on many independent projects. Several of these projects have been exhibited at the SIGGRAPH Art Show. In 1982, Veeder created several works utilizing the capabilities of the Datamax UV-1 Zgrass Graphics Computer. She continued to use the Datamax UV-1 for several more projects in the years to come. Veeder first exhibited her digitally synthesized work at the 1982 SIGGRAPH Art Show. At the 1982 conference, Veeder exhibited her works Bubblespiral, Montana, Warpitout, and Bustergrid. Her video piece Montana would go on to become the first computer graphics video piece to be featured in the Museum of Modern Art's Video Collection. Bubblespiral is a two-dimensional printed piece measuring 21.5 x 28 inches. Montana is an interactive piece incorporating computer-synthesized graphics; it was displayed as a video and has a duration of 3:05 minutes. Warpitout is an interactive piece that incorporates real-time morphing of an image of the player's face. The player could use the controls on the unit to distort the image of themselves in real time. Bustergrid is another two-dimensional printed artwork created using computer graphics, measuring 21.5 x 28 inches, the same dimensions as Bubblespiral. One year later, in 1983, Veeder produced only one artwork shown at that year's SIGGRAPH Art Show: the piece, titled Floater, is a 6:12-minute-long real-time computer-generated video piece. Two years later, at the 1985 SIGGRAPH Art Show, Veeder again exhibited only one work. The piece, titled Vizgame, was a computer-generated interactive artwork that allowed the player to build a real-time generated animation on a 16-square grid, controlling the animation of each block. In 2018, Veeder's work was included in the Chicago New Media 1973-1992 exhibition, curated by jonCates. References Living people 20th-century American women artists American digital artists Women digital artists San Francisco State University faculty School of the Art Institute of Chicago alumni American video artists 21st-century American women artists 1944 births
38810153
https://en.wikipedia.org/wiki/Decompression%20practice
Decompression practice
The practice of decompression by divers comprises the planning and monitoring of the profile indicated by the algorithms or tables of the chosen decompression model, to allow asymptomatic and harmless release of excess inert gases dissolved in the tissues as a result of breathing at ambient pressures greater than surface atmospheric pressure, the equipment available and appropriate to the circumstances of the dive, and the procedures authorized for the equipment and profile to be used. There is a large range of options in all of these aspects. Decompression may be continuous or staged, where the ascent is interrupted by stops at regular depth intervals, but the entire ascent is part of the decompression, and ascent rate can be critical to harmless elimination of inert gas. What is commonly known as no-decompression diving, or more accurately no-stop decompression, relies on limiting ascent rate for avoidance of excessive bubble formation. Staged decompression may include deep stops depending on the theoretical model used for calculating the ascent schedule. Omission of decompression theoretically required for a dive profile exposes the diver to significantly higher risk of symptomatic decompression sickness, and in severe cases, serious injury or death. The risk is related to the severity of exposure and the level of supersaturation of tissues in the diver. Procedures for emergency management of omitted decompression and symptomatic decompression sickness have been published. These procedures are generally effective, but vary in effectiveness from case to case. The procedures used for decompression depend on the mode of diving, the available equipment, the site and environment, and the actual dive profile. Standardized procedures have been developed which provide an acceptable level of risk in the circumstances for which they are appropriate. Different sets of procedures are used by commercial, military, scientific and recreational divers, though there is considerable overlap where similar equipment is used, and some concepts are common to all decompression procedures. Decompression Decompression in the context of diving derives from the reduction in ambient pressure experienced by the diver during the ascent at the end of a dive or hyperbaric exposure and refers to both the reduction in pressure and the process of allowing dissolved inert gases to be eliminated from the tissues during this reduction in pressure. When a diver descends in the water column, the ambient pressure rises. Breathing gas is supplied at the same pressure as the surrounding water, and some of this gas dissolves into the diver's blood and other fluids. Inert gas continues to be taken up until the gas dissolved in the diver is in a state of equilibrium with the breathing gas in the diver's lungs (see: "Saturation diving"), or the diver moves up in the water column and reduces the ambient pressure of the breathing gas until the inert gases dissolved in the tissues are at a higher concentration than the equilibrium state, and start diffusing out again. Dissolved inert gases such as nitrogen or helium can form bubbles in the blood and tissues of the diver if the partial pressures of the dissolved gases in the diver get too high above the ambient pressure. These bubbles and products of injury caused by the bubbles can cause damage to tissues known as decompression sickness, or "the bends".
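The pressure relationships described in this section can be illustrated numerically. The following Python sketch is a simplified illustration only, not a dive-planning tool; it assumes the common approximations that seawater adds about 1 bar of ambient pressure per 10 metres of depth and that air is about 79% nitrogen.

# Illustrative only: not for dive planning.
# Approximates ambient pressure in seawater and the inspired
# partial pressure of nitrogen when breathing air.

SURFACE_PRESSURE_BAR = 1.0  # approximate atmospheric pressure at sea level
BAR_PER_METRE = 0.1         # seawater adds roughly 1 bar per 10 m of depth
AIR_N2_FRACTION = 0.79      # approximate nitrogen fraction of air

def ambient_pressure(depth_m):
    """Approximate absolute pressure (bar) at a depth in seawater."""
    return SURFACE_PRESSURE_BAR + BAR_PER_METRE * depth_m

def inspired_pp_n2(depth_m):
    """Partial pressure of nitrogen in the breathing gas at depth."""
    return AIR_N2_FRACTION * ambient_pressure(depth_m)

for depth in (0, 10, 30):
    print(f"{depth:>2} m: ambient {ambient_pressure(depth):.1f} bar, "
          f"ppN2 {inspired_pp_n2(depth):.2f} bar")

At 30 m, for example, the sketch gives an ambient pressure of about 4 bar and an inspired nitrogen partial pressure of about 3.2 bar; it is this elevated partial pressure that drives the gas uptake, and later the supersaturation on ascent, discussed in the rest of this section.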
The immediate goal of controlled decompression is to avoid development of symptoms of bubble formation in the tissues of the diver, and the long-term goal is to also avoid complications due to sub-clinical decompression injury. A diver who exceeds the no-decompression limit for a decompression algorithm or table has a theoretical tissue gas loading which is considered likely to cause symptomatic bubble formation unless the ascent follows a decompression schedule, and is said to have a decompression obligation. Common procedures The descent, bottom time and ascent are sectors common to all dives and hyperbaric exposures. Descent rate Descent rate is generally allowed for in decompression planning by assuming a maximum descent rate specified in the instructions for the use of the tables, but it is not critical. Descent slower than the nominal rate reduces useful bottom time, but has no other adverse effect. Descent faster than the specified maximum will expose the diver to a greater ingassing rate earlier in the dive, and the bottom time must be reduced accordingly. In the case of real-time monitoring by dive computer, descent rate is not specified, as the consequences are automatically accounted for by the programmed algorithm. Bottom time Bottom time is the time spent at depth before starting the ascent. Bottom time used for decompression planning may be defined differently depending on the tables or algorithm used. It may include descent time, but not in all cases. It is important to check how bottom time is defined for the tables before they are used. For example, tables using Bühlmann's algorithm define bottom time as the elapsed time between leaving the surface and the start of the final ascent at 10 metres per minute, and if the ascent rate is slower, then the excess of the ascent time to the first required decompression stop needs to be considered part of the bottom time for the tables to remain safe. Ascent rate The ascent is an important part of the process of decompression, as this is the time when reduction of ambient pressure occurs, and it is of critical importance to safe decompression that the ascent rate is compatible with safe elimination of inert gas from the diver's tissues. Ascent rate must be limited to prevent supersaturation of tissues to the extent that unacceptable bubble development occurs. This is usually done by specifying a maximum ascent rate compatible with the decompression model chosen. This will be specified in the decompression tables or the user manual for the decompression software or personal decompression computer. The instructions will usually include contingency procedures for deviation from the specified rate, both for delays and exceeding the recommended rate. Failure to comply with these specifications will generally increase the risk of decompression sickness. Typically, maximum ascent rates are of the order of 10 metres (33 ft) per minute for dives deeper than 6 metres (20 ft). Some dive computers have variable maximum ascent rates, depending on depth. Ascent rates slower than the recommended standard for the algorithm will generally be treated by a computer as part of a multilevel dive profile and the decompression requirement adjusted accordingly. Faster ascent rates will elicit a warning and additional decompression stop time to compensate. Monitoring decompression status The decompression status of the diver must be known before starting the ascent, so that an appropriate decompression schedule can be followed to avoid an excessive risk of decompression sickness.
Scuba divers are responsible for monitoring their own decompression status, as they are the only ones to have access to the necessary information. A surface-supplied diver's depth and elapsed time can be monitored by the surface team, and the responsibility for keeping track of the diver's decompression status is generally part of the supervisor's job. The supervisor will generally assess decompression status based on dive tables, maximum depth and elapsed bottom time of the dive, though multi-level calculations are possible. Depth is measured at the gas panel by pneumofathometer, which can be done at any time without distracting the diver from their activity. The instrument does not record a depth profile, and requires intermittent action by the panel operator to measure and record the current depth. Elapsed dive time and bottom time are easily monitored using a stopwatch. Worksheets for monitoring the dive profile are available, and include space for listing the ascent profile, including decompression stop depths, time of arrival, and stop time. If repetitive dives are involved, residual nitrogen status is also calculated and recorded, and used to determine the decompression schedule. A surface-supplied diver may also carry a bottom timer or decompression computer to provide an accurate record of the actual dive profile, and the computer output may be taken into account when deciding on the ascent profile. The dive profile recorded by a dive computer would be valuable evidence in the event of an accident investigation. Scuba divers can monitor decompression status by using maximum depth and elapsed time in the same way, and can use those to either select from a previously compiled set of surfacing schedules, or identify the recommended profile from a waterproof dive table taken along on the dive. It is possible to calculate a decompression schedule for a multilevel dive using this system, but the possibility of error is significant due to the skill and attention required, and the table format, which can be misread under task loading or in poor visibility. The current trend is towards the use of dive computers to calculate the decompression obligation in real time, using depth and time data automatically input into the processing unit, and continuously displayed on the output screen. Dive computers have become quite reliable, but can fail in service for a variety of reasons, and it is prudent to have a backup system available to estimate a reasonable safe ascent if the computer fails. This can be a backup computer, a written schedule with watch and depth gauge, or the dive buddy's computer if they have a reasonably similar dive profile. If only no-stop diving is done, and the diver makes sure that the no-stop limit is not exceeded, a computer failure can be managed at acceptable risk by starting an immediate direct ascent to the surface at an appropriate ascent rate. No-decompression dives A "no-decompression", or "no-stop" dive is a dive that needs no decompression stops during the ascent according to the chosen algorithm or tables, and relies on a controlled ascent rate for the elimination of excess inert gases. In effect, the diver is doing continuous decompression during the ascent. No-decompression limit The "no-decompression limit" (NDL) or "no-stop limit" is the time interval that a diver may theoretically spend at a given depth without having to perform any decompression stops while surfacing.
The NDL helps divers plan dives so that they can stay at a given depth for a limited time and then ascend without stopping, while still avoiding an unacceptable risk of decompression sickness. The NDL is a theoretical time obtained by calculating inert gas uptake and release in the body, using a decompression model such as the Bühlmann decompression algorithm. Although the science of calculating these limits has been refined over the last century, there is still much that is unknown about how inert gases enter and leave the human body, and the NDL may vary between decompression models for identical initial conditions. In addition, every individual's body is unique and may absorb and release inert gases at different rates at different times. For this reason, dive tables typically have a degree of conservatism built into their recommendations. Divers can and do suffer decompression sickness while remaining inside NDLs, though the incidence is very low.

On dive tables a set of NDLs for a range of depth intervals is printed in a grid that can be used to plan dives. There are many different tables available, as well as software programs and calculators, which will calculate no-decompression limits. Most personal decompression computers (dive computers) will indicate a remaining no-decompression limit at the current depth during a dive. The displayed interval is continuously revised to take into account changes of depth as well as elapsed time. Dive computers also usually have a planning function which will display the NDL for a chosen depth, taking the diver's recent decompression history into account.

Safety stop

As a precaution against any unnoticed dive computer malfunction, diver error or physiological predisposition to decompression sickness, many divers do an extra "safety stop" in addition to those prescribed by their dive computer or tables. A safety stop is typically 1 to 5 minutes at a shallow depth, usually 3 to 6 metres (10 to 20 ft). Safety stops are usually done during no-stop dives and may be added to the obligatory decompression on staged dives. Many dive computers indicate a recommended safety stop as standard procedure for dives beyond specific limits of depth and time. The Goldman decompression model predicts a significant risk reduction following a safety stop on a low-risk dive.

Continuous decompression

Continuous decompression is decompression without stops. Instead of a fairly rapid ascent rate to the first stop, followed by a period at static depth during the stop, the ascent is slower, but without officially stopping. In theory this may be the optimum decompression profile. In practice it is very difficult to do manually, and it may be necessary to stop the ascent occasionally to get back on schedule, but these stops are not part of the schedule; they are corrections. For example, USN treatment table 5, referring to treatment in a decompression chamber for type 1 decompression sickness, states "Descent rate - 20 ft/min. Ascent rate - Not to exceed 1 ft/min. Do not compensate for slower ascent rates. Compensate for faster rates by halting the ascent." To further complicate the practice, the ascent rate may vary with the depth, and is typically faster at greater depth, reducing as the depth gets shallower. In practice, a continuous decompression profile may be approximated by ascent in steps as small as the chamber pressure gauge will resolve, timed to follow the theoretical profile as closely as conveniently practicable.
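A stepped approximation of this kind is easy to generate. The sketch below produces hold times for fixed-size steps so that the average matches a nominal continuous rate; the 2 ft step and 1 ft/min rate are illustrative values in the spirit of the treatment tables quoted, not a prescribed schedule.

RATE_FT_PER_MIN = 1.0   # nominal continuous ascent rate, as in treatment table 5
STEP_FT = 2.0           # assumed smallest step the chamber gauge will resolve

def stepped_ascent(start_depth_ft, end_depth_ft=0.0):
    """Yield (next_depth_ft, hold_minutes) pairs approximating the continuous ascent."""
    depth = start_depth_ft
    while depth > end_depth_ft:
        next_depth = max(depth - STEP_FT, end_depth_ft)
        hold = (depth - next_depth) / RATE_FT_PER_MIN   # time the continuous profile would take
        yield next_depth, hold
        depth = next_depth

for depth, hold in stepped_ascent(10.0):
    print(f"step up to {depth:.0f} ft and hold {hold:.0f} min")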
For example, USN treatment table 7 (which may be used if decompression sickness has recurred during initial treatment in the compression chamber) states "Decompress with stops every 2 feet for times shown in profile below." The profile shows an ascent rate of 2 fsw (feet of sea water) every 40 min from 60 fsw to 40 fsw, followed by 2 fsw every hour from 40 fsw to 20 fsw, and 2 fsw every two hours from 20 fsw to 4 fsw.

Staged decompression

Decompression which follows the procedure of relatively fast ascent interrupted by periods at constant depth is known as staged decompression. The ascent rate and the depth and duration of the stops are integral parts of the decompression process. The advantage of staged decompression is that it is far easier to monitor and control than continuous decompression.

Decompression stops

A decompression stop is the period a diver must spend at a relatively shallow constant depth during ascent after a dive to safely eliminate absorbed inert gases from the body tissues to avoid decompression sickness. The practice of making decompression stops is called staged decompression, as opposed to continuous decompression. The diver identifies the requirement for decompression stops, and if they are needed, the depths and durations of the stops, by using decompression tables, software planning tools or a dive computer.

The ascent is made at the recommended rate until the diver reaches the depth of the first stop. The diver then maintains the specified stop depth for the specified period, before ascending to the next stop depth at the recommended rate, and follows the same procedure again. This is repeated until all required decompression has been completed and the diver reaches the surface. Once on the surface, the diver will continue to eliminate inert gas until the concentrations have returned to normal surface saturation, which can take several hours. Inert gas elimination is considered in some models to be effectively complete after 12 hours, while other models show it can take up to, or even more than, 24 hours.

The depth and duration of each stop is calculated to reduce the inert gas excess in the most critical tissues to a concentration which will allow further ascent without unacceptable risk. Consequently, if there is not much dissolved gas, the stops will be shorter and shallower than if there is a high concentration. The length of the stops is also strongly influenced by which tissue compartments are assessed as highly saturated: high concentrations in slow tissues will indicate longer stops than similar concentrations in fast tissues. Shorter and shallower decompression dives may need only a single short, shallow decompression stop, for example, 5 minutes at 3 to 6 metres (10 to 20 ft). Longer and deeper dives often need a series of decompression stops, each stop being longer but shallower than the previous stop.

Deep stops

A deep stop was originally an extra stop introduced by divers during ascent, at a greater depth than the deepest stop required by their computer algorithm or tables. This practice is based on empirical observations by technical divers such as Richard Pyle, who found that they were less fatigued if they made some additional short stops at depths considerably deeper than those calculated with the currently published decompression algorithms. More recently, computer algorithms that are claimed to use deep stops have become available, but these algorithms and the practice of deep stops have not been adequately validated.
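The original empirical practice can be approximated by taking successive midpoints between the maximum depth and the first required stop, a rule of thumb often attributed to Pyle. The sketch below implements that rule; the minimum gap at which to stop inserting stops is an assumption for the example.

def pyle_stops(max_depth_m, first_stop_m, min_gap_m=9.0):
    """Return suggested deep-stop depths in metres, deepest first."""
    stops = []
    deep = max_depth_m
    while deep - first_stop_m > min_gap_m:
        deep = (deep + first_stop_m) / 2.0   # midpoint of the remaining ascent
        stops.append(round(deep))
    return stops

print(pyle_stops(60.0, 12.0))   # [36, 24, 18] for a 60 m dive with first stop at 12 m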
Deep stops are likely to be made at depths where ingassing continues for some slow tissues, so deep stops of any kind can only be included in the dive profile when the decompression schedule has been computed to include them, so that such ingassing of slower tissues can be taken into account. Nevertheless, deep stops may be added on a dive that relies on a personal dive computer (PDC) with real-time computation, as the PDC will track the effect of the stop on its decompression schedule. Deep stops are otherwise similar to any other staged decompression, but are unlikely to use a dedicated decompression gas, as they are usually not more than two to three minutes long.

A study by Divers Alert Network in 2004 suggests that the addition of a deep (c. 15 m) as well as a shallow (c. 6 m) safety stop to a theoretically no-stop ascent will significantly reduce decompression stress, as indicated by precordial doppler-detected bubble (PDDB) levels. The authors associate this with gas exchange in fast tissues such as the spinal cord, and consider that an additional deep safety stop may reduce the risk of spinal cord decompression sickness in recreational diving. A follow-up study found that the optimum duration for the deep safety stop under the experimental conditions was 2.5 minutes, with a shallow safety stop of 3 to 5 minutes. Longer safety stops at either depth did not further reduce PDDB.

In contrast, experimental work comparing the effect of deep stops observed a significant decrease in vascular bubbles following a deep stop after longer, shallower dives, and an increase in bubble formation after the deep stop on shorter, deeper dives, which is not predicted by the existing bubble model. A controlled comparative study by the Navy Experimental Diving Unit in the NEDU Ocean Simulation Facility wet-pot, comparing the VVAL18 Thalmann Algorithm with a deep stop profile, suggests that the deep stops schedule had a greater risk of DCS than the matched (same total stop time) conventional schedule. The proposed explanation was that slower gas washout or continued gas uptake offset the benefits of reduced bubble growth at deep stops.

Profile determined intermediate stops

Profile-dependent intermediate stops (PDIS) are intermediate stops at a depth above the depth at which the leading compartment for the decompression calculation switches from ongassing to offgassing, and below the depth of the first obligatory decompression stop (or the surface, on a no-decompression dive). The ambient pressure at that depth is low enough to ensure that the tissues are mostly offgassing inert gas, although under a very small pressure gradient. This combination is expected to inhibit bubble growth. The leading compartment is generally not the fastest compartment, except in very short dives, for which this model does not require an intermediate stop.

The 8-compartment Bühlmann-based UWATEC ZH-L8 ADT MB PMG decompression model in the Scubapro Galileo dive computer processes the dive profile and suggests an intermediate 2-minute stop that is a function of the tissue nitrogen loading at that time, taking into account the accumulated nitrogen from previous dives. Within the Haldanian logic of the model, at least three compartments are offgassing at the prescribed depth, with the 5- and 10-minute half-time compartments under a relatively high pressure gradient. Therefore, for decompression dives, the existing obligation is not increased during the stop.
A PDIS is not a mandatory stop, nor is it considered a substitute for the more important shallow safety stop on a no-stop dive. Switching breathing gas mix during the ascent will influence the depth of the stop. The PDIS concept was introduced by Sergio Angelini.

Decompression schedule

A decompression schedule is a specified ascent rate and series of increasingly shallower decompression stops, often for increasing amounts of time, that a diver performs to outgas inert gases from their body during ascent to the surface, to reduce the risk of decompression sickness. In a decompression dive, the decompression phase may make up a large part of the time spent underwater; in many cases it is longer than the actual time spent at depth. The depth and duration of each stop depend on many factors, primarily the profile of depth and time of the dive, but also the breathing gas mix, the interval since the previous dive, and the altitude of the dive site. The diver obtains the depth and duration of each stop from a dive computer, decompression tables or dive planning computer software. A technical scuba diver will typically prepare more than one decompression schedule to plan for contingencies such as going deeper than planned or spending longer at depth than planned. Recreational divers often rely on a personal dive computer to allow them to avoid obligatory decompression, while allowing considerable flexibility of dive profile. A surface supplied diver will normally have a diving supervisor at the control point who monitors the dive profile and can adjust the schedule to suit any contingencies as they occur.

Missed stops

A diver missing a required decompression stop increases the risk of developing decompression sickness. The risk is related to the depth and duration of the missed stops. The usual causes for missing stops are not having enough breathing gas to complete the stops, or accidentally losing control of buoyancy. An aim of most basic diver training is to prevent these two faults. There are also less predictable causes of missing decompression stops: diving suit failure in cold water may force the diver to choose between hypothermia and decompression sickness, and diver injury or marine animal attack may also limit the duration of stops the diver is willing to carry out. A procedure for dealing with omitted decompression stops is described in the US Navy Diving Manual. In principle, the procedure allows a diver who is not yet presenting symptoms of decompression sickness to go back down and complete the omitted decompression, with some extra time added to deal with the bubbles which are assumed to have formed during the period when the decompression ceiling was violated. Divers who become symptomatic before they can be returned to depth are treated for decompression sickness, and do not attempt the omitted decompression procedure, as the risk is considered unacceptable under normal operational circumstances. If a decompression chamber is available, omitted decompression may be managed by chamber recompression to an appropriate pressure, and decompression following either a surface decompression schedule or a treatment table. If the diver develops symptoms in the chamber, treatment can be started without further delay.

Delayed stops

A delayed stop occurs when the ascent rate is slower than the nominal rate for a table.
A computer will automatically allow for any theoretical ingassing of slow tissues and the reduced rate of outgassing of fast tissues, but when following a table, the table will specify how the schedule should be adjusted to compensate for delays during the ascent. Typically, a delay in reaching the first stop is added to bottom time, as ingassing of some tissues is assumed, while delays between scheduled stops are ignored, as it is assumed that no further ingassing has occurred.

Accelerated decompression

Decompression can be accelerated by the use of breathing gases during ascent with lowered inert gas fractions (as a result of increased oxygen fraction). This will result in a greater diffusion gradient for a given ambient pressure, and consequently accelerated decompression for a relatively low risk of bubble formation. Nitrox mixtures and oxygen are the most commonly used gases for this purpose, but oxygen-rich trimix blends can also be used after a trimix dive, and oxygen-rich heliox blends after a heliox dive; these may reduce the risk of isobaric counterdiffusion complications. Doolette and Mitchell showed that when a switch is made to a gas with a different proportion of inert gas components, it is possible for an inert component previously absent, or present as a lower fraction, to in-gas faster than the other inert components are eliminated (inert gas counterdiffusion), sometimes raising the total tissue tension of inert gases in a tissue sufficiently above the ambient pressure to cause bubble formation, even if the ambient pressure has not been reduced at the time of the gas switch. They conclude that "breathing-gas switches should be scheduled deep or shallow to avoid the period of maximum supersaturation resulting from decompression".

Oxygen decompression

The use of pure oxygen for accelerated decompression is limited by oxygen toxicity. In open circuit scuba, the upper limit for oxygen partial pressure is generally accepted as 1.6 bar, equivalent to a depth of 6 msw (metres of sea water), but in-water and surface decompression at higher partial pressures is routinely used in surface supplied diving operations, both by the military and by civilian contractors, as the consequences of CNS oxygen toxicity are considerably reduced when the diver has a secure breathing gas supply. US Navy tables (Revision 6) start in-water oxygen decompression at 30 fsw (9 msw), equivalent to a partial pressure of 1.9 bar, and chamber oxygen decompression at 50 fsw (15 msw), equivalent to 2.5 bar.

Repetitive dives

Any dive which is started while the tissues retain residual inert gas in excess of the surface equilibrium condition is considered a repetitive dive. This means that the decompression required for the dive is influenced by the diver's decompression history. Allowance must be made for inert gas preloading of the tissues, which will result in them containing more dissolved gas than would have been the case if the diver had fully equilibrated before the dive. The diver will need to decompress longer to eliminate this increased gas loading.

Surface interval

The surface interval (SI) or surface interval time (SIT) is the time spent by a diver at surface pressure after a dive, during which inert gas which was still present at the end of the dive is further eliminated from the tissues. This continues until the tissues are at equilibrium with the surface pressures, which may take several hours.
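In the dissolved-gas models this surface elimination is exponential, so each compartment halves its excess tension every half-time. The Python sketch below shows the decay for a single slow compartment; the starting tension and half-time are assumptions for illustration only.

import math

SURFACE_N2 = 0.79    # bar, equilibrium nitrogen tension at the surface
HALF_TIME = 120.0    # minutes; an assumed slow compartment

def tension_after_interval(start_tension_bar, minutes):
    k = math.log(2) / HALF_TIME
    return SURFACE_N2 + (start_tension_bar - SURFACE_N2) * math.exp(-k * minutes)

start = 1.6  # bar, assumed compartment tension on surfacing
for si in (60, 360, 720, 1440):
    print(f"after {si:>4} min: {tension_after_interval(start, si):.2f} bar")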
In the case of the US Navy 1956 Air tables, elimination is considered complete after 12 hours, while the US Navy 2008 Air tables specify up to 16 hours for normal exposures; other algorithms may require more than 24 hours to assume full equilibrium.

Residual nitrogen time

For the planned depth of the repetitive dive, a bottom time can be calculated using the relevant algorithm which will provide an equivalent gas loading to the residual gas after the surface interval. This is called "residual nitrogen time" (RNT) when the gas is nitrogen. The RNT is added to the planned "actual bottom time" (ABT) to give an equivalent "total bottom time" (TBT), which is used to derive the appropriate decompression schedule for the planned dive. Equivalent residual times can be derived for other inert gases. These calculations are done automatically in personal dive computers, based on the diver's recent diving history, which is the reason why personal dive computers should not be shared by divers, and why a diver should not switch computers without a sufficient surface interval (more than 24 hours in most cases, and up to 4 days, depending on the tissue model and the recent diving history of the user). Residual inert gas can be computed for all modelled tissues, but repetitive group designations in decompression tables are generally based on only one tissue, considered by the table designers to be the most limiting tissue for likely applications. In the case of the US Navy Air Tables (1956) this is the 120-minute tissue, while the Bühlmann tables use the 80-minute tissue.

Diving at altitude

The atmospheric pressure decreases with altitude, and this has an effect on the absolute pressure of the diving environment. The most important effect is that the diver must decompress to a lower surface pressure, and this requires longer decompression for the same dive profile. A second effect is that a diver ascending to altitude will be decompressing en route, and will have residual nitrogen until all tissues have equilibrated to the local pressures. This means that the diver should consider any dive done before equilibration as a repetitive dive, even if it is the first dive in several days. The US Navy diving manual provides repetitive group designations for listed altitude changes, which change over time with the surface interval according to the relevant table.

Altitude corrections (Cross corrections) are described in the US Navy diving manual. This procedure is based on the assumption that the decompression model will produce equivalent predictions for the same pressure ratio. The "sea level equivalent depth" (SLED) for the planned dive depth, which is always deeper than the actual dive at altitude, is calculated in inverse proportion to the ratio of surface pressure at the dive site to sea level atmospheric pressure:

Sea level equivalent depth = Actual depth at altitude × Pressure at sea level ÷ Pressure at altitude

Decompression stop depths are also corrected, using the ratio of surface pressures, and will produce actual stop depths which are shallower than the sea level stop depths:

Stop depth at altitude = Stop depth at sea level × Pressure at altitude ÷ Pressure at sea level

These values can be used with standard open circuit decompression tables, but are not applicable with constant oxygen partial pressure as provided by closed circuit rebreathers. Tables are used with the sea level equivalent depth, and stops are done at the altitude stop depth.
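The two ratios translate directly into code. The sketch below applies them for an assumed surface pressure at the dive site; pressures may be in any consistent unit.

def sea_level_equivalent_depth(actual_depth_m, p_altitude, p_sea_level=1.013):
    """Cross correction: the equivalent depth is deeper than the actual depth."""
    return actual_depth_m * p_sea_level / p_altitude

def altitude_stop_depth(sea_level_stop_m, p_altitude, p_sea_level=1.013):
    """Cross correction: actual stop depths are shallower than tabulated."""
    return sea_level_stop_m * p_altitude / p_sea_level

p_alt = 0.70  # bar; assumed surface pressure at a high-altitude dive site
print(f"SLED for 30 m: {sea_level_equivalent_depth(30.0, p_alt):.1f} m")   # about 43 m
print(f"6 m table stop: {altitude_stop_depth(6.0, p_alt):.1f} m")          # about 4.1 m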
The decompression algorithms can be adjusted to compensate for altitude. This was first done by Bühlmann for deriving altitude-corrected tables, and is now common on dive computers, where an altitude setting can be selected by the user, or where altitude may be measured by the computer if it is programmed to take surface atmospheric pressure into account.

Flying and ascent to altitude after diving

Exposure to reduced atmospheric pressure during the period after a dive, when the residual gas levels have not yet stabilized at atmospheric saturation levels, can incur a risk of decompression sickness. Rules for safe ascent are based on extension of the decompression model calculations to the desired altitude, but are generally simplified to a few fixed periods for a range of exposures. For the extreme case of an exceptional exposure dive, the US Navy requires a surface interval of 48 hours before ascent to altitude. A surface interval of 24 hours for a heliox decompression dive and 12 hours for a heliox no-decompression dive are also specified. More detailed surface interval requirements, based on the highest repetitive group designator obtained in the preceding 24-hour period, are given in US Navy Diving Manual Table 9.6, both for ascents to specified altitudes and for commercial flights in aircraft nominally pressurized to 8,000 ft.

The first DAN flying-after-diving workshop, in 1989, produced consensus guidelines which recommended: wait 12 hours before flying after up to two hours of no-stop diving within the previous 48 hours; wait 24 hours before flying after multi-day, unlimited no-stop diving; wait 24 to 48 hours before flying after dives that required decompression stops; and do not fly with DCS symptoms unless necessary to obtain hyperbaric treatment. DAN later proposed a simpler 24-hour wait after any and all recreational diving, but there were objections on the grounds that such a long delay would result in lost business for island diving resorts, and that the risks of DCS when flying after diving were too low to warrant this blanket restraint.

The DAN flying-after-diving workshop of 2002 made the following recommendations for flying after recreational diving: a 12-hour surface interval for uncertified individuals who took part in a "resort" or introductory scuba experience; an 18-hour surface interval for certified divers who make an unlimited number of no-decompression air or nitrox dives over multiple days; and substantially longer than 18 hours for technical divers who make decompression dives or use helium breathing mixes, as no specific evidence concerning decompression or helium diving was available. There is insufficient data to recommend a definite interval for this last case; 24 hours is suggested, with the rider that the risk is unknown and that longer would be better. These recommendations apply to flying at a cabin pressure altitude of between 2,000 and 8,000 feet (610 and 2,440 m). At cabin or aircraft altitudes below 2,000 feet (610 m), the surface interval could theoretically be shorter, but there is insufficient data to make a firm recommendation; following the recommendations for the higher altitude range would be conservative. At cabin altitudes between 8,000 and 10,000 feet (2,440 and 3,050 m), hypoxia would be an additional stressor to the reduced ambient pressure, and DAN suggests doubling the recommended interval based on the dive history.

NASA astronauts train underwater to simulate weightlessness, and occasionally need to fly afterwards at cabin altitudes not exceeding 10,000 feet (3,000 m).
Training dives use 46% nitrox and can exceed six hours at a maximum depth of 40 ffw (feet of fresh water; 12 mfw), for a maximum equivalent air depth (EAD) of 24 fsw (7 msw). NASA guidelines for EADs of 20 to 50 fsw (6 to 15 msw), with maximum dive durations of 100 to 400 minutes, allow either air or oxygen to be breathed in the preflight surface intervals. Oxygen breathing during surface intervals reduces the required time before flying by a factor of seven to nine compared with air. A study by another military organization, the Special Operations Command, also indicated that preflight oxygen might be an effective means of reducing DCS risk.

Some places (for example, the Altiplano in Peru and Bolivia, the plateau around Asmara (where the airport is) in Eritrea, and some mountain passes) are many thousands of feet above sea level, and travelling to such places after diving at lower altitude should be treated as flying at the equivalent altitude after diving. The available data do not cover flights which land at higher altitudes than those considered; these may be treated as equivalent to flying at the same cabin altitude.

Training sessions in a pool of limited depth are usually outside the criteria requiring a pre-flight surface interval. The US Navy air decompression tables allow flying with a cabin altitude of 8,000 feet for repetitive group C, which results from the short, shallow exposures typical of pool training, so a pool session that does not exceed the depth and time combinations for this group can be followed by a flight without any requirement for a delay. There would also be no restrictions on flying after diving with an oxygen rebreather, as inert gases are flushed out during oxygen breathing.

Technical diving

Technical diving includes profiles that are relatively short and deep, and which are inefficient in terms of decompression time for a given bottom time. They also often lie outside the range of profiles with validated decompression schedules, and tend to use algorithms developed for other types of diving, often extrapolated to depths for which no formal testing has been done. Often modifications are made to produce shorter or safer decompression schedules, but the evidence relevant to these modifications is often difficult to locate when it exists. The widespread belief that bubble algorithms and other modifications which produce deeper stops are more efficient than the dissolved phase models is not borne out by formal experimental data, which suggest that the incidence of decompression symptoms may be higher for same-duration schedules using deeper stops, due to greater saturation of slower tissues over the deeper profile.

Specialised decompression procedures

Gas switching

It appears that gas switching from mixtures based on helium to nitrox during ascent does not accelerate decompression in comparison with dives using only helium diluent, but there is some evidence that the type of symptoms displayed is skewed towards the neurological in heliox-only dives. There is also some evidence that heliox-to-nitrox switches are implicated in inner ear decompression sickness symptoms which occur during decompression. Suggested strategies to minimise the risk of vestibular DCS are to ensure adequate initial decompression, and to make the switch to nitrox at a relatively shallow depth (less than 30 m), while using the highest acceptably safe oxygen fraction during decompression at the switch. Deep technical diving usually involves the use of several gas mixtures during the course of the dive.
There will be a mixture known as the bottom gas, which is optimised for limiting inert gas narcosis and oxygen toxicity during the deep sector of the dive. This is generally the mixture needed in the largest amount for open circuit diving, as the consumption rate will be greatest at maximum depth. A bottom gas blended for a sufficiently deep dive will not contain enough oxygen to reliably support consciousness at the surface, so a travel gas must be carried to start the dive and descend to the depth at which the bottom gas is appropriate. There is generally a large overlap of depths where either gas can be used, and the choice of the point at which the switch is made depends on considerations of cumulative toxicity, narcosis and the gas consumption logistics specific to the planned dive profile.

During ascent, there will be a depth at which the diver must switch to a gas with a higher oxygen fraction, which will also accelerate decompression. If the travel gas is suitable, it can be used for decompression too. Additional oxygen-rich decompression gas mixtures may be selected to optimise decompression times at shallower depths. These will usually be selected as soon as the partial pressure of oxygen is acceptable, to minimise required decompression, and there may be more than one such mixture, depending on the planned decompression schedule. The shallowest stops may be done breathing pure oxygen. During prolonged decompression at high oxygen partial pressures, it may be advisable to take what are known as air breaks, in which the diver switches back to a low oxygen fraction gas (usually bottom gas or travel gas) for a short period (usually about 5 minutes) to reduce the risk of developing oxygen toxicity symptoms, before continuing with the high oxygen fraction accelerated decompression.

These multiple gas switches require the diver to select and use the correct demand valve and cylinder for each switch. An error of selection could compromise the decompression, or result in a loss of consciousness due to oxygen toxicity. The diver is faced with a problem of optimising for gas volume carried, number of different gases carried, depths at which switches can be made, bottom time, decompression time, gases available for emergency use, and the depths at which they become available, both for themselves and for other members of the team, while using the available cylinders and remaining able to manage the cylinders during the dive. This problem can be simplified if staging the cylinders is possible. This is the practice of leaving a cylinder at a point on the return route where it can be picked up and used, possibly depositing the previously used cylinder, which will be retrieved later, or having a support diver supply additional gas. These strategies rely on the diver being reliably able to get to the staged gas supply. The staged cylinders are usually clipped off to the distance line or shotline to make them easier to find.

Management of multiple cylinders

When multiple cylinders containing different gas mixtures are carried, the diver must ensure that the correct gas is breathed for the depth and for decompression management. Breathing a gas with an inappropriate oxygen partial pressure risks loss of consciousness and compromising the decompression plan. When switching, the diver must be certain of the composition of the new gas, and make the correct adjustments to decompression computer settings.
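The critical depth limit for each mix follows from the chosen oxygen partial-pressure limit, and is the figure usually marked on the cylinder as its maximum operating depth (MOD). A minimal sketch, assuming the 1.6 bar in-water limit mentioned earlier and approximating 1 bar per 10 msw:

def mod_metres(o2_fraction, ppo2_limit_bar=1.6):
    """Maximum operating depth in metres of sea water for a given oxygen fraction."""
    return (ppo2_limit_bar / o2_fraction - 1.0) * 10.0

for mix, fo2 in (("EAN50", 0.50), ("EAN80", 0.80), ("oxygen", 1.00)):
    print(f"{mix}: MOD about {mod_metres(fo2):.0f} m")
# pure oxygen gives 6 m, matching the 1.6 bar at 6 msw limit noted above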
Various systems have been used to identify the gas, the demand valve, and the source cylinder. One in general use, and found by experience to be reliable, is to clearly label the cylinder with the maximum operating depth of the contents, as this is the most critical information, to carry the demand valve on the cylinder, and to leave the cylinder valve closed when the cylinder is not in use. This allows the diver to visually identify the mix as suitable for the current depth, select the demand valve at the cylinder, and confirm that it is the demand valve from that cylinder by opening the cylinder valve to release the gas. After the mix is confirmed, the diver will switch the computer over to select the current gas, so that the decompression computation remains correct.

It is not unusual for deep technical dives to require four gas mixtures aside from the bottom gas, which is generally carried in back-mounted cylinders. There is a convention to carry the most oxygen-rich additional gases on the right side, and the lower-oxygen gases on the left side. This practice reduces the chances of confusion at depth and in poor visibility, and saves a little time when looking for the correct gas. Several models of technical dive computer can be set before the dive with the gas mixtures to be used, and will indicate which one of them is most suitable for the current depth.

Surface decompression

Surface decompression is a procedure in which some or all of the staged decompression obligation is done in a decompression chamber instead of in the water. This reduces the time that the diver spends in the water, exposed to environmental hazards such as cold water or currents, which enhances diver safety. The decompression in the chamber is more controlled, in a more comfortable environment, and oxygen can be used at greater partial pressure as there is no risk of drowning and a lower risk of oxygen toxicity convulsions. A further operational advantage is that once the divers are in the chamber, new divers can be supplied from the diving panel, and operations can continue with less delay.

A typical surface decompression procedure is described in the US Navy Diving Manual. If no in-water stop at 40 ft (12 m) is required, the diver is surfaced directly. Otherwise, all required decompression up to and including the 40 ft (12 m) stop is completed in-water. The diver is then surfaced and pressurised in a chamber to 50 fsw (15 msw) within 5 minutes of leaving 40 ft depth in the water. If this "surface interval" from 40 ft in the water to 50 fsw in the chamber exceeds 5 minutes, a penalty is incurred, as this indicates a higher risk of DCS symptoms developing, and longer decompression is required. In the case where the diver is successfully recompressed within the nominal interval, he will be decompressed according to the schedule in the air decompression tables for surface decompression, preferably on oxygen, which is used from 50 fsw (15 msw), a partial pressure of 2.5 bar. The duration of the 50 fsw stop is 15 minutes for the Revision 6 tables. The chamber is then decompressed to 40 fsw (12 msw) for the next stage of up to 4 periods on oxygen. A stop may also be done at 30 fsw (9 msw), for further periods on oxygen according to the schedule. Air breaks of 5 minutes are taken at the end of each 30 minutes of oxygen breathing. Surface decompression procedures have been described as "semi-controlled accidents".
Data collected in the North Sea have shown that the overall incidence of decompression sickness for in-water and surface decompression is similar, but surface decompression tends to produce ten times more type II (neurological) DCS than in-water decompression. A possible explanation is that during the final stage of ascent, bubbles are produced that are stopped in the lung capillaries. During recompression of the diver in the deck chamber, the diameter of some of these bubbles is reduced sufficiently that they pass through the pulmonary capillaries and reach the systemic circulation on the arterial side, later lodging in systemic capillaries and causing neurological symptoms. The same scenario was proposed for type II DCS recorded after sawtooth profile diving or multiple repetitive diving.

Dry bell decompression

"Dry" or "closed" diving bells are pressure vessels for human occupation which can be deployed from the surface to transport divers to the underwater workplace at pressures greater than ambient. They are equalized to ambient pressure at the depth where the divers will get out and back in after the dive, and are then re-sealed for transport back to the surface, which also generally takes place with controlled internal pressure greater than ambient. During and/or after the recovery from depth, the divers may be decompressed in the same way as if they were in a decompression chamber, so in effect the dry bell is a mobile decompression chamber. Another option, used in saturation diving, is to decompress to storage pressure (the pressure in the habitat part of the saturation spread) and then transfer the divers to the saturation habitat under pressure (transfer under pressure, TUP), where they will stay until the next shift, or until decompressed at the end of the saturation period.

Saturation decompression

Once all the tissue compartments have reached saturation for a given pressure and breathing mixture, continued exposure will not increase the gas loading of the tissues. From this point onwards the required decompression remains the same. If divers work and live at pressure for a long period, and are decompressed only at the end of the period, the risks associated with decompression are limited to this single exposure. This principle has led to the practice of saturation diving, and as there is only one decompression, and it is done in the relative safety and comfort of a saturation habitat, the decompression is done on a very conservative profile, minimising the risk of bubble formation, growth and the consequent injury to tissues. A consequence of these procedures is that saturation divers are more likely to suffer decompression sickness symptoms in the slowest tissues, whereas bounce divers are more likely to develop bubbles in faster tissues.

Decompression from a saturation dive is a slow process. The rate of decompression typically ranges between 3 and 6 fsw (0.9 and 1.8 msw) per hour. The US Navy heliox saturation decompression rates require a partial pressure of oxygen to be maintained at between 0.44 and 0.48 atm when possible, but not to exceed 23% by volume, to restrict the risk of fire. For practicality, the decompression is done in increments of 1 fsw at a rate not exceeding 1 fsw per minute, followed by a stop, with the average complying with the table ascent rate. Decompression is done for 16 hours in 24, with the remaining 8 hours split into two rest periods.
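The incremental pattern described above lends itself to a simple schedule generator. The sketch below spreads 1 fsw steps evenly over the 16 decompressing hours of each day at an assumed table rate of 6 fsw per hour; all figures are illustrative, not an operational schedule.

TABLE_RATE_FSW_PER_HOUR = 6.0   # assumed table ascent rate

def one_day_of_steps(depth_fsw, decompressing_hours=16.0):
    """Yield (new_depth_fsw, minutes_at_depth) pairs for one day of decompression."""
    steps = int(TABLE_RATE_FSW_PER_HOUR * decompressing_hours)
    dwell = decompressing_hours * 60.0 / steps   # average minutes per 1 fsw step
    for _ in range(min(steps, int(depth_fsw))):
        depth_fsw -= 1.0
        yield depth_fsw, dwell

day = list(one_day_of_steps(300.0))
print(len(day), "steps, ending the day at", day[-1][0], "fsw")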
A further adaptation generally made to the schedule is to stop at 4 fsw for the time that it would theoretically take to complete the decompression at the specified rate, i.e. 80 minutes, and then complete the decompression to the surface at 1 fsw per minute. This is done to avoid the possibility of losing the door seal at a low pressure differential and losing the last hour or so of slow decompression. The Norwegian saturation decompression tables are similar, but specifically do not allow decompression to start with an upward excursion. Partial pressure of oxygen is maintained between 0.4 and 0.5 bar, and a rest stop of 6 hours is specified each night, starting at midnight.

Therapeutic decompression

Therapeutic decompression is a procedure for treating decompression sickness by recompressing the diver, thus reducing bubble size and allowing the gas bubbles to re-dissolve, then decompressing slowly enough to avoid further formation or growth of bubbles, or by eliminating the inert gases by breathing oxygen under pressure.

Therapeutic decompression on air

Recompression on atmospheric air was shown to be an effective treatment for minor DCS symptoms by Keays in 1909. Historically, therapeutic decompression was done by recompressing the diver to the depth of relief of pain, or a bit deeper, maintaining that pressure for a while so that bubbles could be re-dissolved, and performing a slow decompression back to surface pressure. Later air tables were standardised to specific depths, followed by slow decompression. This procedure has been superseded almost entirely by hyperbaric oxygen treatment.

Hyperbaric oxygen therapy

Evidence of the effectiveness of recompression therapy utilizing oxygen was first shown by Yarbrough and Behnke (1939), and it has since become the standard of care for treatment of DCS. A typical hyperbaric oxygen treatment schedule is the US Navy Table 6, which provides for a standard treatment of 3 to 5 periods of 20 minutes of oxygen breathing at 60 fsw (18 msw), followed by 2 to 4 periods of 60 minutes at 30 fsw (9 msw) before surfacing. Air breaks are taken between oxygen breathing periods to reduce the risk of oxygen toxicity.

In-water recompression

If a chamber is not available for recompression within a reasonable period, a riskier alternative is in-water recompression at the dive site. In-water recompression (IWR) is the emergency treatment of decompression sickness (DCS) by sending the diver back underwater to allow the gas bubbles in the tissues which are causing the symptoms to resolve. It is a risky procedure that should only be used when it is not practicable to travel to the nearest recompression chamber in time to save the victim's life. The principle behind in-water recompression treatment is the same as that behind the treatment of DCS in a recompression chamber. The procedure is high risk, as a diver suffering from DCS may become paralysed, unconscious or stop breathing whilst under water. Any one of these events may result in the diver drowning, or in further injury to the diver during a subsequent rescue to the surface. These risks can be mitigated to some extent by using a helmet or full-face mask with voice communications on the diver, suspending the diver from the surface so that depth is positively controlled, and by having an in-water standby diver attend the diver undergoing the treatment at all times.
Although in-water recompression is regarded as risky, and to be avoided, there is increasing evidence that technical divers who surface and develop mild DCS symptoms may often get back into the water and breathe pure oxygen at a depth of 6 metres (20 ft) for a period to seek to alleviate the symptoms. This trend is noted in paragraph 3.6.5 of DAN's 2008 accident report. The report also notes that while the reported incidents showed very little success, "[w]e must recognize that these calls were mostly because the attempted IWR failed. In case the IWR were successful, [the] diver would not have called to report the event. Thus we do not know how often IWR may have been used successfully."

Historically, in-water recompression was the usual method of treating decompression sickness in remote areas. Procedures were often informal and based on operator experience, and used air as the breathing gas, as it was all that was available. The divers generally used standard diving gear, which was relatively safe for this procedure, as the diver was at low risk of drowning if he lost consciousness.

Decompression equipment

There are several types of equipment used to help divers carry out decompression. Some are used to plan and monitor the decompression, while others mark the underwater position of the diver and act as a buoyancy control aid and position reference in low visibility or currents.

Decompression may be shortened (or accelerated) by breathing an oxygen-rich "deco gas" such as a nitrox blend with 50% or more oxygen. The high partial pressure of oxygen in such decompression mixes creates the effect of the oxygen window. This decompression gas is often carried by scuba divers in side-slung cylinders. Cave divers who can only return by a single route will often leave decompression gas cylinders attached to the guideline at the points where they will be used. Surface supplied divers will have the composition of the breathing gas controlled at the gas panel. Divers with long decompression obligations may be decompressed inside gas-filled chambers in the water or at the surface.

Planning and monitoring decompression

Equipment for planning and monitoring decompression includes decompression tables, surface computer software and personal decompression computers. There is a wide range of choice:

A decompression algorithm is used to calculate the decompression stops needed for a particular dive profile, to reduce the risk of decompression sickness occurring after surfacing at the end of a dive. The algorithm can be used to generate decompression schedules for a particular dive profile, decompression tables for more general use, or can be implemented in dive computer software. Depending on the algorithm chosen, the range of no-decompression limits at a given depth on the same gas can vary considerably. It is not possible to discriminate between "right" and "wrong" options, but it is considered correct to say that the risk of developing DCS is greater for the longer exposures and less for the shorter exposures for a given depth.

Dive tables or decompression tables are tabulated data, often in the form of printed cards or booklets, that allow divers to determine a decompression schedule for a given dive profile and breathing gas. In some cases they may also specify an altitude range.
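In software, such a table is simply a lookup structure, with the depth rounded up to the next tabulated row as table practice requires. A minimal sketch with placeholder numbers, not taken from any published table:

NO_STOP_TABLE = {    # depth in metres -> no-stop limit in minutes (placeholder values)
    12: 147,
    18: 56,
    24: 29,
    30: 20,
}

def no_stop_limit(depth_m):
    """Round the depth up to the next tabulated row, as tables require."""
    for row_depth in sorted(NO_STOP_TABLE):
        if depth_m <= row_depth:
            return NO_STOP_TABLE[row_depth]
    raise ValueError("depth beyond the range of this table")

print(no_stop_limit(16))   # uses the 18 m row, giving 56 min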
The choice of tables for professional diving use is generally made by the organization employing the divers, and for recreational training it is usually prescribed by the certifying agency, but for recreational purposes the diver is generally free to make use of any of the range of published tables, and, for that matter, to modify them to suit themselves.

Decompression software is available for personal computers to model the decompression requirements of user-specified dive profiles with different gas mixtures, using a choice of decompression algorithms. Schedules generated by decompression software represent a diver's specific dive plan and breathing gas mixtures. It is usual to generate a schedule for the planned profile and for the most likely contingency profiles.

A personal dive computer is a small computer designed to be worn by a diver during a dive, with a pressure sensor and an electronic timer mounted in a waterproof and pressure-resistant housing. The computer is programmed to model the inert gas loading of the diver's tissues in real time during a dive. A display allows the diver to see critical data during the dive, including the maximum and current depth, duration of the dive, and decompression data, including the remaining no-decompression limit calculated in real time for the diver throughout the dive. The dive computer keeps track of residual gas loading for each tissue used in the algorithm. Dive computers also provide a measure of safety for divers who accidentally dive a different profile to that originally planned: most dive computers will provide the necessary decompression information for an acceptably safe ascent in the event that the no-decompression limits are exceeded. The use of computers to manage recreational dive decompression is becoming the standard, and their use is also common in occupational scientific diving. Their value in surface supplied commercial diving is more restricted, but they can usefully serve as a dive profile recorder.

Controlling depth and ascent rate

A critical aspect of successful decompression is that the depth and ascent rate of the diver must be monitored and sufficiently accurately controlled. Practical in-water decompression requires a reasonable tolerance for variation in depth and rate of ascent, but unless the decompression is being monitored in real time by a decompression computer, any deviations from the nominal profile will affect the risk. Several items of equipment are used to assist in accurate adherence to the planned profile, by allowing the diver to more easily control depth and ascent rate, or to transfer this control to specialist personnel at the surface.

A shot line is a rope between a float at the surface and a sufficiently heavy weight holding the rope approximately vertical. The shot line float should be sufficiently buoyant to support the weight of all divers that are likely to be using it at the same time. Recreational divers are free to choose lesser buoyancy at their own risk. The shot weight should be sufficient to prevent a diver from lifting it from the bottom by over-inflation of the buoyancy compensator or dry suit, but not sufficient to sink the float if the slack on the line is all taken up. Various configurations of shot line are used to control the amount of slack. The diver ascends along the shotline, and may use it purely as a visual reference, or can hold on to it to positively control depth, or can climb up it hand over hand.
A Jonline may be used to fasten a diver to a shotline during a decompression stop.

A decompression trapeze or decompression bar is a device used in recreational diving and technical diving to make decompression stops more comfortable and more secure, and to provide the divers' surface cover with a visual reference for the divers' position. It consists of a horizontal bar or bars suspended at the depth of intended decompression stops by buoys. The bars are of sufficient weight, and the buoys of sufficient buoyancy, that the trapeze will not easily change depth in turbulent water or if the divers experience buoyancy control problems. A decompression trapeze can be tethered to a shotline, or to the dive boat, or allowed to drift with the divers. It is effective for keeping the divers together during long stops.

A surface marker buoy (SMB) with a reel and line is often used by a dive leader to allow the boat to monitor progress of the dive group. This can provide the operator with positive control of depth, by remaining slightly negative and using the buoyancy of the float to support this slight over-weighting. This allows the line to be kept under slight tension, which reduces the risk of entanglement. The reel or spool used to store and roll up the line usually has slightly negative buoyancy, so that if released it will hang down and not float away.

A delayed or deployable surface marker buoy (DSMB) is a soft inflatable tube which is attached to a reel or spool line at one end, and is inflated by the diver under water and released to float to the surface, deploying the line as it ascends. This provides information to the surface that the diver is about to ascend, and where he is. This equipment is commonly used by recreational and technical divers, and requires a certain level of skill to operate safely. They are mostly used to signal the boat that the diver has started ascent, or to indicate a problem in technical diving.

A diving stage, sometimes known as the basket, or diver launch and recovery system (LARS), is a platform on which one or two divers stand, which is hoisted into the water, lowered to the workplace or the bottom, and then hoisted up again to return the diver to the surface and lift him out of the water. This equipment is almost exclusively used by surface supplied professional divers, as it requires fairly complex lifting equipment. A diving stage allows the surface team to conveniently manage a diver's decompression, as it can be hoisted at a controlled rate and stopped at the correct depth for decompression stops, and allows the divers to rest during the ascent. It also allows the divers to be relatively safely and conveniently lifted out of the water and returned to the deck or quayside.

A wet bell, or open bell, is similar to a diving stage in concept, but has an air space, open to the water at the bottom, in which the divers, or at least their heads, can shelter during ascent and descent.

Providing gases to accelerate decompression

Reducing the partial pressure of the inert gas component of the breathing mixture will accelerate decompression, as the concentration gradient will be greater for a given depth. This is usually achieved by increasing the partial pressure of oxygen in the breathing gas, as substituting a different inert gas may have counter-diffusion complications due to differing rates of diffusion, which can lead to a net gain in total dissolved gas tension in a tissue. This can lead to bubble formation and growth, with decompression sickness as a consequence.
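The size of the effect is easy to see by comparing inspired inert-gas pressures at a stop. The sketch below contrasts air with a 50% nitrox mix at a 6 m stop against an assumed tissue tension; all values are illustrative.

def inspired_inert_pressure(depth_m, inert_fraction):
    p_ambient = 1.0 + depth_m / 10.0   # bar, approximating 1 bar per 10 msw
    return inert_fraction * p_ambient

tissue_tension = 2.4  # bar; assumed inert gas tension in a leading tissue
for gas, f_inert in (("air", 0.79), ("EAN50", 0.50)):
    gradient = tissue_tension - inspired_inert_pressure(6.0, f_inert)
    print(f"{gas}: outward gradient {gradient:.2f} bar at the 6 m stop")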
Partial pressure of oxygen is usually limited to 1.6 bar during in-water decompression for scuba divers, but can be up to 1.9 bar in-water and 2.2 bar in the chamber when using the US Navy tables for surface decompression.

Stage cylinders are cylinders containing decompression and emergency gas which are stored by scuba divers along the return route. This is only practicable where the return route is known and marked by a guideline. Similar cylinders are carried by the divers when the route back is not secure. They are commonly mounted as sling cylinders, clipped to D-rings at the sides of the diver's harness. The divers must avoid breathing oxygen-enriched "deco gas" at excessive depths because of the high risk of oxygen toxicity. To prevent this happening, cylinders containing oxygen-rich gases must always be positively identifiable. One way of doing this is by marking them with their maximum operating depth as clearly as possible.

Surface supplied divers may be supplied with a gas mixture suitable for accelerated decompression by connecting a supply to the surface gas panel and providing it through the umbilical to the divers. This allows accelerated decompression, usually on oxygen, which can be used to a maximum depth of 30 ft (9 m). Surface supplied heliox bounce divers will be provided with mixtures suitable for their current depth, and the mixture may be changed several times during descent and ascent from great depths.

Closed circuit rebreathers are usually controlled to provide a fairly constant partial pressure of oxygen during the dive (the set point), and may be reset to a richer mix for decompression. The effect is to keep the partial pressure of inert gases as low as safely practicable throughout the dive. This minimizes the absorption of inert gas in the first place, and accelerates the elimination of the inert gases during ascent.

Surface decompression

Specialised equipment is available to decompress a diver out of the water. This is almost exclusively used with surface supplied diving equipment:

Deck decompression chambers are used for surface decompression, described in a previous section. Most deck decompression chambers are fitted with built-in breathing systems (BIBS), which supply an alternative breathing gas to the occupants (usually oxygen) and discharge the exhaled gas outside the chamber, so that the chamber gas is not excessively enriched by oxygen, which would cause an unacceptable fire hazard and require frequent flushing with chamber gas (usually air).

A dry bell may be used for bounce dives to great depths, and then used as the decompression chamber during the ascent and later on board the support vessel. In this case it is not always necessary to transfer into a deck chamber, as the bell is quite capable of performing this function, though it would be relatively cramped, as a bell is usually as small as conveniently possible to minimize weight for deployment.

A saturation system or saturation spread typically comprises a living chamber, transfer chamber and submersible decompression chamber, which is commonly referred to in commercial diving as the diving bell and in military diving as the personnel transfer capsule (PTC) or submersible decompression chamber (SDC). The diving bell is the elevator or lift that transfers divers from the system to the work site and back.
At the completion of work or a mission, the saturation diving team is decompressed gradually back to atmospheric pressure by the slow venting of system pressure, at rates in the region of 15 to 30 metres (50 to 100 ft) per day, though schedules vary. The process thus involves only one ascent, thereby mitigating the time-consuming and comparatively risky process of multiple decompressions normally associated with multiple non-saturation ("bounce diving") operations.

A hyperbaric lifeboat or hyperbaric rescue unit may be provided for emergency evacuation of saturation divers from a saturation system. This would be used if the platform is at immediate risk due to fire or sinking, and allows the divers under saturation to get clear of the immediate danger. The crew would normally start decompression as soon as possible after launching.

Risk management

Risk management for decompression sickness involves following decompression schedules of known and acceptable risk, providing mitigation in the event of a hit (a diving term for symptomatic decompression sickness), and reducing risk to an acceptable level by following recommended practice and avoiding deprecated practice, to the extent considered appropriate by the responsible person and the divers involved. The risk of decompression sickness for the algorithms in common use is not always accurately known. Human testing under controlled conditions with the end condition of symptomatic decompression sickness is no longer frequently carried out, for ethical reasons. A considerable amount of self-experimentation is done by technical divers, but conditions are generally not optimally recorded, there are usually several unknowns, and there is no control group. Several practices are recommended to reduce risk based on theoretical arguments, but the value of many of these practices in reducing risk is uncertain, particularly in combination.

The vast majority of professional and recreational diving is done under low-risk conditions and without recognised symptoms, but in spite of this there are occasionally unexplained incidences of decompression sickness. The earlier tendency to blame the diver for not properly following the procedures has been shown to be not only counterproductive but sometimes factually wrong, and it is now generally recognised that there is statistically a small but real risk of symptomatic decompression sickness for even highly conservative profiles. This acceptance by the diving community that sometimes one is simply unlucky encourages more divers to report borderline cases, and the statistics gathered may provide more complete and precise indications of risk as they are analysed.

Conservatism

Decompression conservatism refers to the application of factors to a basic decompression algorithm or set of tables that are expected to decrease the risk of developing symptomatic decompression sickness when following a given dive profile. This practice has a long history, originating with decompressing according to the tables for a dive deeper than the actual depth, longer than the actual bottom time, or both. These practices were developed empirically by divers and supervisors to account for factors that they considered to increase risk, such as hard work during the dive, or cold water. With the development of computer programs to calculate decompression schedules for specified dive profiles came the possibility of adjusting the allowed percentage of the maximum supersaturation (M-values).
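In its simplest form, such an adjustment scales the permitted supersaturation between ambient pressure and the full M-value. The sketch below shows the idea behind gradient-factor style conservatism; the pressures are placeholders, and real implementations vary the factor between a deep and a shallow setting.

def permitted_tension(p_ambient_bar, m_value_bar, gradient_factor):
    """Allowed compartment tension: ambient pressure plus a fraction of the M-value excess."""
    return p_ambient_bar + gradient_factor * (m_value_bar - p_ambient_bar)

p_ambient, m_value = 1.0, 2.0   # bar; illustrative surface values
for gf in (1.0, 0.85, 0.70):
    print(f"GF {gf:.2f}: permitted tension {permitted_tension(p_ambient, m_value, gf):.2f} bar")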
This adjustable conservatism became available in dive computers as an optional personal setting, in addition to any conservatism added by the manufacturer, and the range of base conservatism set by manufacturers is large. Conservatism also varies between decompression algorithms because of the different assumptions and mathematical models used. In this case the conservatism is considered relative, as in most cases the validity of the model remains open to question and has been adjusted empirically by the designers to produce a statistically acceptable risk. Where the depth, pressure and gas mixture exposure on a dive is outside the experimentally tested range, the risk is unknown, and conservatism of adjustments to the allowable theoretical tissue gas load is relative to an unknown risk.

The application of user conservatism in dive computers varies considerably. The general tendency in dive computers intended for the recreational market is to provide one or two preset conservatism settings which have the effect of reducing the allowed no-decompression limit in a way which is not transparent to the user. Technical divers, who are required to have a deeper understanding of the theoretical basis of decompression algorithms, often want to be able to set conservatism as an informed choice, and technical computers often provide this option. For the popular Bühlmann algorithm, it is usually in the form of gradient factors. In some cases the computer may provide a real-time readout of the current computed percentage of the M-value, as an aid to managing a situation where the diver must balance decompression risk against other risks to make the ascent.

The converse of conservative decompression is termed aggressive decompression. This may be used to minimise in-water time for exceptional exposure dives by divers willing to accept the unknown personal risk associated with the practice. It may also be used by more risk-averse divers in a situation where the estimated decompression risk is perceived to be less dire than other possible consequences, such as drowning, hypothermia, or imminent shark attack.

Recommended practices

Practices for which there is some evidence or a theoretical model suggesting that they may reduce the risk of decompression sickness:

Extended decompression: Provided that the depth is shallow enough that effectively no further inert gas tissue loading will occur, more decompression time will reduce the risk of decompression sickness, but with diminishing returns. In practice this can be facilitated by using two decompression computers. One is set at the least conservative setting acceptable to the diver, and is used to indicate the minimum acceptable decompression and time to surface. The other is set at a conservatism which the diver considers adequate and low risk. Decompression will normally be done following the conservative setting, but if circumstances suggest getting out of the water sooner, the less conservative computer will show when the risk is at least acceptably low.
Rehydration:
Rest:
Mild exercise during decompression: Sufficient exercise to stimulate the circulation and maintain body temperature is thought to accelerate inert gas washout, therefore reducing the risk of decompression sickness for a given decompression schedule.
Core temperature recovery:
Surface oxygen breathing: The use of oxygen or nitrox as a post-dive breathing mixture is recommended in cases where incomplete decompression or short periods of omitted decompression have occurred, or at any time when there is doubt that decompression was sufficient.
Low exertion during the ingassing stage of the dive: This reduces circulation during ingassing, so it will take longer for perfusion-limited tissues to reach any specific inert gas loading. Consequently, the tissue loading at the end of the dive will be lower than if the diver worked hard. This is obviously not always possible, and may be logistically undesirable when there is a job to be done. Decompression algorithms assume and are tested at a high level of exertion, so the indicated decompression should be acceptably safe even when exertion is fairly intense. Less exertion will reduce the risk by an unknown amount.

Deprecated practices

Practices considered either to increase the risk of developing decompression sickness after diving, or for which there is theoretical risk but insufficient data:

Hot tubs, jacuzzis, showers or saunas after diving: Exposing the diver to a hot external environment immediately after diving will alter decompression stress. The net result may be good or bad depending on the inert gas load and the heat stress. Reheating a chilled or hypothermic diver can restore impaired circulation to the extremities. If the inert gas load is low, this may improve the rate of gas elimination, but larger inert gas loads might be pushed to the point of bubble formation or growth because of temperature effects on solubility. Which of these effects will predominate is unpredictable and may even vary in the same diver in a given instance. The warming of tissues precedes the increase in blood flow, so bubbles may become problematic before circulation can remove the gas. This risk is not amenable to numerical analysis and there are many variables. The risk is likely to reduce with the passage of time, lower gas loading, and higher initial temperatures of the extremities.
Flying or ascent to altitude soon after diving: This is known to increase risk, as it is in effect further decompression. There are specific recommendations to manage risk in such cases. In most cases they are equivalent to a long decompression stop on air at sea-level ambient pressure before ascending to a higher altitude, to ensure that the controlling tissues are sufficiently desaturated. Several rules of thumb have been recommended over the years. These include waiting until one reaches a specific repetitive group, and simple surface intervals based on the recent diving history.
Heavy exercise following diving: The risk is thought to be associated with an increased pulmonary shunt that allows venous blood and bubbles to bypass the lungs, letting bubbles into the arterial system.
Consumption of alcohol before and after diving: Alcohol can increase dehydration and heat loss, both considered risk factors for decompression sickness.
Use of some drugs:
Breath-hold diving after scuba or surface-supplied diving: Bubble formation is more likely after significant decompression stress, and the risk increases with residual inert gas load, so deeper freediving and more intense exercise will have a greater associated risk.
Diving after long flights: Long-distance flying tends to leave the traveller tired and somewhat dehydrated, which is thought to be a factor predisposing to DCS because of less efficient inert gas elimination.
Statistics are insufficient to show cause and effect, but about a third of the decompression sickness incidents reported annually from the Caribbean occur after the first day's dives.
Diving during pregnancy: The change in risk of decompression sickness during pregnancy is unknown, and it is considered unethical to conduct experiments with an endpoint of symptomatic decompression sickness in pregnant women, so data is unlikely to accumulate sufficiently to allow the risk to be assessed realistically. The precautionary principle suggests that the risk should be avoided by not diving when pregnant. A history of diving during the early stages of pregnancy is not considered likely to have adverse effects on the fetus, but the recommendation is to avoid it.
Diving while medically unfit to dive:
Saw-tooth dive profile: In a saw-tooth profile the diver ascends and descends a number of times during the dive. Each ascent and descent increases the risk of decompression sickness if there are any bubbles already in the diver's tissues. The increase in risk depends on the ascent rate, the magnitude and duration of the upwards excursion, the saturation levels of the tissues, and to some extent the time spent after returning to depth. Accurate assessment of the increase in risk is not currently (2016) possible.

Teaching of decompression practice

Basic decompression theory and the use of decompression tables are part of the theory component of training for commercial divers, and dive planning based on decompression tables, along with the practice and field management of decompression, is a significant part of the work of the diving supervisor.

Recreational divers are trained in the theory and practice of decompression to the extent that the certifying agency specifies in the training standard for each certification. This may vary from a rudimentary overview sufficient to allow entry-level divers to avoid decompression obligation, to competence in the use of several decompression algorithms by way of personal dive computers, decompression software, and tables for advanced technical divers. A detailed understanding of decompression theory is not generally required of either commercial or recreational divers.

The practice of decompression techniques is another matter altogether. Most certification organizations expect recreational divers not to do decompression dives, though CMAS and BSAC allow short decompression dives at some levels of recreational certification. Technical, commercial, military and scientific divers may all be expected to do decompression dives in the normal course of their sport or occupation, and are specifically trained in the appropriate procedures and equipment relevant to their level of certification. A significant part of practical and theoretical training for these divers is on the practice of safe and effective decompression procedures and the selection, inspection, and use of the appropriate equipment.

See also
Decompression models:
References
Sources
Further reading
Section 2, chapters 13–24, pages 181–350
External links
Dive tables from the NOAA
German BGV C 23 table, permitting a simplified procedure of decompression planning
Online dive table calculator
Decompression equipment
5178038
https://en.wikipedia.org/wiki/Jean%20Bartik
Jean Bartik
Jean Jennings Bartik (born Betty Jean Jennings, December 27, 1924 – March 23, 2011) was one of the original programmers for the ENIAC computer. Bartik studied mathematics in school, then began work at the University of Pennsylvania, first manually calculating ballistics trajectories and then using ENIAC to do so. The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence. Bartik and her colleagues developed and codified many of the fundamentals of programming while working on the ENIAC, since it was the first computer of its kind. After her work on ENIAC, Bartik went on to work on BINAC and UNIVAC, and spent time at a variety of technical companies as a writer, manager, engineer and programmer. She spent her later years as a real estate agent and died in 2011 from complications of congestive heart failure. Content-management framework Drupal's default theme, Bartik, is named in her honor.

Early life and education

Born Betty Jean Jennings in Gentry County, Missouri, in 1924, she was the sixth of seven children. Her father, William Smith Jennings (1893–1971), was from Alanthus Grove, where he was a schoolteacher as well as a farmer. Her mother, Lula May Spainhower (1887–1988), was from Alanthus. Jennings had three older brothers, William (January 10, 1915), Robert (March 15, 1918) and Raymond (January 23, 1922); two older sisters, Emma (August 11, 1916) and Lulu (August 22, 1919); and one younger sister, Mable (December 15, 1928).

In her childhood, she would ride on horseback to visit her grandmother, who bought the young girl a newspaper to read every day and became a role model for the rest of her life. She began her education at a local one-room school, and gained local attention for her softball skill. In order to attend high school, she lived with her older sister in the neighboring town where the school was located, and later drove there every day despite being only 14. She graduated from Stanberry High School in 1941, aged 16. She attended Northwest Missouri State Teachers College, now known as Northwest Missouri State University, majoring in mathematics with a minor in English and graduating in 1945. Jennings was awarded the only mathematics degree in her class. Although she had originally intended to study journalism, she decided to change to mathematics because she had a bad relationship with her adviser. Later in her life, she earned a master's degree in English at the University of Pennsylvania in 1967 and was awarded an honorary doctorate from Northwest Missouri State University in 2002.

Career

In 1945 the United States Army was recruiting mathematicians from universities to aid in the war effort; despite a warning by her adviser that she would be "a cog in a wheel" with the Army, and encouragement to become a mathematics teacher instead, Bartik decided to become a human computer. Bartik's calculus professor encouraged her to take the job at the University of Pennsylvania because it had a differential analyzer. She applied to both IBM and the University of Pennsylvania at the age of 20. Although rejected by IBM, Jennings was hired by the University of Pennsylvania to work for Army Ordnance at Aberdeen Proving Ground, calculating ballistics trajectories by hand. While working there, Bartik met her future husband, William Bartik, who was an engineer working on a Pentagon project at the University of Pennsylvania. They married in December 1946.
When the Electronic Numerical Integrator and Computer (ENIAC) was developed for the purpose of calculating the ballistic trajectories that human computers like Bartik had been doing by hand, she applied to become a part of the project and was eventually selected to be one of its first programmers. Bartik was asked to set up problems for the ENIAC without being taught any techniques. Bartik and five other women (Betty Holberton, Marlyn Wescoff, Kathleen McNulty, Ruth Teitelbaum, and Frances Spence) were chosen to be the main programmers for the ENIAC. Many other women, often unrecognized, contributed to the ENIAC during a period of wartime male labor shortage.

Bartik, who became the co-lead programmer (with Betty Holberton), and the other four original programmers became extremely adept at running the ENIAC; with no manual to rely on, the group reviewed diagrams of the device, interviewed the engineers who had built it, and used this information to teach themselves the skills they needed. Initially, they were not allowed to see the ENIAC's hardware at all, since it was still classified and they had not received security clearance; they had to learn how to program the machine solely by studying schematic diagrams. The six-woman team was also not initially given space to work together, so they found places to work where they could, in abandoned classrooms and fraternity houses. While the six women worked on ENIAC, they developed subroutines, nesting, and other fundamental programming techniques, and arguably invented the discipline of programming digital computers. Bartik and the other ENIAC female programmers learned to physically modify the machine, moving switches and rerouting cables, in order to program it. In addition to performing the original ballistic trajectory calculations they were hired to compute, the six female programmers soon became operators on the Los Alamos nuclear calculations, and generally expanded the programming repertoire of the machine.

Bartik's programming partner on the important trajectory program for the military that would prove that the ENIAC worked to specification was Betty Holberton, known at the time as Betty Snyder. Bartik and Holberton's program was chosen to introduce the ENIAC to the public and the larger scientific community. That demonstration occurred on February 15, 1946 and was a tremendous success. The ENIAC proved that it operated faster than the Mark I, a well-known electromechanical machine at Harvard, and also showed that work that would take a "human computer" 40 hours to complete could be done in 20 seconds.

The public demonstration was a success, but most of the congratulations on its turnout were given to the machine's engineers, John Mauchly and J. Presper Eckert. Bartik was later asked to form and lead a group of programmers to convert the ENIAC into a stored-program computer, working closely with John von Neumann, Dick Clippinger and Adele Goldstine. Bartik completed the conversion of the ENIAC into a stored-program computer by March 1948. As head of this process, Bartik was charged with the conversion that allowed the ENIAC to be turned into a rudimentary stored-program computer to assist with Clippinger's wind tunnel programs, which allowed the ENIAC to operate more quickly, efficiently, and accurately.
Authors Thomas Haigh and Mark Priestley later discovered letters between Bartik and Adele Goldstine from the time of the project, and found that much of the 60-order code was in Bartik's handwriting.

After the end of the war, Bartik went on to work with the ENIAC designers J. Presper Eckert and John Mauchly, and helped them develop the BINAC and UNIVAC I computers. BINAC was the first computer to use magnetic tape instead of punch cards to store data and the first computer to use the twin unit concept. BINAC was purchased by Northrop Aircraft to guide the Snark missile, but it proved to be too large for their purposes. However, according to a Northrop Aircraft programmer, claims that the BINAC did not work once it was moved to Northrop Aircraft were erroneous, and the BINAC was working well into the mid-1950s. Besides BINAC, Bartik's more important work involved designing the UNIVAC's logic circuits, among other UNIVAC programming and design tasks. With her lifelong friend Betty Holberton, Bartik also co-programmed the first generative programming system for a computer (SORT/MERGE). Recalling her time working with Eckert and Mauchly on these projects, she described their close group of computer engineers as a "technical Camelot".

In the early 1950s, after the Eckert-Mauchly Corporation was sold to Remington Rand, Bartik helped train the customers of the first six UNIVACs sold in how to program and use the machine, including the programmers at the United States Census Bureau (which received the first UNIVAC) and the Atomic Energy Commission. Later, Bartik moved to Philadelphia when her husband, William "Bill" Bartik, took a job with Remington Rand. Due to a company policy at the time against husbands and wives working together, Jean was asked to resign from the company. Between 1951 and 1954, prior to her first child's birth, she did mostly freelance programming assignments for John Mauchly and was a helpmate to her husband. Once her son was born, she walked away from her career in computing to concentrate on raising a family, during which time she had two more children with her husband. It was sometime during this period in the 1950s that Bartik began going by the name "Jean" rather than her birth name "Betty", by which she had been known during her ENIAC, UNIVAC and Remington Rand years.

Even though Bartik played an integral part in developing the ENIAC, her work at the University of Pennsylvania and on the ENIAC remained almost completely hidden until it was documented by columnist Tom Petzinger in several articles for the Wall Street Journal on Bartik and Holberton.

Later life

After getting her master's degree from the University of Pennsylvania in 1967 and making the decision to divorce her husband, Bartik joined the Auerbach Corporation, writing and editing technical reports on minicomputers. Bartik remained with Auerbach for eight years, then moved among positions with a variety of other companies for the rest of her career as a manager, writer, and engineer. Jean Bartik and William Bartik divorced by 1968. Bartik ultimately retired from the computing industry in 1986 when her final employer, Data Decisions (a publication of Ziff-Davis), was sold; she spent the following 25 years as a real estate agent.

Bartik died from congestive heart failure in a Poughkeepsie, New York nursing home on March 23, 2011. She was 86.
Legacy

Starting in 1996, once the importance of their role in the development of computing was rediscovered, Bartik, along with Betty Holberton and Kathleen Antonelli (ENIAC programmer, wife of ENIAC co-inventor John Mauchly, and Bartik's friend of over 60 years), began at last to receive the acknowledgement and honors for their pioneering work in the early field of computing. Bartik and Antonelli became invited speakers both at home and abroad, sharing their experiences of working with the ENIAC, BINAC and UNIVAC. Bartik in particular went on to receive many honors and awards for her pioneering role in programming the ENIAC, BINAC and UNIVAC, the latter of which helped to launch the commercial computer industry, and for turning the ENIAC into the world's first stored-program computer.

In 2010, a documentary called "Top Secret Rosies: The Female "Computers" of WWII" was released. The film centered on in-depth interviews with three of the six women programmers, focusing on the commendable patriotic contributions they made during World War II.

The ENIAC team is also the subject of the 2013 short documentary film The Computers. This documentary, created by Kathy Kleiman and the ENIAC Programmers Project, combines actual footage of the ENIAC team from the 1940s with interviews with the female team members as they reflect on their time working together on the ENIAC. The Computers is the first part of a three-part documentary series, titled Great Unsung Women of Computing: The Computers, The Coders, and The Future Makers.

Bartik wrote her autobiography, "Pioneer Programmer: Jean Jennings Bartik and the Computer that Changed the World", prior to her death in 2011 with the help of long-time colleagues Dr. Jon T. Rickman and Kim D. Todd. The autobiography was published in 2013 by Truman State University Press to positive reviews.

One of the best pieces of advice Bartik ever received was: "Don't ever let anyone tell you that you can't do something because they think you can't. You can do anything, achieve anything, if you think you can and you educate yourself to succeed." Encouraging girls and women to follow their dreams, she said, "If my life has proved anything, it is that women (and girls) should never be afraid to take risks and try new things."

The Jean Jennings Bartik Computing Museum at Northwest Missouri State University in Maryville, Missouri is dedicated to the history of computing and Bartik's career. Content-management framework Drupal's default theme, Bartik, is named in her honor.

Awards and honors

Inductee, Women in Technology International Hall of Fame (1997)
Fellow, Computer History Museum (2008)
IEEE Computer Pioneer Award, IEEE Computer Society (2008)
Korenman Award from the Multinational Center for Development of Women in Technology (2009)

See also
Adele Goldstine
Betty Holberton
Frances Spence
Ruth Teitelbaum
Marlyn Wescoff
Kathleen Antonelli
List of pioneers in computer science
Timeline of women in science

References

External links
ENIAC Programmers documentary
Oral history from Bartik at the UNIVAC conference, Charles Babbage Institute
Jean Jennings Bartik Computing Museum at NWMSU
Bartik receives the Computer Pioneer Award
Oral history given by Bartik to the Computer History Museum in 2008

1924 births
2011 deaths
People from Gentry County, Missouri
American computer programmers
University of Pennsylvania alumni
Northwest Missouri State University alumni
American women computer scientists
American computer scientists
Human computers
20th-century American women scientists
American real estate brokers
Mathematicians from Missouri
Scientists from Missouri
21st-century American women
62720448
https://en.wikipedia.org/wiki/Information%20Technology%20Rules%2C%202021
Information Technology Rules, 2021
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 is secondary or subordinate legislation that supersedes India's Intermediary Guidelines Rules 2011. The 2021 rules stem from section 87 of the Information Technology Act, 2000 and are a combination of the draft Intermediaries Rules, 2018 and the OTT Regulation and Code of Ethics for Digital Media. The Central Government of India, along with the Ministry of Electronics and Information Technology (MeitY) and the Ministry of Information and Broadcasting (MIB), coordinated the development of the rules. Intermediaries had until 25 May 2021 to comply with the rules.

History

During the Monsoon Session of Parliament in 2018, a motion on "Misuse of social media platforms and spreading of fake news" was admitted. The Minister of Electronics and Information Technology accordingly made a detailed statement of the "resolve of the Government to strengthen the legal framework and make the social media platforms accountable under the law". MeitY then prepared the draft Information Technology (Intermediary Guidelines) Rules 2018 to replace the 2011 rules. The Information Technology Act, 2000 provided that intermediaries are protected from liability in some cases. The draft 2018 Rules sought to set out the liabilities and responsibilities of intermediaries more clearly. Further, the draft Rules were made "in order to prevent spreading of fake news, curb obscene information on the internet, prevent misuse of social-media platforms and to provide security to the users." The move followed a notice issued to WhatsApp in July 2018, warning it against helping to spread fake news and looking on as a "mute spectator".

In relation to the Prajawala case, on 11 December 2018, the Supreme Court of India observed that "the Government of India may frame the necessary Guidelines / SOP and implement them within two weeks so as to eliminate child pornography, rape and gang rape imageries, videos and sites in content hosting platforms and other applications." Further, a parliamentary report tabled in 2020 studied the effect of pornography on children.

On 5 January 2019 a government open house was held to discuss the Rules, and ten days, until 28 January, were given for counter comments. On 21 September 2019 the Centre informed the Madras High Court bench under Justice M Sathyanarayanan that deliberations on the Draft Rules 2018 had been completed. Facebook filed a plea to transfer the matter to the Supreme Court. MeitY had invited comments on the proposed amendments early in 2019. The amendments were seen by many to "overstep the aforesaid intention sparking concerns of violating free speech and privacy rights of individuals." It is seen that "the guidelines suffer with excessive delegation of powers and shift the burden of responsibility of identification of unlawful content from a government/ judiciary to intermediaries." A total of 171 comments were received by MeitY; all of the comments were published for counter comments. On 21 October 2019, MeitY asked the court for three months' time for finalisation of the Intermediary Rules, 2018.

About

Rules to be administered by MeitY include the due diligence required of intermediaries and the grievance redressal mechanism. Rules to be administered by MIB include a code of ethics, a self-classification system and an oversight mechanism.

Tracking the origin of information

Rule 4(2) covers the "identification of the first originator of the information".
The extent of the first originator is limited to India— "Provided further that where the first originator of any information on the computer resource of an intermediary is located outside the territory of India, the first originator of that information within the territory of India shall be deemed to be the first originator of the information."

Additional due diligence

Rule 4(1)(a), (b) and (c) of the guidelines require the appointment of a Chief Compliance Officer, a nodal contact person and a Resident Grievance Officer.

Concerns

Concerns over the 2018 draft

Various issues have been pointed out with the rules, such as restriction of free speech, requirements such as automatic identification and removal of content, and a lack of elaboration on how the five million users threshold will be calculated. Questions raised included whether "intermediaries" includes online media portals, a point raised by the Free Software Movement of India. Mozilla (Firefox) also raised issues with the draft Rules. BSA (The Software Alliance) wrote to MeitY to "exclude enterprise cloud service providers" from the scope of the Rules and to remove the filtering obligations. The Centre for Internet and Society raised concerns with the draft rules and asked for changes, such as the complete deletion of draft Rules 3(2), 3(4), 3(5) and 3(10). Divij Joshi, Tech Policy Fellow at Mozilla, also recommended that draft Rule 3(5) be deleted, arguing that the "requirement to proactively identify and remove access to all 'unlawful content' is vague and overbroad." A joint letter to MeitY, written by a group of experts from research, academia and media, including Faisal Farooqui, Karma Paljor, Nikhil Pahwa, Shamnad Basheer and professors from IIM Bangalore and IIT Bombay, and by organisations including Free Software Foundation Tamil Nadu, Free Software Movement of India, Free Software Movement Karnataka and Software Freedom Law Centre, India, pointed out various issues the Rules could cause, such as the traceability requirements interfering with the privacy rights of citizens.

Aftermath

Amit Khare, Secretary, Ministry of Information and Broadcasting, has called the rules a "progressive institutional mechanism". Immediately following the publication of the rules, a number of platforms advised creators to exercise caution on the basis of the new rules. Petitions have been filed challenging the rules with respect to the digital news media. The Foundation for Independent Journalism editor M. K. Venu (The Wire) and The News Minute editor Dhanya Rajendran filed the first case challenging the rules. LiveLaw, The Quint and Pratidhvani have also challenged the rules in court. On 25 May 2021, the last day for intermediaries to comply, WhatsApp sued the Government of India over the rules. The Ministry of Electronics and Information Technology described the action as a "clear act of defiance". After a statement made by Twitter, the government released a press statement which said, "Protecting free speech in India is not the prerogative of only a private, for-profit, foreign entity like Twitter, but it is the commitment of the world's largest democracy and its robust institutions. Twitter's statement is an attempt to dictate its terms to the world's largest democracy. Through its actions and deliberate defiance, Twitter seeks to undermine India's legal system. Furthermore, Twitter refuses to comply with those very regulations in the Intermediary Guidelines on the basis of which it is claiming a safe harbour protection from any criminal liability in India."
On 5 July 2021, the government released a statement claiming that Twitter had lost its liability protection concerning user-generated content. This followed Twitter's failure to comply with the new rules; a filing stated that the company had failed to appoint executives to govern user content on the platform. In July 2021, the Press Trust of India moved the Delhi High Court over the rules.

See also
Information Technology Act, 2000
Personal Data Protection Bill 2019
Shreya Singhal v. Union of India

References

Notes

Citations

Further reading
(30 Dec 2019). "Opinion | Wikipedia must stay open". LiveMint
Torsha Sarkar (12 August 2019). "Rethinking the intermediary liability regime in India". Centre for Internet and Society (India)
Gurshabad Grover et al. (31 January 2019). Response to the Draft of The Information Technology [Intermediary Guidelines (Amendment) Rules] 2018. Centre for Internet and Society (India)
Joshi, Divij (January 31, 2019). Towards a Safer Social Media – Submissions to the Ministry of Information and Technology, Government of India, on the Draft Information Technology Intermediary Guidelines (Amendment) Rules, 2018. SSRN
Asia Internet Coalition (28 January 2019). "AIC Submits Comments on India's Information Technology Intermediary Guidelines (Amendment) Rules 2018" (submission)
Esya Centre (30 January 2019). "Response to the Draft Information Technology [Intermediary Guidelines (Amendment) Rules], 2018"
Consumer Unity and Trust Society (2019). "Counter Comments On The Submissions Received By Ministry Of Electronics And Information Technology On 'The Information Technology Intermediary Guidelines (Amendment) Rules, 2018'"
Yesha Tshering Paul (14 January 2019). "Fake News: Misguided Policymaking To Counter Misinformation". BloombergQuint

Information privacy
Data laws of Asia
Data protection
Law in India
Computing legislation
Information technology in India
Censorship in India
Internet in India
Modi administration
2018 in India
2018 in law
Medical privacy legislation
Cyber Security in India
2054965
https://en.wikipedia.org/wiki/Meanings%20of%20minor%20planet%20names%3A%2018001%E2%80%9319000
Meanings of minor planet names: 18001–19000
18001–18100 |-id=004 | 18004 Krystosek || || Rebecca Jennifer Krystosek, ISEF awardee in 2003 || |-id=009 | 18009 Patrickgeer || || Patrick L. Geer, ISEF awardee in 2003 || |-id=012 | 18012 Marsland || || Kyle Anthony Marsland, ISEF awardee in 2003 || |-id=013 | 18013 Shedletsky || || Anna-Katrina Shedletsky, ISEF awardee in 2003 || |-id=015 | 18015 Semenkovich || || Nicholas Paul Semenkovich, ISEF awardee in 2003 || |-id=016 | 18016 Grondahl || || Brian Jacob Grondahl, ISEF awardee in 2003 || |-id=019 | 18019 Dascoli || || Jennifer Anne D'Ascoli, 2004 ISTS finalist and ISEF awardee in 2003 || |-id=020 | 18020 Amend || || Gregory Amend, ISEF awardee in 2003 || |-id=021 | 18021 Waldman || || Sarah Elyse Waldman, ISEF awardee in 2003 || |-id=022 | 18022 Pepper || || Brian Jeffrey Pepper, ISEF awardee in 2003 || |-id=024 | 18024 Dobson || || John Dobson (1915–2014), an American telescope maker and amateur astronomer || |-id=026 | 18026 Juliabaldwin || || Julia Ruby Baldwin, ISEF awardee in 2003 || |-id=027 | 18027 Gokcay || || Chelsea Bahar Gokcay, ISEF awardee in 2003 || |-id=028 | 18028 Ramchandani || || Joia Ramchandani, ISEF awardee in 2003 || |-id=032 | 18032 Geiss || || Johannes Geiss (born 1926), a German-born space scientist at the Swiss University of Bern † || |-id=043 | 18043 Laszkowska || || Monika Laszkowska, ISEF awardee in 2003 || |-id=055 | 18055 Fernhildebrandt || || Fern C. Hildebrandt (born 1927) instilled and cultivated an interest in astronomy in codiscoverer Gary Hug at a very early age. Resident now in Topeka, Kansas, she has been an example of dedication and triumph through difficult times and has inspired this codiscoverer to search the night sky. || |-id=059 | 18059 Cavalieri || || Bonaventura Cavalieri (1598–1647), a friar and a professor at the University of Bologna. || |-id=060 | 18060 Zarex || || Zarex, from Greek mythology. He was a grandson of Chiron, married Rhoeo after she arrived on Delos and became the step-father of Anius. || |-id=075 | 18075 Donasharma || || Dona Sarah Sharma, ISEF awardee in 2003 || |-id=077 | 18077 Dianeingrao || || Diane L. Ingrao (born 1951), an American secretary of the Warren Astronomical Society in Detroit, Michigan || |-id=079 | 18079 Lion-Stoppato || || Piero Francesco Lion-Stoppato (born 1969), an Italian space scientist at University of Padua || |-id=084 | 18084 Adamwohl || || Adam Richard Wohl, ISEF awardee in 2003 || |-id=086 | 18086 Emilykraft || || Emily Michele Kraft, ISEF awardee in 2003 || |-id=087 | 18087 Yamanaka || || Yvonne Joy Yamanaka, ISEF awardee in 2003 || |-id=088 | 18088 Roberteunice || || Robert Earl Eunice, ISEF awardee in 2003 || |-id=090 | 18090 Kevinkuo || || Kevin Chester Kuo, ISEF awardee in 2003 || |-id=091 | 18091 Iranmanesh || || Arya Mohammad Iranmanesh, ISEF awardee in 2003 || |-id=092 | 18092 Reinhold || || Kimberly Elise Reinhold, ISEF awardee in 2003 || |-id=095 | 18095 Frankblock || || Frank Emmanuel Block, ISEF awardee in 2003 || |-id=099 | 18099 Flamini || || Enrico Flamini (born 1951), an Italian astronomer || |-id=100 | 18100 Lebreton || || Jean-Pierre Lebreton (born 1949), French astronomer || |} 18101–18200 |- | 18101 Coustenis || || Athéna Coustenis, French astronomer || |-id=102 | 18102 Angrilli || || Francesco Angrilli, Italian space scientist || |-id=104 | 18104 Mahalingam || || Satish Mahalingam, ISEF awardee in 2003 || |-id=106 | 18106 Blume || || William H. 
Blume, American senior space mission designer || |-id=110 | 18110 HASI || || The 44 members of the Huygens Atmospheric Structure Instrument (HASI) team || |-id=111 | 18111 Pinet || || Patrick Pinet, French astronomer || |-id=112 | 18112 Jeanlucjosset || || Jean-Luc Josset, Swiss astronomer, director of the Space Exploration Institute in Neuchâtel, Switzerland || |-id=113 | 18113 Bibring || || Jean-Pierre Bibring, French astronomer and planetary scientist || |-id=114 | 18114 Rosenbush || || Vera K. Rosenbush, Ukrainian astronomer || |-id=115 | 18115 Rathbun || || Donald Rathbun, American neurologist || |-id=116 | 18116 Prato || || Prato province, Tuscany, Italy, where the Museo di Scienze Planetarie (Museum of Planetary Sciences) is located || |-id=117 | 18117 Jonhodge || || Jonathon Hodge (born 1948), American teacher and astronomy communicator || |-id=119 | 18119 Braude || || Semen Ya. Braude, Russian radioastronomer || |-id=120 | 18120 Lytvynenko || || Leonid Mikolajovich Lytvynenko (Leonid Nikolaevich Lytvynenko), Ukrainian radioastronomer || |-id=121 | 18121 Konovalenko || || Alexandr A. Konovalenko, Ukrainian radioastronomer || |-id=122 | 18122 Forestamartin || || Franco Foresta Martin, Italian science popularizer, scientific editor for the newspaper Corriere della Sera || |-id=123 | 18123 Pavan || || Luciano Pavan, Italian musician, writer, painter and amateur astronomer || |-id=124 | 18124 Leeperry || || Lee Taylor Perry, ISEF awardee in 2003 || |-id=125 | 18125 Brianwilson || 2000 OF || Californian songwriter and record producer Brian Wilson (born 1942) contributed to 1960s pop culture, with songs like Fun Fun Fun, exemplifying the pastimes of modern teenage life, through the Beach Boys' pop group harmonies, giving out very good vibrations indeed. || |-id=127 | 18127 Denversmith || || Denver L. Smith, ISEF awardee in 2003 || |-id=128 | 18128 Wysner || || Laura C. Wysner, ISEF awardee in 2003 || |-id=132 | 18132 Spector || || Phil Spector, American record producer and songwriter † || |-id=142 | 18142 Adamsidman || || Adam Daniel Sidman, ISEF awardee in 2003 || |-id=148 | 18148 Bellier || || Guy and Caroline Bellier, French orthopedic surgeons, and their sons Thomas and Margaux || |-id=149 | 18149 Colombatti || || Giacomo Colombatti, Italian planetologist || |-id=150 | 18150 Lopez-Moreno || || José J. Lopez-Moreno, Spanish planetologist || |-id=151 | 18151 Licchelli || || Domenico Licchelli, Italian astronomer and popularizer || |-id=152 | 18152 Heidimanning || || Heidi L. K. Manning, American planetary scientist || |-id=155 | 18155 Jasonschuler || || Jason Michael Schuler, ISEF awardee in 2003 || |-id=156 | 18156 Kamisaibara || || Kamisaibara, the village in Okayama prefecture. || |-id=157 | 18157 Craigwright || || Craig John Wright, ISEF awardee in 2003 || |-id=158 | 18158 Nigelreuel || || Nigel Forest Reuel, ISEF awardee in 2003 || |-id=159 | 18159 Andrewcook || || Andrew Gordon Cook, ISEF awardee in 2003 || |-id=160 | 18160 Nihon Uchu Forum || || Nihon Uchu Forum, Japanese editor of the Japan Aerospace Exploration Agency (JAXA) annual NASDA Note. || |-id=161 | 18161 Koshiishi || || Hajime Koshiishi (born 1930) became interested in investigating minor planets as a natural resource. 
He organized a society for the study of NEAs and their resource utilization and made efforts toward the establishment of the Japan Spaceguard Association || |-id=162 | 18162 Denlea || || Jeremy Micah Denlea, ISEF awardee in 2003 || |-id=163 | 18163 Jennalewis || || Jenna Lyanne Lewis, ISEF awardee in 2003, and IFAA recipient || |-id=167 | 18167 Buttani || || Buttani Philippe (born 1966), a friend of one of the discoverers, started the "CCD adventure" with him in July 1994 || |-id=169 | 18169 Amaldi || 2000 QF || The nuclear physicist Edoardo Amaldi (1908–1989) was part of the team of Enrico Fermi and contributed to the completion of the first particle accelerator in Italy. || |-id=170 | 18170 Ramjeawan || || Khaivchandra Ramjeawan, ISEF awardee in 2003 || |-id=171 | 18171 Romaneskue || || Roman Garrick Eskue, ISEF awardee in 2003 || |-id=174 | 18174 Khachatryan || || George Alexander Khachatryan, ISEF awardee in 2003 || |-id=175 | 18175 Jenniferchoy || || Jennifer Tze-Heng Choy, ISEF awardee in 2003 || |-id=176 | 18176 Julianhong || || Julian C. Hong, ISEF awardee in 2003 || |-id=177 | 18177 Harunaga || || Jill Shizuko Harunaga, ISEF awardee in 2003 || |-id=180 | 18180 Irenesun || || Irene Yuan Sun, ISEF awardee in 2003 || |-id=182 | 18182 Wiener || || Norbert Wiener (1894–1964) contributed to many areas of mathematics, including cybernetics, stochastic processes and quantum theory. He was the author of the book Cybernetics: Or Control and Communication in the Animal and the Machine (1948). || |-id=184 | 18184 Dianepark || || Diane Hyemin Park, ISEF awardee in 2003 || |-id=189 | 18189 Medeobaldia || || Maria Elena De Obaldia, ISEF awardee in 2003 || |-id=190 | 18190 Michaelpizer || || Michael J. Pizer, ISEF awardee in 2003 || |-id=191 | 18191 Rayhe || || Ray Chengchuan He, ISEF awardee in 2003 || |-id=192 | 18192 Craigwallace || || Craig J. Wallace, ISEF awardee in 2003 || |-id=193 | 18193 Hollilydrury || || Hollilyne Drury, ISEF awardee in 2003 || |-id=196 | 18196 Rowberry || || Megan Rowberry, ISEF awardee in 2003 || |} 18201–18300 |-id=228 | 18228 Hyperenor || 3163 T-1 || Hyperenor, one of the sons of Panthoos and a great hero on the Trojan side. || |-id=235 | 18235 Lynden-Bell || 1003 T-2 || Donald Lynden-Bell, a professor at the University of Cambridge. || |-id=236 | 18236 Bernardburke || 1059 T-2 || Bernard Burke (born 1928), a professor of physics at the Massachusetts Institute of Technology. || |-id=237 | 18237 Kenfreeman || 1182 T-2 || Kenneth C. Freeman, a professor at the Australian National University. || |-id=238 | 18238 Frankshu || 1241 T-2 || Frank Shu (born 1943), president of National Tsing Hua University in Taiwan and former professor at the University of California, Berkeley. || |-id=239 | 18239 Ekers || 1251 T-2 || Ronald Ekers, current president of the IAU and ex-director of the Australia Telescope National Facility and of the Very Large Array. || |-id=240 | 18240 Mould || 1317 T-2 || Jeremy Mould, Australian astronomer || |-id=241 | 18241 Genzel || 1325 T-2 || Reinhard Genzel (born 1952), German astronomer and 2020 Physics Nobel laureate || |-id=242 | 18242 Peebles || 2102 T-2 || Princeton theoretical cosmologist Jim Peebles (born 1935), winner of the 2019 Physics Nobel Prize, plays a central role in the understanding of the evolution and structure of the universe. His studies of the evolution of matter in the earliest moments of the universe were critical in the establishment of the Big Bang theory as a widely accepted hypothesis.
|| |-id=243 | 18243 Gunn || 2272 T-2 || James Edward Gunn, a professor at Princeton University. || |-id=244 | 18244 Anneila || 3008 T-2 || Anneila Sargent, American astronomer || |-id=263 | 18263 Anchialos || 5167 T-2 || The Greek heroes Anchialos and Menestheus were together on their chariot when they were killed by Hector. || |-id=268 | 18268 Dardanos || 2140 T-3 || Dardanos, a son of Zeus and a nymph, mythical ancestor of the Trojans. || |-id=278 | 18278 Drymas || 4035 T-3 || Drymas, a king of Phrygia and father of Priam's second wife Hekabe (in Latin, Hecuba). || |-id=281 | 18281 Tros || 4317 T-3 || Tros, a grandson of Dardanos. His country was named Troas after him, and its principal city was Troy. || |-id=282 | 18282 Ilos || 4369 T-3 || Ilos, the oldest son of Tros, built the citadel Ilion, also named Ilios. Ilos was the father of Laomedon and the grandfather of Priam. || |-id=284 | 18284 Tsereteli || 1970 PU || Zurab Konstantinovich Tsereteli (born 1934), world-renowned Russian sculptor. || |-id=285 | 18285 Vladplatonov || 1972 GJ || Vladimir Petrovich Platonov (born 1938), well-known journalist and documentary-film director, is the author of many books, articles and films about the creators of space-rocket technologies and the many challenges in that field || |-id=286 | 18286 Kneipp || || Sebastian Kneipp (1821–1897), a German priest, skilled in the art of healing, introduced manifold applications of cold and warm water and suggested that a healthy way of living conformed to nature. His papers were translated into many languages and were an essential influence on modern physical therapeutics and balneology. || |-id=287 | 18287 Verkin || || Boris Ieremievich Verkin (1919–1990), a Ukrainian Soviet physicist and creator of the scientific school of cryogenic physics and technology, was the founder and first director of the Institute for Low Temperature Physics and Engineering in Kharkiv || |-id=288 | 18288 Nozdrachev || || Aleksandr Danilovich Nozdrachev (born 1931), a professor and head of physiology at St. Petersburg University. || |-id=289 | 18289 Yokoyamakoichi || || Koichi Yokoyama (born 1940) is a professor emeritus of the National Astronomical Observatory of Japan. || |-id=290 | 18290 Sumiyoshi || || Sumiyoshi, in the south of Osaka prefecture, is an important port for trade between Japan and the Korean Peninsula. The Sumiyoshi Taisha shrine, a guardian of voyage, was founded in 211. At the shrine there is a lighthouse, believed to be the oldest in Japan. || |-id=291 | 18291 Wani || || Wani was a scholar who came to Japan from Korea in the second half of the 4th century. He brought ten volumes of The Analects of Confucius and one volume of the Thousand Character Classic to Japan. || |-id=292 | 18292 Zoltowski || 1977 FB || Frank B. Zoltowski (born 1957), an Australian discoverer of minor planets who made numerous critical observations of near-Earth objects, notably a dramatic recovery of 1999 AN10, while he was working in South Australia during 1997–1999. He continued to make astrometric contributions on his return to the U.S. || |-id=293 | 18293 Pilyugin || || Nikolay Alekseyevich Pilyugin, 20th-century Russian designer of autonomous control systems and computers for space rocketry || |-id=294 | 18294 Rudenko || || Anatolij Afanas'evich Rudenko (born 1949) is a full member of the Tsiolkovsky Russian Academy of Cosmonautics and an authority on systems analysis and high technology.
He was a member of the team that created aerospace systems and developed powerful liquid-propellant engines || |-id=295 | 18295 Borispetrov || || Boris Mikhajlovich Petrov, Russian journalist, director of the St. Petersburg regional center of the Russian News Agency ITAR-TASS || |} 18301–18400 |- | 18301 Konyukhov || || Fyodor Fyodorovich Konyukhov (born 1951) has undertaken 50 extensive journeys, mainly alone. He conquered both poles and all the highest mountains of the world. The renowned Russian traveler has taken many of the world's most difficult land and sea routes and has sailed around the world three times. || |-id=302 | 18302 Körner || || Harald Körner (1881–1953) was headmaster of the private elementary school in Lund from 1916 to 1944, and a proponent of education for girls. || |-id=321 | 18321 Bobrov || || Vsevolod Bobrov (1922–1979), a Soviet ice hockey and football champion || |-id=322 | 18322 Korokan || || Korokan was a guest house for foreign envoys built in Chikushi (now Fukuoka city) in the 8th century. || |-id=334 | 18334 Drozdov || || Nikolaj Nikolaevich Drozdov (born 1937), a Russian professor of biology and the author and chief producer of the very popular TV program V mire zhivotnykh (In the World of Animals). || |-id=335 | 18335 San Cassiano || || San Cassiano, an Italian village in the hills near Verona in northern Italy, is renowned for its high-quality oil (Grignano) and wine (Amarone). Its isolated location affords views of both the Alps and the Adriatic Sea. || |-id=343 | 18343 Asja || 1989 TN || Asja Geyer-Fischer (born 1934) is a splendid pianist with a great love for Mozart and Chopin. She is an especially good teacher for children. In 1962 she followed her husband, astronomer E. H. Geyer, to the Boyden Observatory, South Africa, where he had been appointed director of the observatory for two years. || |-id=349 | 18349 Dafydd || || Dafydd ap Llywelyn (c. 1212–1246), prince of Wales || |-id=359 | 18359 Jakobstaude || || Jakob Staude (born 1944) is staff astronomer at the Heidelberg Max Planck Institute for Astronomy and a well-known expert on star formation. Since 1981 Staude has also served as editor-in-chief of the German journal Sterne und Weltraum. || |-id=360 | 18360 Sachs || || Hans Sachs (1494–1576), master of the shoemaker guild in Nuremberg from 1520, is the most important German poet of the sixteenth century. || |-id=365 | 18365 Shimomoto || || Shigeo Shimomoto (born 1963), a Japanese amateur astronomer and computer programmer. || |-id=368 | 18368 Flandrau || || The Flandrau Science Center and Planetarium in Tucson, Arizona. It is part of the University of Arizona. || |-id=376 | 18376 Quirk || 1991 SQ || Steve Quirk (born 1958), an Australian amateur astronomer and astrophotographer who also operates fireball patrol and meteor video cameras. || |-id=377 | 18377 Vetter || || John Francis Vetter (born 1945), an Australian amateur astronomer and retired automotive mechanic, who established the Mudgee Observatory in 2005. || |-id=379 | 18379 Josévandam || || José van Dam (born 1940), a Belgian bass-baritone, who entered the Brussels Royal Conservatory at the age of 17 || |-id=381 | 18381 Massenet || 1991 YU || Jules Massenet (1842–1912) was a prolific French composer of operas. His greatest successes were Manon (1884), Werther (1892) and Thaïs (1894). The Méditation, a violin solo with orchestra from Thaïs, became world-famous.
In 1878 he was elected a member of the Académie des Beaux-Arts || |-id=395 | 18395 Schmiedmayer || || (born 1960), an Austrian physicist and a leading expert in the field of quantum optics. || |-id=396 | 18396 Nellysachs || || Nelly Sachs (1891–1970), a German-Swedish poet, dramatist, and Nobel Prize winner. || |-id=398 | 18398 Bregenz || || Bregenz, capital of the Austrian province of Vorarlberg || |-id=399 | 18399 Tentoumushi || || The Tentoumushi astronomy club was named after the seven-starred ladybug. The club received an award from the city of Komatsu for its astronomy popularization. || |-id=400 | 18400 Muramatsushigeru || || Shigeru Muramatsu (born 1951), a Japanese amateur astronomer living in Imabari, Ehime. || |} 18401–18500 |-id=403 | 18403 Atsuhirotaisei || 1993 AG || Atsuhiro Ikuta (1999–2011) and Taisei Ikuta (2003–2011) were two brothers who loved the stars. They died in an automobile accident on the night of 2011 December 10, on their return home from viewing a total lunar eclipse || |-id=404 | 18404 Kenichi || || Kenichi Miyoshi, an amateur astronomer who has contributed to astronomical awareness in Ehime Prefecture over many years. || |-id=412 | 18412 Kruszelnicki || 1993 LX || Karl Kruszelnicki (born 1948), an Australian science communicator. || |-id=413 | 18413 Adamspencer || || Adam Spencer (born 1969) is an Australian mathematics communicator, television and radio presenter. || |-id=418 | 18418 Ujibe || || Tadashi Ujibe, an amateur astronomer who constructed the three-meter dome of his own private observatory. || |-id=426 | 18426 Maffei || || Paolo Maffei, Italian astronomer † || |-id=430 | 18430 Balzac || || Honoré de Balzac (1799–1850), the creator of the French realistic novel. || |-id=431 | 18431 Stazzema || 1994 BM || Stazzema, a pleasant village located in the Alpi Apuane mountains of Tuscany, Italy. Since 2000, it has been the site of the Italian Park of Peace. Name proposed by Mario Di Martino || |-id=434 | 18434 Mikesandras || || Mike Sandras, American director of the Kenner Planetarium, Louisiana. || |-id=441 | 18441 Cittadivinci || 1994 PE || Vinci is a small, beautiful village in Tuscany, where the great genius Leonardo da Vinci was born in 1452. For this reason it is visited by thousands of people each year, eager to visit the museum and see Leonardo's machines. || |-id=449 | 18449 Rikwouters || || Rik Wouters, 19th/20th-century Belgian fauve painter and sculptor || |-id=453 | 18453 Nishiyamayukio || 1994 TT || Yukio Nishiyama (born 1950) is the president of a shipbuilding design company who spends his evenings as an amateur astronomer. || |-id=456 | 18456 Mišík || 1995 ES || Vladimír Mišík (born 1947), a Czech rock and blues guitarist, singer and songwriter. || |-id=458 | 18458 Caesar || || Gaius Julius Caesar (100–44 B.C.) promulgated in 46 B.C., on the advice of the Alexandrine astronomer Sosigenes, what is now called the Julian calendar. || |-id=460 | 18460 Pecková || 1995 PG || Dagmar Pecková, Czech mezzo-soprano † || |-id=461 | 18461 Seiichikanno || 1995 QQ || Seiichi Kanno (born 1954) is an education consultant and an amateur astronomer, who has observed the planets since 1970. He built an observatory in Kaminoyama city, Yamagata, in 1989, and now observes the planets with a video camera || |-id=462 | 18462 Riccò || || Annibale Riccò, Italian astronomer † || |-id=467 | 18467 Nagatatsu || || Tatsuo Nagahama (born 1952), an amateur astronomer.
|| |-id=469 | 18469 Hakodate || || Hakodate, located at the southernmost part of Hokkaido, is a prosperous city of fishing and tourism. The night view from Mount Hakodate is one of the best tourist attractions in Japan || |-id=472 | 18472 Hatada || || Naoki Hatada (born 1967), an editor of the Inagawa Observatory web site since 2003. || |-id=473 | 18473 Kikuchijun || || Jun Kikuchi (born 1967) purchased his first telescope during the height of Halley's Comet fever in 1986. Though cloudy skies thwarted his attempts at comet photography, his interest in solar eclipse photography led him to France in 1999, and to China in 2008 and 2009 || |-id=493 | 18493 Demoleon || || Demoleon, a Trojan warrior and son of Antenor, was struck in the head by Achilles' spear || |-id=497 | 18497 Nevězice || || Nevězice, village and place of a Celtic oppidum in central Bohemia, the Czech Republic † || |-id=498 | 18498 Cesaro || 1996 MN || Ernesto Cesàro (1859–1906), a prolific mathematician and professor at the universities of Palermo and Naples. || |-id=499 | 18499 Showalter || 1996 MR || Mark R. Showalter (born 1957), planetary scientist at the SETI Institute, is (co-)discoverer of the Jovian gossamer ring, Saturnian moon Pan, Uranian moons Mab and Cupid, two faint Uranian rings, Neptunian moon S/2004 N 1, and Plutonian moons Kerberos and Styx. He is the leader of the Planetary Data System Rings Node || |} 18501–18600 |-id=505 | 18505 Caravelli || || Vito Caravelli (1724–1800), a professor of mathematics at the Naval Institute of Naples. || |-id=509 | 18509 Bellini || || Vincenzo Bellini (1801–1835), an Italian composer best known for his "Norma" and "I puritani". || |-id=510 | 18510 Chasles || 1996 SN || Michel Chasles (1793–1880), a professor at the École Polytechnique and later at the Sorbonne. || |-id=520 | 18520 Wolfratshausen || || Wolfratshausen, a city in southern Bavaria, Germany, has a long history extending back to the original name found in court papers by Holy Roman Emperor Heinrich II in 1003. Rainer Maria Rilke (1875–1926) stayed in the city with Lou Andreas-Salome (1861–1937) in 1897 || |-id=524 | 18524 Tagatoshihiro || || Toshihiro Taga (born 1951) is a Japanese amateur astronomer and president of the Tottori Society of Astronomy. He is a popularizer of astronomy. || |-id=531 | 18531 Strakonice || || Strakonice, a town in southern Bohemia, the Czech Republic † ‡ || |-id=542 | 18542 Broglio || || Luigi Broglio, Italian aeronautical engineer, creator and director of the San Marco programme † || |-id=548 | 18548 Christoffel || || Elwin Bruno Christoffel (1829–1900), a professor at various German universities. || |-id=550 | 18550 Maoyisheng || || Yisheng Mao (1896–1989) was a world-renowned scientist and one of the founders of modern bridge engineering in China. || |-id=553 | 18553 Kinkakuji || || Kinkakuji is the popular name of a gilded pavilion in the Rokuon-ji temple complex (a World Cultural Heritage site) in Kyoto, Japan. || |-id=555 | 18555 Courant || || Richard Courant (1888–1972) studied and later taught at Göttingen. In 1934 he became a professor at New York University, where he founded and led one of the most prestigious institutes of applied mathematics, later named in his honor. || |-id=556 | 18556 Battiato || || Franco Battiato, Italian (Sicilian) multifaceted artist and amateur astronomer † || |-id=560 | 18560 Coxeter || || Harold Scott MacDonald Coxeter (born 1907), an English-Canadian mathematician and former professor at the University of Toronto.
|| |-id=561 | 18561 Fengningding || || Fengning Ding (born 1994), ISTS awardee in 2012 || |-id=562 | 18562 Ellenkey || || Ellen Key (1849–1926) was a Swedish feminist and writer on subjects such as family life, ethics and education. She was an early advocate of a child-centered approach to education. || |-id=563 | 18563 Danigoldman || || Danielle Goldman (born 1994), ISTS awardee in 2012 || |-id=564 | 18564 Caseyo || || Casey O'Connell, mentor at the ISTS in 2012 || |-id=565 | 18565 Selg || || Timothy Selg, mentor at the ISTS in 2012 || |-id=567 | 18567 Segenthau || || Segenthau, Banat village and childhood home of the discoverer † || |-id=568 | 18568 Thuillot || || William Thuillot (born 1951) works at the Institut de Mécanique Céleste on the theory of the motions of Jupiter's Galilean satellites, including analysis of observations of eclipses by the planet and mutual phenomena. || |-id=572 | 18572 Rocher || || Patrick Rocher (born 1951) works at the Institut de Mécanique Céleste in Paris. His main task has been to build an integration package to compute orbital parameters for minor planets and comets. || |-id=574 | 18574 Jeansimon || || Jean-Louis Simon (born 1940) works at the Paris Institut de Mécanique Céleste on analytical planetary theory. He produced the first values of the secular variation of the orbital semimajor axes of the planets. || |-id=579 | 18579 Duongtuyenvu || || Duong Tuyen Vu (born 1933) works at the Paris Institut de Mécanique Céleste on ephemerides of natural satellites. || |-id=581 | 18581 Batllo || || Valerie Batllo (born 1967) works on cometary orbits at the Institut de Mécanique Céleste in Paris. She studies in particular how the short-period comets could be produced by encounters with the giant planets. || |-id=583 | 18583 Francescopedani || || Francesco Pedani (1953–1998) was an amateur astronomer, biologist and school teacher of science and mathematics. In 1988 he founded the Società Astronomica Fiorentina, an association of amateur astronomers based in Florence, Italy. He was its first president until his untimely death. || |-id=593 | 18593 Wangzhongcheng || || Wang Zhongcheng (born 1925), neurosurgeon-academician of the Chinese Academy of Engineering. || |-id=596 | 18596 Superbus || || Tarquinius Superbus, seventh and last king of Rome, reigned from 534 to 509 B.C. || |} 18601–18700 |- | 18601 Zafar || || Abu-Bakr Zafar (born 1985), ISEF awardee in 2003 || |-id=602 | 18602 Lagillespie || || Lacy Ann Gillespie (born 1985), ISEF awardee in 2003 || |-id=605 | 18605 Jacqueslaskar || || Jacques Laskar (born 1955) is principally concerned with the chaotic behavior of the principal planets. A staff member of the Institut de Mécanique Céleste in Paris, he was the first to show the chaotic motion of the inner solar system and the stabilization of the obliquity of the ecliptic by the moon. || |-id=609 | 18609 Shinobuyama || || Shinobuyama, called "Fuku-Shima" many centuries ago, is a small mountain in Fukushima city, Japan. This beloved mountain is the symbol of the city. || |-id=610 | 18610 Arthurdent || || Arthur Philip Dent, character in The Hitchhiker's Guide to the Galaxy† || |-id=611 | 18611 Baudelaire || || French poet Charles Baudelaire (1821–1867) was one of the major innovators of French literature. His Les Fleurs du Mal (1857) is considered to rank with the finest of French poetry.
Baudelaire is particularly known for his excellent translations of the Tales of Poe, a writer whose style much resembled his own || |-id=617 | 18617 Puntel || || Nathalie Puntel (born 1968) is a French woman who prefers deep-sky pictures to searches for minor planets. || |-id=623 | 18623 Pises || || Observatoire des Pises, which was inaugurated in 1991, is located in the south of France. It is the observatory of the Montpellier astronomical society. || |-id=624 | 18624 Prévert || || Jacques Prévert, French poet and screenwriter. || |-id=626 | 18626 Michaelcarr || || Michael Carr (born 1947) is an instrument maker who worked at Caltech and then Princeton University. || |-id=627 | 18627 Rogerbonnet || || Roger-Maurice Bonnet (born 1937) is a French experimental astrophysicist specializing in stellar physics. From 1983 to 2001 he was Science Director of ESA, and he created Horizon 2000. Under his leadership, ESA launched the scientific projects Giotto, Hipparcos, ISO, XMM, SOHO, Cluster, Cassini-Huygens and HST. || |-id=628 | 18628 Taniasagrati || || Tania Sagrati (1967–2012) was the cousin of the second discoverer. She graduated from the Art Institute of Firenze and worked as an interior decorator. || |-id=631 | 18631 Maurogherardini || || Mauro Gherardini (1941–2008), a surveyor by profession, was a great lover of the sky. He was a popularizer of astronomy, promoting astro-navigation at evening school events. || |-id=632 | 18632 Danielsson || || Ann-Kristin "Kikki" Danielsson (born 1952) is a well-known and popular country singer from Sweden. || |-id=634 | 18634 Champigneulles || || Champigneulles, Meurthe-et-Moselle, France. || |-id=635 | 18635 Frouard || || Frouard, Meurthe-et-Moselle, France. || |-id=636 | 18636 Villedepompey || || Pompey, a French village || |-id=637 | 18637 Liverdun || || Liverdun, a French village || |-id=638 | 18638 Nouet || || Nicolas-Antoine Nouet (1740–1811), an astronomer at the Observatoire de Paris, traveled to St. Domingue to map the island. Later he mapped the Rhine region and traveled with Napoleon Bonaparte to Egypt, where he created a map of that country. || |-id=639 | 18639 Aoyunzhiyuanzhe || || Aoyunzhiyuanzhe, meaning "Olympic Games Volunteer", honors the 1.7 million volunteers whose work, devotion, smiles and service during the 2008 Olympic and Paralympic Games touched the whole world, setting a milestone in voluntary service and opening a fresh chapter in volunteerism in China || |-id=643 | 18643 van Rysselberghe || || Théo van Rysselberghe, 19th/20th-century Belgian pointillistic and impressionistic painter || |-id=644 | 18644 Arashiyama || || Arashiyama, situated west of Kyoto city, is the area that includes Arashiyama mountain and the shores of the Katsuragawa river, including the Togetsukyo bridge. It is known nationally for its cherry blossoms and colorful autumn leaves and is designated as a National Historic Site and Place of Scenic Beauty.
|| |-id=647 | 18647 Václavhübner || || Václav Hübner (1922–2000), a Czech amateur astronomer † || |-id=649 | 18649 Fabrega || || Joaquin Fabrega (born 1967), an amateur astronomer from Panama || |-id=653 | 18653 Christagünt || || Christa and Günter Rothermel, parents of uncredited German co-discoverer Jens Rothermel || |-id=656 | 18656 Mergler || || Natalie Rose Mergler, ISEF awardee in 2003 || |-id=658 | 18658 Rajdev || || Priya Ashoke Rajdev, ISEF awardee in 2003 || |-id=659 | 18659 Megangross || || Megan Chaya Gross, ISEF awardee in 2003 || |-id=661 | 18661 Zoccoli || || Christina Marie Mariolana Zoccoli, ISEF awardee in 2003 || |-id=662 | 18662 Erinwhite || || Erin Margaret White, ISEF awardee in 2003 || |-id=663 | 18663 Lynnta || || Lynn Marie Torrech-Antonetty, ISEF awardee in 2003 || |-id=664 | 18664 Rafaelta || || Rafael Andres Torrech-Antonetty, ISEF awardee in 2003 || |-id=665 | 18665 Sheenahayes || || Sheena Marie Hayes, ISEF awardee in 2003 || |-id=668 | 18668 Gottesman || || David Alexander Gottesman, ISEF awardee in 2003 || |-id=669 | 18669 Lalitpatel || || Lalit Ramesh Patel, ISEF awardee in 2003, and IFAA recipient || |-id=670 | 18670 Shantanugaur || || Shantanu Kadir Gaur, ISEF awardee in 2003 || |-id=671 | 18671 Zacharyrice || || Zachary Philip Rice, ISEF awardee in 2003 || |-id=672 | 18672 Ashleyamini || || Ashley Ali Amini, ISEF awardee in 2003 || |-id=675 | 18675 Amiamini || || Ami Rebecca Amini, ISEF awardee in 2003 || |-id=676 | 18676 Zdeňkaplavcová || || Zdeňka Plavcová, Czech radio-astronomer † || |-id=679 | 18679 Heatherenae || || Heather Renae Messick, ISEF awardee in 2003 || |-id=680 | 18680 Weirather || || Sara Jo Weirather, ISEF awardee in 2003 || |-id=681 | 18681 Caseylipp || || Casey Albert Lipp, ISEF awardee in 2003 || |-id=689 | 18689 Rodrick || || Richard Jean Rodrick, ISEF awardee in 2003 || |-id=697 | 18697 Kathanson || || Kathleen Suzanne Hanson, ISEF awardee in 2003 || |-id=698 | 18698 Racharles || || Rachael Ann Charles, ISEF awardee in 2003 || |-id=699 | 18699 Quigley || || Carolyn Ann Quigley, ISEF awardee in 2003 || |} 18701–18800 |-id=702 | 18702 Sadowski || || John Paul Sadowski, ISEF awardee in 2003 || |-id=704 | 18704 Brychristian || || Bryan William Christian, ISEF awardee in 2003 || |-id=707 | 18707 Annchi || || Ann Chi, ISTS awardee in 2004, and ISEF in 2003 || |-id=708 | 18708 Danielappel || || Daniel Clayton Appel, ISEF awardee in 2003 || |-id=709 | 18709 Laurawong || || Laura Anne Wong, ISEF awardee in 2003 || |-id=720 | 18720 Jerryguo || || Jerry Ji Guo, ISEF awardee in 2003 || |-id=725 | 18725 Atacama || || The Atacama desert, which covers regions II, III and IV of Chile, is one of the driest deserts on Earth. || |-id=727 | 18727 Peacock || || Anthony J. Peacock, British-Dutch(?) project scientist for the European Space Agency Exosat and XMM-Newton missions || |-id=728 | 18728 Grammier || || Richard ("Rick") S. Grammier (1955–2011) was director of solar system exploration at NASA's Jet Propulsion Laboratory. || |-id=729 | 18729 Potentino || || Potentino castle, near Seggiano, Tuscany, Italy. || |-id=730 | 18730 Wingip || || Wing Ip (born 1947) is Vice Chancellor of the University system of Taiwan. || |-id=731 | 18731 Vil'bakirov || || Vil' S. Bakirov (born 1946) is a Ukrainian sociologist, president of the Sociological Association and corresponding member of the National Academy of Sciences of Ukraine. Since 1998 he has served as rector of Kharkiv V. N. 
Karazin National University, where he has promoted the development of astronomy and other sciences || |-id=734 | 18734 Darboux || || Jean-Gaston Darboux (1842–1917), a French mathematician and professor at the Sorbonne. || |-id=735 | 18735 Chubko || || Larysa Sergiivna Chubko, Ukrainian astronomer || |-id=737 | 18737 Aliciaworley || || Alicia Lorraine Worley, ISEF awardee in 2003 || |-id=739 | 18739 Larryhu || || Larry Zhixing Hu, ISEF awardee in 2003 || |-id=745 | 18745 San Pedro || || San Pedro de Atacama, a town in Chile's region II, was inhabited by the Likan Antay population for thousands of years. || |-id=747 | 18747 Lexcen || || Ben Lexcen, Australian marine architect † || |-id=749 | 18749 Ayyubguliev || || Ayyub Salakh-ogly Guliev, Azeri astronomer, director of the Shamakhi Astrophysical Observatory || |-id=750 | 18750 Leonidakimov || || Leonid Afanas'evich Akimov, Ukrainian planetary scientist || |-id=751 | 18751 Yualexandrov || || Yurij Vladimirovich Alexandrov, Ukrainian planetary scientist || |-id=755 | 18755 Meduna || || Matthew Paul Meduna, ISEF awardee in 2003 || |-id=766 | 18766 Broderick || || Tamara Ann Broderick, ISEF awardee in 2003 || |-id=768 | 18768 Sarahbates || || Sarah Woodring Bates, ISEF awardee in 2003 || |-id=770 | 18770 Yingqiuqilei || || Yingqiuqi Lei, ISEF awardee in 2003 || |-id=771 | 18771 Sisiliang || || Sisi Liang, ISEF awardee in 2003 || |-id=773 | 18773 Bredehoft || || Belle Dean Bredehoft, ISEF awardee in 2003 || |-id=774 | 18774 Lavanture || || Douglas George Lavanture, ISEF awardee in 2003 || |-id=775 | 18775 Donaldeng || || Donald Eng, ISEF awardee in 2003 || |-id=776 | 18776 Coulter || || Michael Edward Coulter, ISEF awardee in 2003 || |-id=777 | 18777 Hobson || || Christina Nicole Hobson, ISEF awardee in 2003 || |-id=779 | 18779 Hattyhong || || Hatty Hong, ISEF awardee in 2003 || |-id=780 | 18780 Kuncham || || Vivek Kuncham, ISEF awardee in 2003 || |-id=781 | 18781 Indaram || || Maanasa Indaram, ISEF awardee in 2003 || |-id=782 | 18782 Joanrho || || Joan Young Rho, ISEF awardee in 2003 || |-id=783 | 18783 Sychamberlin || || Sydney JoAnne Chamberlin, ISEF awardee in 2003 || |-id=785 | 18785 Betsywelsh || || Elizabeth Jean Welsh, ISEF awardee in 2003 || |-id=786 | 18786 Tyjorgenson || || Tyler Lee Jorgenson, ISEF awardee in 2003 || |-id=787 | 18787 Kathermann || || Katherine Laura Hermann, ISEF awardee in 2003 || |-id=788 | 18788 Carriemiller || || Carrie Anna Miller, ISEF awardee in 2003 || |-id=789 | 18789 Metzger || || Vincent Tyler Metzger, ISEF awardee in 2003 || |-id=790 | 18790 Ericaburden || || Erica Mariel Burden, ISEF awardee in 2003 || |-id=794 | 18794 Kianafrank || || Kiana Laieikawai Frank, ISEF awardee in 2003 || |-id=796 | 18796 Acosta || || Iyen Abdon Acosta, ISEF awardee in 2003 || |-id=800 | 18800 Terresadodge || || Terresa Louise Dodge, ISEF awardee in 2003 || |} 18801–18900 |- | 18801 Noelleoas || || Noelle Joan Oas, ISEF awardee in 2003 || |-id=803 | 18803 Hillaryoas || || Hillary Joan Oas, ISEF awardee in 2003 || |-id=805 | 18805 Kellyday || || Kelly Jean Day, ISEF awardee in 2003 || |-id=806 | 18806 Zachpenn || || Zach Penn, ISEF awardee in 2003 || |-id=809 | 18809 Meileawertz || || Meilea Elise Wertz, ISEF awardee in 2003 || |-id=812 | 18812 Aliadler || || Alexandra Raisa Adler, ISEF awardee in 2003 || |-id=814 | 18814 Ivanovsky || || Oleg Genrikhovich Ivanovsky, Russian Deputy Chief Designer for the Soviet Luna and Lunokhod missions, a designer for the Vostok spacecraft, director of the Museum of the Lavochkin Space 
Association in Moscow || |-id=818 | 18818 Yasuhiko || || Yasuhiko Takahashi (born 1934), the younger brother-in-law of the discoverer. || |-id=821 | 18821 Markhavel || || Mark Junichi Havel, ISEF awardee in 2003 || |-id=823 | 18823 Zachozer || || Zachary Adam Ozer, ISEF awardee in 2003 || |-id=824 | 18824 Graves || || Daniel David Graves, ISEF awardee in 2003 || |-id=825 | 18825 Alicechai || || Alice Wan Chai, ISEF awardee in 2003 || |-id=826 | 18826 Leifer || || Andrew Michael Leifer, ISEF awardee in 2003 || |-id=830 | 18830 Pothier || || David Guillaume Pothier, ISEF awardee in 2003 || |-id=836 | 18836 Raymundto || || Raymund Chun-Hung To, ISEF awardee in 2003 || |-id=838 | 18838 Shannon || 1999 OQ || Claude Elwood Shannon (1916–2001), an American scientist || |-id=839 | 18839 Whiteley || 1999 PG || Brett Whiteley (1939–1992), an abstract artist and Australia's leading painter of his generation, who won all of the major Australian art prizes many times over || |-id=840 | 18840 Yoshioba || || Yoshio Oba (born 1934) is a retired professor of earth sciences at Yamagata University and an amateur astronomer who observes occultations. || |-id=841 | 18841 Hruška || || Luboš Hruška, Czech creator of the Monument to the Victims of Evil in Plzeň † || |-id=843 | 18843 Ningzhou || || Ning Zhou, ISTS awardee in 2004, and ISEF awardee in 2003 || |-id=845 | 18845 Cichocki || || Bruno Cichocki, civil engineer and amateur astronomer || |-id=851 | 18851 Winmesser || || Winston Harmon Messer, ISEF awardee in 2003 || |-id=855 | 18855 Sarahgutman || || Sarah Elizabeth Gutman, ISEF awardee in 2003 || |-id=857 | 18857 Lalchandani || || Rupa Lalchandani, ISEF awardee in 2003 || |-id=858 | 18858 Tecleveland || || Thomas Edgar Cleveland, ISEF awardee in 2003 || |-id=861 | 18861 Eugenishmidt || || Eugenia Shmidt, ISEF awardee in 2003 || |-id=862 | 18862 Warot || || Gregory Andrew Warot, ISEF awardee in 2003 || |-id=871 | 18871 Grauer || || Albert D. Grauer (born 1942), an American astronomer. || |-id=872 | 18872 Tammann || || Gustav Tammann, Swiss cosmologist † || |-id=873 | 18873 Larryrobinson || || Larry Robinson, American astronomer † || |-id=874 | 18874 Raoulbehrend || || Raoul Behrend, Swiss astronomer † || |-id=876 | 18876 Sooner || 1999 XM || a "sooner", a person who settled on land in the early American West before it was officially opened to settlement. The name particularly honors the U.S. state of Oklahoma and the University of Oklahoma, alma mater of the discoverer. || |-id=877 | 18877 Stevendodds || || Steven L. Dodds (born 1961) has been furnishing telescope optics for the astronomical community since 1986. He constructed two parabolic off-axis segments (adaptive optic components) used in the Gemini North 8.1-meter telescope located on Mauna Kea. || |-id=880 | 18880 Toddblumberg || || Todd James Blumberg, ISEF awardee in 2003 || |-id=883 | 18883 Domegge || || Domegge di Cadore is a small town nestled in the northeastern Italian Alps, surrounded by the rose-colored Dolomites. Its very dark and clear skies are an inspiration to any astronomer. || |-id=887 | 18887 Yiliuchen || || Yiliu Chen, ISEF awardee in 2003 || |-id=891 | 18891 Kamler || || Jonathan Jacques Kamler, ISEF awardee in 2003 || |} 18901–19000 |-id=903 | 18903 Matsuura || || Takeshirou Matsuura (1818–1888), a Japanese geographer and explorer. || |-id=905 | 18905 Weigan || || Wei Gan, ISEF awardee in 2003 || |-id=907 | 18907 Kevinclaytor || || Kevin E.
Claytor, ISEF awardee in 2003 || |-id=910 | 18910 Nolanreis || || Nolan Herman Reis, ISEF awardee in 2003 || |-id=912 | 18912 Kayfurman || || Kay Dee Furman, ISEF awardee in 2003 || |-id=918 | 18918 Nishashah || || Nisha Vikram Shah, ISEF awardee in 2003 || |-id=923 | 18923 Jennifersass || || Jennifer Rose Sass, ISEF awardee in 2003 || |-id=924 | 18924 Vinjamoori || || Anant Vinjamoori, ISEF awardee in 2003 || |-id=928 | 18928 Pontremoli || || Pontremoli is an Italian town. || |-id=930 | 18930 Athreya || || Khannan Kameshvaran Athreya, ISEF awardee in 2003 || |-id=932 | 18932 Robinhood || || Robin Hood, the legendary thirteenth-century English archer and outlaw of Sherwood Forest who, with his band of Merry Men, robbed rich unscrupulous officials to aid and protect the poor in what might be described as a medieval form of socialism || |-id=935 | 18935 Alfandmedina || || Alfredo Andres Medina, ISEF awardee in 2003 || |-id=938 | 18938 Zarabeth || || Zarabeth Lehr Golden, ISEF awardee in 2003 || |-id=939 | 18939 Sariancel || || Sari Ancel, ISEF awardee in 2003 || |-id=943 | 18943 Elaisponton || || Elais M. Ponton, ISEF awardee in 2003 || |-id=944 | 18944 Sawilliams || || Stephanie Alexandra Williams, ISEF awardee in 2003 || |-id=946 | 18946 Massar || || Sonny Raye Massar, ISEF awardee in 2003 || |-id=947 | 18947 Cindyfulton || || Cindy Marie Fulton, ISEF awardee in 2003 || |-id=948 | 18948 Hinkle || || Athena Leah Hinkle, ISEF awardee in 2003 || |-id=949 | 18949 Tumaneng || || Karen Andres Tumaneng, ISEF awardee in 2003 || |-id=950 | 18950 Marakessler || || Marissa Rachel Kessler, ISEF awardee in 2003 || |-id=953 | 18953 Laurensmith || || Lauren Marie Smith, ISEF awardee in 2003 || |-id=954 | 18954 Sarahbounds || || Sarah Brittany Bounds, ISEF awardee in 2003 || |-id=956 | 18956 Jessicarnold || || Jessica Lynn Arnold, ISEF awardee in 2003 || |-id=957 | 18957 Mijacobsen || || Michael Thomas Jacobsen, ISEF awardee in 2003 || |-id=961 | 18961 Hampfreeman || || Thomas Hampton Freeman, ISEF awardee in 2003 || |-id=964 | 18964 Fairhurst || || Maggie Sara Fairhurst, ISEF awardee in 2003 || |-id=965 | 18965 Lazenby || || Tanya Marie Lazenby, ISEF awardee in 2003 || |-id=969 | 18969 Valfriedmann || || Valerie Star Friedmann, ISEF awardee in 2003 || |-id=970 | 18970 Jenniharper || || Jennifer Dawn Harper, ISEF awardee in 2003 || |-id=973 | 18973 Crouch || || Kegan Kade Crouch, ISEF awardee in 2003 || |-id=974 | 18974 Brungardt || || Adam Robert Brungardt, ISEF awardee in 2003 || |-id=976 | 18976 Kunilraval || || Kunil Kaushik Raval, ISEF awardee in 2003 || |-id=979 | 18979 Henryfong || || Henry Fong, ISEF awardee in 2003 || |-id=980 | 18980 Johannatang || || Johanna Tang, ISEF awardee in 2003 || |-id=983 | 18983 Allentran || || Allen Hing Tran, ISEF awardee in 2003 || |-id=984 | 18984 Olathe || || Olathe, Kansas, location of the Sunflower Observatory † || |-id=987 | 18987 Irani || || Natasha Rustom Irani, ISEF awardee in 2003 || |-id=991 | 18991 Tonivanov || || Tonislav Ivanov Ivanov, ISEF awardee in 2003 || |-id=992 | 18992 Katharvard || || Katherine Harvard, ISEF awardee in 2003 || |-id=994 | 18994 Nhannguyen || || Nhan Duy Nguyen, ISEF awardee in 2003 || |-id=996 | 18996 Torasan || || Kiyoshi Atsumi (1928–1996), Japanese actor known for his roles in the film It's Tough Being a Man and in the "Tora-san" series, of which there were 48 installments during 1969–1995.
The Tora-san series became a huge success in Japan and received a National Honor Award in 1996 || |-id=997 | 18997 Mizrahi || || Jonathan Albert Mizrahi, ISEF awardee in 2003 || |} References 018001-019000
38634705
https://en.wikipedia.org/wiki/Extreme%20programming
Extreme programming
Extreme programming (XP) is a software development methodology intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent releases in short development cycles, intended to improve productivity and introduce checkpoints at which new customer requirements can be adopted. Other elements of extreme programming include: programming in pairs or doing extensive code review, unit testing of all code, not programming features until they are actually needed, a flat management structure, code simplicity and clarity, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers. The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels. As an example, code reviews are considered a beneficial practice; taken to the extreme, code can be reviewed continuously (i.e. the practice of pair programming). History Kent Beck developed extreme programming during his work on the Chrysler Comprehensive Compensation System (C3) payroll project. Beck became the C3 project leader in March 1996. He began to refine the development methodology used in the project and wrote a book on the methodology (Extreme Programming Explained, published in October 1999). Chrysler cancelled the C3 project in February 2000, after seven years, when Daimler-Benz acquired the company. Ward Cunningham was another major influence on XP. Many extreme-programming practices have been around for some time; the methodology takes "best practices" to extreme levels. For example, the "practice of test-first development, planning and writing tests before each micro-increment" was used as early as NASA's Project Mercury, in the early 1960s. To shorten the total development time, some formal test documents (such as for acceptance testing) have been developed in parallel with (or shortly before) the software being ready for testing. A NASA independent test group can write the test procedures, based on formal requirements and logical limits, before programmers write the software and integrate it with the hardware. XP takes this concept to the extreme level, writing automated tests (sometimes inside software modules) which validate the operation of even small sections of software coding, rather than only testing the larger features. Origins Two major influences shaped software development in the 1990s: Internally, object-oriented programming replaced procedural programming as the programming paradigm favored by some developers. Externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company growth as competitive business factors. Rapidly changing requirements demanded shorter product life-cycles, and often clashed with traditional methods of software development. The Chrysler Comprehensive Compensation System (C3) started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, with Smalltalk as the language and GemStone as the data access layer. Chrysler brought in Kent Beck, a prominent Smalltalk practitioner, to do performance tuning on the system, but his role expanded as he noted several problems with the development process. 
He took this opportunity to propose and implement some changes in development practices, based on his work with his frequent collaborator, Ward Cunningham. Beck invited Ron Jeffries to the project to help develop and refine these methods. Jeffries thereafter acted as a coach to instill the practices as habits in the C3 team. Information about the principles and practices behind XP disseminated to the wider world through discussions on the original wiki, Cunningham's WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). XP concepts have also been explained, for several years, using a hypertext system map on the XP website at http://www.extremeprogramming.org, dating from about 1999. Beck edited a series of books on XP, beginning with his own Extreme Programming Explained (1999), spreading his ideas to a much larger audience. Authors in the series covered various aspects of XP and its practices. The series included a book critical of the practices. Current state XP generated significant interest among software communities in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins. The high discipline required by the original practices often went by the wayside, and some of these practices, particularly those thought too rigid, were deprecated, reduced, or left unfinished at individual sites. For example, the practice of end-of-day integration tests for a particular project could be changed to an end-of-week schedule, or simply reduced to testing on mutually agreed dates. Such a relaxed schedule could avoid people feeling rushed to generate artificial stubs just to pass the end-of-day testing, allowing instead the development of complex features over a period of several days. Meanwhile, other agile-development practices have not stood still, and XP continues to evolve, assimilating more lessons from field experience and adopting other practices. In the second edition of Extreme Programming Explained (November 2004), five years after the first edition, Beck added more values and practices and differentiated between primary and corollary practices. Concept Goals Extreme Programming Explained describes extreme programming as a software-development discipline that organizes people to produce higher-quality software more productively. XP attempts to reduce the cost of changes in requirements by having multiple short development cycles, rather than a long one. In this doctrine, changes are a natural, inescapable and desirable aspect of software-development projects, and should be planned for, instead of attempting to define a stable set of requirements. Extreme programming also introduces a number of basic values, principles and practices on top of the agile programming framework. Activities XP describes four basic activities that are performed within the software development process: coding, testing, listening, and designing. Each of those activities is described below. Coding The advocates of XP argue that the only truly important product of the system development process is code – software instructions that a computer can interpret. Without code, there is no working product. Coding can be used to figure out the most suitable solution. Coding can also help to communicate thoughts about programming problems.
A programmer dealing with a complex programming problem, or finding it hard to explain the solution to fellow programmers, might code it in a simplified manner and use the code to demonstrate what they mean. Code, say the proponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts. Testing Testing is central to extreme programming. Extreme programming's approach is that if a little testing can eliminate a few flaws, a lot of testing can eliminate many more flaws. Unit tests determine whether a given feature works as intended. Programmers write as many automated tests as they can think of that might "break" the code; if all tests run successfully, then the coding is complete. Every piece of code that is written is tested before moving on to the next feature. Acceptance tests verify that the requirements as understood by the programmers satisfy the customer's actual requirements. System-wide integration testing was initially encouraged as a daily end-of-day activity, so that incompatible interfaces could be detected early and reconnected before the separate sections diverged widely from coherent functionality. However, system-wide integration testing has since been reduced to weekly, or less often, depending on the stability of the overall interfaces in the system. Listening Programmers must listen to what the customers need the system to do, what "business logic" is needed. They must understand these needs well enough to give the customer feedback about the technical aspects of how the problem might be solved, or cannot be solved. Communication between the customer and programmer is further addressed in the planning game. Designing From the point of view of simplicity, of course one could say that system development doesn't need more than coding, testing and listening. If those activities are performed well, the result should always be a system that works. In practice, this is not enough. One can come a long way without designing, but at some point one will get stuck. The system becomes too complex and the dependencies within the system cease to be clear. One can avoid this by creating a design structure that organizes the logic in the system. Good design will avoid many dependencies within a system; this means that changing one part of the system will not affect other parts of the system. Values Extreme programming initially recognized four values in 1999: communication, simplicity, feedback, and courage. A new value, respect, was added in the second edition of Extreme Programming Explained. Those five values are described below. Communication Building software systems requires communicating system requirements to the developers of the system. In formal software development methodologies, this task is accomplished through documentation. Extreme programming techniques can be viewed as methods for rapidly building and disseminating institutional knowledge among members of a development team. The goal is to give all developers a shared view of the system which matches the view held by the users of the system. To this end, extreme programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback. Simplicity Extreme programming encourages starting with the simplest solution; extra functionality can then be added later.
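As a concrete sketch of this value, compare a speculative design with the simplest code that meets today's requirement. This is illustrative only; the price-formatting story and both function names are hypothetical, not drawn from any XP source, and Python is used purely for brevity.

```python
# Speculative design: configuration hooks and extension points that no
# customer story has asked for yet -- the habit XP's simplicity value rejects.
def format_price_speculative(amount, currency="USD", locale=None,
                             rounding=None, tax_rules=None):
    ...  # machinery for requirements that may never materialize

# XP-style design: the simplest thing that satisfies today's story.
# If a multi-currency story arrives later, the code is refactored then.
def format_price(amount):
    """Format a US-dollar price, the only case today's story requires."""
    return f"${amount:.2f}"

print(format_price(19.5))  # -> $19.50
```

The second version is trivially understood by anyone on the team, which is the point: simplicity is treated as a communication aid as much as a design aid.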
The difference between this approach and more conventional system development methods is the focus on designing and coding for the needs of today instead of those of tomorrow, next week, or next month. This is sometimes summed up as the "You aren't gonna need it" (YAGNI) approach. Proponents of XP acknowledge the disadvantage that this can sometimes entail more effort tomorrow to change the system; their claim is that this is more than compensated for by the advantage of not investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements implies the risk of spending resources on something that might not be needed, while perhaps delaying crucial features. Related to the "communication" value, simplicity in design and coding should improve the quality of communication. A simple design with very simple code could be easily understood by most programmers in the team. Feedback Within extreme programming, feedback relates to different dimensions of the system development: Feedback from the system: by writing unit tests, or running periodic integration tests, the programmers have direct feedback from the state of the system after implementing changes. Feedback from the customer: the functional tests (also known as acceptance tests) are written by the customer and the testers, who thereby get concrete feedback about the current state of the system. This review is planned once every two or three weeks, so the customer can easily steer the development. Feedback from the team: when customers come up with new requirements in the planning game, the team directly gives an estimation of the time that it will take to implement. Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break. The direct feedback from the system tells programmers to recode this part. A customer is able to test the system periodically according to the functional requirements, known as user stories. To quote Kent Beck, "Optimism is an occupational hazard of programming. Feedback is the treatment." Courage Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow. This is an effort to avoid getting bogged down in design and requiring a lot of effort to implement anything else. Courage enables developers to feel comfortable with refactoring their code when necessary. This means reviewing the existing system and modifying it so that future changes can be implemented more easily. Another example of courage is knowing when to throw code away: courage to remove source code that is obsolete, no matter how much effort was used to create that source code. Also, courage means persistence: a programmer might be stuck on a complex problem for an entire day, then solve the problem quickly the next day, but only if they are persistent. Respect The respect value includes respect for others as well as self-respect. Programmers should never commit changes that break compilation, that make existing unit-tests fail, or that otherwise delay the work of their peers. Members respect their own work by always striving for high quality and seeking the best design for the solution at hand through refactoring. Adopting the four earlier values leads to respect gained from others in the team. Nobody on the team should feel unappreciated or ignored.
This ensures a high level of motivation and encourages loyalty toward the team and toward the goal of the project. This value is dependent upon the other values, and is oriented toward teamwork. Rules The first version of rules for XP was published in 1999 by Don Wells at the XP website. Twenty-nine rules are given in the categories of planning, managing, designing, coding, and testing. Planning, managing and designing are called out explicitly to counter claims that XP doesn't support those activities. Another version of XP rules was proposed by Ken Auer at XP/Agile Universe 2003. He felt XP was defined by its rules, not its practices (which are subject to more variation and ambiguity). He defined two categories: "Rules of Engagement", which dictate the environment in which software development can take place effectively, and "Rules of Play", which define the minute-by-minute activities and rules within the framework of the Rules of Engagement. Here are some of the rules (incomplete). Coding: the customer is always available; code the unit test first; only one pair integrates code at a time; leave optimization until last; no overtime. Testing: all code must have unit tests; all code must pass all unit tests before it can be released; when a bug is found, tests are created before the bug is addressed (a bug is not an error in logic; it is a test that was not written); acceptance tests are run often and the results are published. Principles The principles that form the basis of XP are based on the values just described and are intended to foster decisions in a system development project. The principles are intended to be more concrete than the values and more easily translated to guidance in a practical situation. Feedback Extreme programming sees feedback as most useful if it is done frequently and promptly. It stresses that minimal delay between an action and its feedback is critical to learning and making changes. Unlike traditional system development methods, contact with the customer occurs in more frequent iterations. The customer has clear insight into the system that is being developed, and can give feedback and steer the development as needed. With frequent feedback from the customer, a mistaken design decision made by the developer will be noticed and corrected quickly, before the developer spends much time implementing it. Unit tests contribute to the rapid feedback principle. When writing code, running the unit test provides direct feedback on how the system reacts to the changes made. This includes running not only the unit tests that test the developer's code, but also all unit tests against all the software, using an automated process that can be initiated by a single command. That way, if the developer's changes cause a failure in some other portion of the system that the developer knows little or nothing about, the automated all-unit-test suite will reveal the failure immediately, alerting the developer to the incompatibility of their change with other parts of the system, and the necessity of removing or modifying their change.
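As an illustration of the test-first cycle and single-command suite described above, here is a minimal sketch using Python's standard unittest module. The leap-year story is a hypothetical example chosen for brevity, not something prescribed by any XP text; the tests are written before the implementation and fail until it exists.

```python
import unittest

# In XP the tests come first: they define what "done" means for this
# micro-increment, and they fail until the feature below is implemented.
class TestLeapYear(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_quadricentennial_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# The simplest implementation that makes every current test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()  # a single command runs the entire suite
```

Because one command runs every test, a change that silently breaks a distant part of the code base is reported immediately rather than weeks later during integration.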
Under traditional development practices, the absence of an automated, comprehensive unit-test suite meant that such a code change, assumed harmless by the developer, would have been left in place, appearing only during integration testing – or worse, only in production; and determining which code change caused the problem, among all the changes made by all the developers during the weeks or even months previous to integration testing, was a formidable task. Assuming simplicity This is about treating every problem as if its solution were "extremely simple". Traditional system development methods say to plan for the future and to code for reusability. Extreme programming rejects these ideas. The advocates of extreme programming say that making big changes all at once does not work. Extreme programming applies incremental changes: for example, a system might have small releases every three weeks. When many little steps are made, the customer has more control over the development process and the system that is being developed. Embracing change The principle of embracing change is about not working against changes but embracing them. For instance, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration. Practices Extreme programming has been described as having 12 practices, grouped into four areas. Fine-scale feedback: pair programming, the planning game, test-driven development, and whole team. Continuous process: continuous integration, refactoring or design improvement, and small releases. Shared understanding: coding standards, collective code ownership, simple design, and system metaphor. Programmer welfare: sustainable pace. Controversial aspects The practices in XP have been heavily debated. Proponents of extreme programming claim that by having the on-site customer request changes informally, the process becomes flexible, and saves the cost of formal overhead. Critics of XP claim this can lead to costly rework and project scope creep beyond what was previously agreed or funded. Change-control boards are a sign that there are potential conflicts in project objectives and constraints between multiple users. XP's expedited methods are somewhat dependent on programmers being able to assume a unified client viewpoint so the programmer can concentrate on coding, rather than documentation of compromise objectives and constraints. This also applies when multiple programming organizations are involved, particularly organizations which compete for shares of projects. Other potentially controversial aspects of extreme programming include: Requirements are expressed as automated acceptance tests rather than specification documents. Requirements are defined incrementally, rather than trying to get them all in advance. Software developers are usually required to work in pairs. There is no Big Design Up Front. Most of the design activity takes place on the fly and incrementally, starting with "the simplest thing that could possibly work" and adding complexity only when it's required by failing tests. Critics compare this to "debugging a system into appearance" and fear this will result in more re-design effort than only re-designing when requirements change. A customer representative is attached to the project. This role can become a single-point-of-failure for the project, and some people have found it to be a source of stress.
Also, there is the danger of micro-management by a non-technical representative trying to dictate the use of technical software features and architecture. Critics have noted several potential drawbacks, including problems with unstable requirements, no documented compromises of user conflicts, and a lack of an overall design specification or document. Scalability ThoughtWorks has claimed reasonable success on distributed XP projects with up to sixty people. In 2004, industrial extreme programming (IXP) was introduced as an evolution of XP. It is intended to bring the ability to work in large and distributed teams. It now has 23 practices and flexible values. Severability and responses In 2003, Matt Stephens and Doug Rosenberg published Extreme Programming Refactored: The Case Against XP, which questioned the value of the XP process and suggested ways in which it could be improved. This triggered a lengthy debate in articles, Internet newsgroups, and web-site chat areas. The core argument of the book is that XP's practices are interdependent but that few practical organizations are willing or able to adopt all the practices; therefore the entire process fails. The book also makes other criticisms, and it draws a likeness of XP's "collective ownership" model to socialism in a negative manner. Certain aspects of XP have changed since the publication of Extreme Programming Refactored; in particular, XP now accommodates modifications to the practices as long as the required objectives are still met. XP also uses increasingly generic terms for processes. Some argue that these changes invalidate previous criticisms; others claim that this is simply watering the process down. Other authors have tried to reconcile XP with the older methodologies in order to form a unified methodology, including some that XP sought to replace, such as the waterfall methodology; an example is "Project Lifecycles: Waterfall, Rapid Application Development (RAD), and All That". JPMorgan Chase & Co. tried combining XP with the computer programming methods of capability maturity model integration (CMMI) and Six Sigma. They found that the three systems reinforced each other well, leading to better development, and did not mutually contradict. Criticism Extreme programming's initial buzz and controversial tenets, such as pair programming and continuous design, have attracted particular criticisms, such as those from McBreen, from Boehm and Turner, and from Matt Stephens and Doug Rosenberg. Many of the criticisms, however, are believed by Agile practitioners to be misunderstandings of agile development. In particular, extreme programming has been reviewed and critiqued in Matt Stephens and Doug Rosenberg's Extreme Programming Refactored. See also Agile software development Continuous obsolescence EXtreme Manufacturing Extreme project management Extreme programming practices Kaizen List of software development philosophies Pair programming Scrum (development) Software craftsmanship Stand-up meeting Timeboxing References Further reading Ken Auer and Roy Miller. Extreme Programming Applied: Playing To Win, Addison–Wesley. Kent Beck: Extreme Programming Explained: Embrace Change, Addison–Wesley. First edition, 1999. Second edition, with Cynthia Andres, 2004. Kent Beck and Martin Fowler: Planning Extreme Programming, Addison–Wesley. Alistair Cockburn: Agile Software Development, Addison–Wesley. Martin Fowler: Refactoring: Improving the Design of Existing Code. With Kent Beck, John Brant, William Opdyke, and Don Roberts (1999). Addison–Wesley.
Harvey Herela (2005). Case Study: The Chrysler Comprehensive Compensation System. Galen Lab, U.C. Irvine. Jim Highsmith. Agile Software Development Ecosystems, Addison–Wesley. Ron Jeffries, Ann Anderson and Chet Hendrickson (2000), Extreme Programming Installed, Addison–Wesley. Craig Larman & V. Basili (2003). "Iterative and Incremental Development: A Brief History", Computer (IEEE Computer Society) 36 (6): 47–56. Matt Stephens and Doug Rosenberg (2003). Extreme Programming Refactored: The Case Against XP, Apress. Waldner, JB. (2008). "Nanocomputers and Swarm Intelligence". In: ISTE, 225–256. External links A gentle introduction Industrial eXtreme Programming Problems and Solutions to XP implementation Using an Agile Software Process with Offshore Development – ThoughtWorks' experiences with implementing XP in large distributed projects Software development philosophies Agile software development
883596
https://en.wikipedia.org/wiki/Phelix
Phelix
Phelix is a high-speed stream cipher with a built-in single-pass message authentication code (MAC) functionality, submitted in 2004 to the eSTREAM contest by Doug Whiting, Bruce Schneier, Stefan Lucks, and Frédéric Muller. The cipher uses only the operations of addition modulo 2^32, exclusive or, and rotation by a fixed number of bits. Phelix uses a 256-bit key and a 128-bit nonce, claiming a design strength of 128 bits. Concerns have been raised over the ability to recover the secret key if the cipher is used incorrectly. Performance Phelix is optimised for 32-bit platforms. The authors state that it can achieve up to eight cycles per byte on modern x86-based processors. FPGA Hardware performance figures published in the paper "Review of stream cipher candidates from a low resource hardware perspective" are as follows: Helix Phelix is a slightly modified form of an earlier cipher, Helix, published in 2003 by Niels Ferguson, Doug Whiting, Bruce Schneier, John Kelsey, Stefan Lucks, and Tadayoshi Kohno; Phelix adds 128 bits to the internal state. In 2004, Muller published two attacks on Helix. The first has a complexity of 2^88 and requires 2^12 adaptive chosen-plaintext words, but requires nonces to be reused. Souradyuti Paul and Bart Preneel later showed that the number of adaptive chosen-plaintext words of Muller's attack can be reduced by a factor of 3 in the worst case (a factor of 46.5 in the best case) using their optimal algorithms to solve differential equations of addition. In a later development, Souradyuti Paul and Bart Preneel showed that the above attack can also be implemented with chosen plaintexts (CP) rather than adaptive chosen plaintexts (ACP), with a data complexity of 2^35.64 CPs. Muller's second attack on Helix is a distinguishing attack that requires 2^114 words of chosen plaintext. Phelix's design was largely motivated by Muller's differential attack. Security Phelix was selected as a Phase 2 Focus Candidate for both Profile 1 and Profile 2 by the eSTREAM project. The authors of Phelix classify the cipher as an experimental design in its specifications, and advise that Phelix should not be used until it has received additional cryptanalysis. Phelix was not advanced to Phase 3, largely because of Wu and Preneel's key-recovery attack noted below, which becomes possible when the prohibition against reusing a nonce is violated. The first cryptanalytic paper on Phelix was a chosen-key distinguishing attack, published in October 2006. Doug Whiting has reviewed the attack and notes that while the paper is clever, the attack unfortunately relies on incorrect assumptions concerning the initialisation of the Phelix cipher. This paper was subsequently withdrawn by its authors. A second cryptanalytic paper on Phelix, titled "Differential Attacks against Phelix", was published on 26 November 2006 by Hongjun Wu and Bart Preneel. The paper is based on the same attack assumptions as the differential attack against Helix. The paper shows that if the cipher is used incorrectly (nonces reused), the key of Phelix can be recovered with about 2^37 operations, 2^34 chosen nonces and 2^38.2 chosen-plaintext words. The computational complexity of the attack is much less than that of the attack against Helix. The authors of the differential attack express concern that each plaintext word affects the keystream without passing through (what they consider to be) sufficient confusion and diffusion layers. They claim this is an intrinsic weakness in the structure of Helix and Phelix.
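For concreteness, the three primitive operations the cipher is built from (addition modulo 2^32, exclusive or, and rotation by a fixed number of bits) can be sketched as a generic add-rotate-xor (ARX) step. This is purely illustrative and is not the published Phelix round function; the critics' point above is precisely about how many such mixing layers sit between a plaintext word and the keystream.

```python
MASK32 = 0xFFFFFFFF  # Phelix operates on 32-bit words

def rotl32(x, r):
    """Rotate a 32-bit word left by a fixed number of bits."""
    return ((x << r) | (x >> (32 - r))) & MASK32

def arx_step(a, b, r):
    """One generic add-rotate-xor step: addition modulo 2^32, a
    fixed-distance rotation, and an exclusive-or. Illustrative only:
    the real Phelix rounds chain many such steps across a larger
    internal state."""
    a = (a + b) & MASK32   # addition modulo 2^32
    b = rotl32(b, r) ^ a   # fixed rotation, then XOR
    return a, b

a, b = arx_step(0x01234567, 0x89ABCDEF, 9)
```

All three operations are single instructions on common 32-bit processors, which is what gives ARX designs such as Phelix their software speed.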
The authors conclude that they consider Phelix to be insecure. References D. Whiting, B. Schneier, S. Lucks, and F. Muller, Phelix: Fast Encryption and Authentication in a Single Cryptographic Primitive (includes source code) T. Good, W. Chelton, M. Benaissa: Review of stream cipher candidates from a low resource hardware perspective (PDF) Yaser Esmaeili Salehani, Hadi Ahmadi: A Chosen-key Distinguishing Attack on Phelix, submitted to eSTREAM [withdrawn 2006-10-14] Niels Ferguson, Doug Whiting, Bruce Schneier, John Kelsey, Stefan Lucks and Tadayoshi Kohno, Helix: Fast Encryption and Authentication in a Single Cryptographic Primitive, Fast Software Encryption - FSE 2003, pp. 330–346. Frédéric Muller, Differential Attacks against the Helix Stream Cipher, FSE 2004, pp. 94–108. Souradyuti Paul and Bart Preneel, Solving Systems of Differential Equations of Addition, ACISP 2005. Full version Souradyuti Paul and Bart Preneel, Near Optimal Algorithms for Solving Differential Equations of Addition With Batch Queries, Indocrypt 2005. Full version External links eStream page on Phelix "Differential Attacks against Phelix" by Hongjun Wu and Bart Preneel Stream ciphers
62007065
https://en.wikipedia.org/wiki/Volcano%20Security%20Suite
Volcano Security Suite
Volcano Security Suite is a piece of harmful security software that disguises itself as an antispyware program. It displays false messages, alerts, and system scan results on the computer in order to scare users into paying for the full version of the rogue software. It is part of the FakeVimes family. Symptoms of infection It attempts to disable some legitimate antivirus programs. It can also hijack Internet Explorer, and it displays false alerts stating that the computer is infected with malware. Removal Volcano Security Suite can be detected and removed by certain antivirus and antispyware programs, such as SpyHunter and Malwarebytes Anti-Malware. See also Rogue security software References Rogue software
1697285
https://en.wikipedia.org/wiki/Julian%20Gollop
Julian Gollop
Julian Gollop is a British computer game designer and producer specialising in strategy games, who has founded and led Mythos Games, Codo Technologies and Snapshot Games. He is best known as the "man who gave birth to the X-COM franchise." Biography Childhood Julian Gollop was born in 1965. He came of age in Harlow, England. When he was a child, his father introduced him to many different types of games, including chess, card games, and board games. His family played games regularly, choosing to play games instead of going to see films. When he was about 14 years old, Gollop started playing more complex games like Dungeons & Dragons, SPI board games, and Avalon Hill board games. After home computers became a reality while he was in secondary school, Gollop's fascination with complex strategy games helped him recognise how computers could allow him to make and play games he enjoyed. Early career (1982 to 1988) In 1982, while he was still in secondary school, Gollop started designing and programming computer games. For £25, Gollop bought his first computer, a ZX81, from a school friend to learn programming. Even though the ZX81 only had one kilobyte of memory and no real graphics processing ability, he was "amazed" at its capabilities. His first published games were Islandia and Time Lords, which he made for the BBC Micro in 1983 with programmer Andy Greene, a school friend. Gollop subsequently upgraded to a ZX Spectrum and began creating video games like Nebula in BASIC. He recognised that his future involved computers. When Gollop went on to the London School of Economics to study sociology, he spent more time creating video games such as Chaos: The Battle of Wizards and Rebelstar than he spent studying. He created the first Rebelstar by himself as a two-player game and brought it to a publisher that had an office near his college. They wanted it to be a single-player game, something he had not made before, so Gollop created functional path-finding algorithms from scratch; the game was published and ended up doing well. Mythos Games (1988 to 2001) In 1988, he was joined by his brother, Nick Gollop, in founding Target Games, a video game development company that was subsequently renamed Mythos Games. Under the Mythos name, the Gollop brothers designed and developed computer games such as Laser Squad, X-COM: UFO Defense and X-COM: Apocalypse. Up to this time, Gollop had only made computer games for 8-bit and 16-bit home computers commonly found in Europe. It was with X-COM: UFO Defense that he first began making video games directly for MS-DOS and, later, Microsoft Windows personal computers, which at the time were sold primarily in the United States. Despite the success of these and other games, Mythos Games was forced to close in 2001 after an essential publisher was acquired by a company that withdrew commitments for The Dreamland Chronicles: Freedom Ridge, which Mythos Games was in the process of developing. Codo Technologies (2001 to 2006) After closing Mythos Games, Gollop and his brother founded Codo Technologies. They were disheartened by how mainstream publishers had treated them at Mythos Games, so they tried a different business model. The inaugural game of Codo Technologies in 2002 was Laser Squad Nemesis, a turn-based tactics game with asynchronous, multiplayer play-by-email features, which required a monthly subscription. The Gollop brothers developed only one other game, Rebelstar: Tactical Command, before Julian moved to Bulgaria with his wife in 2006.
Ubisoft Sofia (2006 to 2012) After moving to Bulgaria, Gollop began working for Ubisoft in Sofia as a game designer. He was quickly promoted to producer, eventually leading the development of Tom Clancy's Ghost Recon: Shadow Wars for the Nintendo 3DS. He then became the co-creative director of Assassin's Creed III: Liberation for the PlayStation Vita. Gollop left Ubisoft in 2012 with ideas to remake games from earlier in his career. Snapshot Games (since 2013) As of 2017, Gollop works in Sofia as the CEO and chief designer for Snapshot Games, an independent video game developer he co-founded in 2013 with David Kaye. Chaos Reborn, the studio's first game, was released by Snapshot Games in 2015. He then led his company's development of Phoenix Point, which was released in December 2019. Accolades IGN included him among the top hundred computer game creators of all time. In the X-COM reboot, XCOM: Enemy Unknown, Firaxis Games pays homage to Gollop in the form of a "Gollop Chamber" facility in the game. Jake Solomon, creative lead for this XCOM and its sequel, XCOM 2, credits Gollop for much of his success. List of computer games References External links Gameography at the Mythos Games website, via the Wayback Machine (2002) 1965 births Board game designers British founders British video game designers British video game programmers Living people People from Harlow Video game producers XCOM
30077036
https://en.wikipedia.org/wiki/TeachAids
TeachAids
TeachAids (pronounced ) is a nonprofit social enterprise that develops global health education technology products for HIV/AIDS, concussions, and COVID-19, based on an approach invented through research at Stanford University. The TeachAids software for HIV education, their first area of focus, has been cited as a model health intervention. Since the materials bypass issues of stigma, they allow HIV prevention education to be provided in communities where it has previously not been allowed. In other communities, the tutorials provide the highest learning effects and comfort rates of any tested educational approach. Their HIV products are animated, interactive software tutorials, developed for individual cultures and languages, and incorporating the voices of celebrities from each region. In India, these include national icons such as Amitabh Bachchan, Shabana Azmi, Nagarjuna and Sudeep Ssanjeev. In Botswana, they include musicians Scar and Zeus, and the former President of Botswana, Festus Mogae. TeachAids operates globally, with its software in use in more than 80 countries. Its materials are made available for free under a Creative Commons License, funded by sponsorships, grants, and donations. Backers include Barclays, Cigna, Covington & Burling, Google, Microsoft, UNICEF, and Yahoo!. History TeachAids began in 2005 as a research project at Stanford University. From 2005 to 2009, a new interdisciplinary approach to HIV/AIDS education was developed through IRB-approved research by Piya Sorcar. Key advisors included professors Shelley Goldman (Learning Sciences), Martin Carnoy (Comparative Education), Cheryl Koopman (Psychiatry), Randall Stafford (Epidemiology), and Clifford Nass (Communication). The project's goal was to find a way to address the frequently taboo subjects associated with sexual issues and HIV/AIDS specifically. One major finding was that 2D cartoon figures offered the optimal balance between comfort and clarity in visual representations of sex-related topics. On that basis, animated storyboards were created which emphasized the biological aspects of HIV transmission and used cultural euphemisms to overcome social stigma. In addition, specific pedagogical techniques (e.g., instructional scaffolding) were utilized to create a coherent conception of HIV transmission for learners, as opposed to the fragmented knowledge created by mass media campaigns. Early research versions of the software were sponsored by Time Warner, the Government of South Korea, and Neeru Khosla, and used custom illustrations drawn by Sorcar's father, award-winning animator Manick Sorcar. Pilot versions were subsequently created in English, Hindi, Kinyarwanda, Mandarin, and Spanish. Additional experts contributed to the design and evaluation of the materials, including Stanford professors David Katzenstein (Infectious Disease), Douglas Owens (Medicine), and Roy Pea (Learning Sciences). TeachAids was spun out of Stanford in 2009 as an independent 501(c)(3) organization, co-founded by Piya Sorcar, Clifford Nass, Shuman Ghosemajumder, and Ashwini Doshi. It began developing its infrastructure and new versions of its software for additional countries and languages around the world. The first additional versions of the software, in Indian English, Telugu, and Tswana, were launched in 2010. Celebrity partners The TeachAids interactive software uses animated avatars of cultural icons to improve pedagogical efficacy.
Over time, numerous international actors, musicians, and celebrities have lent their voices and likenesses to the TeachAids materials. These include:

Amitabh Bachchan
Amol Palekar
Anu Choudhury
Anu Prabhakar
Anushka Shetty
Jayanthi
Imran Khan
Moloya Goswami
Nagarjuna Akkineni
Navdeep
Prashanta Nanda
Swati Reddy
Shabana Azmi
Shruti Haasan
Siddharth
Sudeep
Suhasini Maniratnam
Suriya
Vijay Raghavendra
Zerifa Wahid
Zeus

The TeachAids advisory board includes film director Mahesh Bhatt, HIV/AIDS treatment pioneer Nimmagadda Prasad, Global Fund for Women founder Anne Firth Murray, and former President of Botswana Festus Mogae. Actress Amala Akkineni is a trustee of TeachAids in India. In 2020, Kate Courtney starred in a concussion education video for their CrashCourse virtual reality series.

Worldwide use

The TeachAids tutorials are available for free online and are used in more than 80 countries around the world, distributed by over 200 partner organizations. Numerous AIDS service organizations, AIDS education and training centers, NGOs, and government agencies distribute and utilize the tutorials as part of their own HIV/AIDS prevention efforts. Some of the organizations partnered with TeachAids include CARE, the Elizabeth Glaser Pediatric AIDS Foundation, and the U.S. Peace Corps.

In India, the National AIDS Control Organisation approved the TeachAids materials in January 2010, marking the first time HIV/AIDS education could be provided decoupled from sex education. Later that year, the Government of Karnataka approved the materials for its state of 50 million and committed to distributing them in 5,500 government schools. In Assam, Chief Minister Tarun Gogoi helped launch TeachAids. Odisha, Andhra Pradesh, and other Indian states have also joined with official support and distribution.

In Botswana, the TeachAids tutorials were adopted nationally as the standard method for HIV/AIDS education. In 2011, the Ministry of Education began distributing the tutorials to every primary, secondary, and tertiary educational institution in the country, reaching all learners from 6 to 24 years of age nationwide. June 15 in Botswana was declared "National TeachAIDS Day".

In the United States, the Stanford Program on International and Cross-Cultural Education distributes the tutorials on CD along with a custom educator handbook, both of which are made available at no cost.

The creation of TeachAids has been cited as an important innovation in achieving the United Nations Millennium Development Goal for combating the spread of HIV/AIDS. In 2012, TeachAids was named one of 12 global laureates by The Tech Awards, referred to as the "Nobel prize of tech philanthropy".

See also

AIDS education and training centers
HIV/AIDS in Africa
HIV/AIDS in India
Sex education

References

External links

TeachAids official website
TeachAids official channel on YouTube
TeachAids software versions on the Internet Movie Database

Companies based in Palo Alto, California
Educational software companies
Free and open-source software organizations
HIV/AIDS prevention organizations
International charities
Charities based in California
Organizations established in 2009
Science education software
Social enterprises
Health charities in the United States
Medical and health organizations based in California
10470189
https://en.wikipedia.org/wiki/Bangabasi%20Morning%20College
Bangabasi Morning College
{{Infobox university
|image_name = Bangabasi Morning College - Logo.png
|image_size = 150px
|name = Bangabasi Morning College
|motto = प्रणिप्रातेन परिप्रशनेन सेवया (Bhagvad Gita) (Earn Education & Serve the Humanitarian)
|established = 
|type = Public
|principal = Dr. Sandeep Sinha
|vice_chancellor = 
|city = Kolkata
|state = West Bengal
|country = India
|undergrad = 3000+
|postgrad = 20+
|campus = Urban; 2 campuses
|free_label = Recognition
|free = NAAC A Level
|website = www.bangabasimorning.edu.in
}}

Bangabasi Morning College is an undergraduate college affiliated with the University of Calcutta. It is located at Sealdah in the heart of the city of Kolkata. It has a large auditorium named the P.K. Bose Memorial Hall.

Accreditation

Bangabasi Morning College was re-accredited with Grade "A" by NAAC in December 2016.

Cultural Programmes

The college hosts many cultural programmes, including the regional cultural programmes of West Bengal. The main programmes organized by the students are:

Rabindra Jayanti on 9 May
Netaji's birthday on 23 Jan
Independence Day on 15 Aug
Founders Day on 27 Sept
Saraswati Puja
Bengali New Year on 15 Apr
Republic Day on 26 Jan

Every year the college publishes an annual magazine enriched with contributions from members of the staff and the students on various subjects. It encourages the students to contribute their own articles for publication. The magazine helps to kindle the creative talents of the students.

Annual Social Day

Annual Social Day is celebrated every year on the college campus. Every year a freshers' welcome party is held to welcome newly admitted students. In 2013 an inter-college quiz competition was organized, in which a student named Abhishek Tiwari won first prize. Every year the college organizes a blood donation camp, donating blood to NGOs and related charitable trusts.

Faculty

History: Dr. Sandeep Sinha (Ph.D., M.A., Principal)
Anthropology: Dr. P. Sarkar (M.Sc., Ph.D., Assistant Professor)
Hindi: Dr. Ausotosh Kumar (M.A., Ph.D., Assistant Professor)
Zoology: Dr. Sreejata Biswas (M.Sc., Ph.D., B.Ed., Assistant Professor)
Botany: Dr. Shyamali Mazumdar (M.Sc., L.L.B., Ph.D., Associate Professor)
Bengali: Dr. Madan Chandra (M.A., M.Phil., Ph.D., Assistant Professor)
Chemistry: Dr. Amitabh Dutta (M.Sc., Ph.D., Assistant Professor)
Computer Science: Sri Subhrat Dinda (M.C.A., M.Tech.)
Mathematics: Dr. Sujata Sinha (M.A., Ph.D., Assistant Professor)
Philosophy: Smt Sukla Sarkar (M.A., Assistant Professor)
Physics: Dr. Mukul Mitra (M.Sc., Ph.D., Associate Professor)
Librarian: Smt Shila Ghosh (M.A., B.Lib., I.Sc.)

Courses

The college presently offers honours degree courses in the following:

Accounting & Finance
Bengali
Biological Science
Botany
Computer Science
Economics
English
Mathematics
Physics
Political Science
Chemistry

It is one of the few centers of study of Urdu at the undergraduate level, and grants degrees including B.Sc., B.Com., and Bachelor of Arts.

Undergraduate courses
B.Sc. (Pure)
Honours: Physics, Chemistry, Mathematics, Computer Science
General: Physics, Mathematics, Chemistry, Computer Science

B.Sc. (Bio)
Honours: Botany, Zoology, Anthropology
General: Zoology, Botany, Chemistry

B.A.
Honours: Bengali, English, Political Science, Hindi
General: Elective Bengali, Elective English, Elective Hindi, Elective Urdu, Political Science, History, Philosophy, Economics, Geography

B.Com.
Honours: Accounting & Finance
General: All compulsory subjects under the B.Com. syllabus

Facilities

The college has a large campus and provides the facilities students need. The main facilities are:

Common Room
Students' Union Room
Library
Canteen
Computer Lab
Laboratories (Physics, Chemistry, Zoology, Botany, Anthropology, Computer Science)
College Auditorium
Sports Club
N.C.C. Room

Alumni

Jatindra Nath Das – independence activist
Sanjib Sarkar – music director

See also

Bangabasi College
Bangabasi Evening College

References

External links

1965 establishments in West Bengal
University of Calcutta affiliates
Educational institutions established in 1965
313899
https://en.wikipedia.org/wiki/RSTS/E
RSTS/E
RSTS () is a multi-user time-sharing operating system, initially developed by Evans Griffiths & Hart of Boston, and acquired by Digital Equipment Corporation (DEC, now part of Hewlett Packard) for the PDP-11 series of 16-bit minicomputers. The first version of RSTS (RSTS-11, Version 1) was implemented in 1970 by DEC software engineers who developed the TSS-8 time-sharing operating system for the PDP-8. The last version of RSTS (RSTS/E, Version 10.1) was released in September 1992. RSTS-11 and RSTS/E are usually referred to just as "RSTS" and this article will generally use the shorter form.

Acronyms and abbreviations

BTSS (Basic Time Sharing System – never marketed) – The first name for RSTS.
CCL (Concise Command Language) – The equivalent of a command to run a program, kept in the Command Line Interpreter.
CIL (Core Image Library) – Similar to a shared library (.so) on Linux or a .DLL on Microsoft Windows.
CILUS (Core Image Library Update and Save) – Program to manipulate a CIL file.
CLI (Command Line Interpreter) – See Command-line interface.
CUSPs (Commonly Used System Programs) – System management applications, like Task Manager or Registry Editor on Microsoft Windows. On RSTS-11, CUSPs were written in BASIC-Plus just like user programs.
DCL (Digital Command Language) – See DIGITAL Command Language.
DTR (DATATRIEVE) – Programming language.
FIP (File Information Processing) – Resident area for issuing file requests.
FIRQB (File Information Request Queue Block) – A data structure containing information about file requests.
KBM (Keyboard Monitor) – Analogous to a Command Line Interpreter.
LAT (Local Area Transport) – Digital's predecessor to TCP/IP.
MFD (Master File Directory) – Root directory of the file system.
PBS (Print Batch Services)
PIP (Peripheral Interchange Program)
PPN (Project Programmer Number) – Analogous to GID and UID in Unix.
RDC (Remote Diagnostics Console) – A replacement front panel for a PDP-11 which used a serial connection to the console terminal or a modem, instead of lights and toggle switches, to control the CPU.
RSTS-11 (Resource Sharing Time Sharing System) – The first commercial product name for RSTS.
RSTS/E (Resource Sharing Timesharing System Extended) – The current implementation of RSTS.
RTS (Run Time System) – A read-only segment of code provided by the supplier which would be mapped into the high end of a 32K, 16-bit word address space, used by a user program to interface with the operating system. Only one copy of an RTS would be loaded into RAM, but it would be mapped into the address space of any user program that required it; in essence, shared, re-entrant code, reducing RAM requirements by sharing the code between any programs that required it.
RTSS (Resource Time Sharing System – never marketed) – The second name for RSTS.
SATT (Storage Allocation Truth Table) – A series of 512-byte blocks on every disk indicating whether each block, or cluster, on the disk was allocated. Bitwise, a 1 indicated a cluster was in use; a 0 indicated it was not in use.
SIL (Save Image Library) – The new name for a CIL file after DEC started selling PDP-11 systems with all semiconductor memory and no magnetic-core memory, such as the PDP-11T55.
SILUS (Save Image Library Update and Save) – The new name for CILUS after CIL files were renamed SIL files.
UFD (User File Directory) – A user's home directory. Root directory of a file system.
XRB (Transfer Request Block) – A data structure containing information about other types of system requests that do not use FIRQBs to convey the information.

Development

1970s

The kernel of RSTS was programmed in the assembly language MACRO-11, compiled and installed to a disk using the CILUS program, running on a DOS-11 operating system. RSTS booted into an extended version of the BASIC programming language which DEC called "BASIC-PLUS". All of the system software CUSPs for the operating system, including the programs for resource accounting, login, logout, and managing the system, were written in BASIC-PLUS. From 1970 to 1973, RSTS ran in only 56K bytes of magnetic core memory (64 kilobytes including the memory-mapped I/O space). This would allow a system to have up to 16 terminals with a maximum of 17 jobs. The maximum program size was 16K bytes. By the end of 1973 DEC estimated there were 150 licensed systems running RSTS.

In 1973 memory management support was included in RSTS (now RSTS/E) for the newer DEC PDP-11/40 and PDP-11/45 minicomputers (the PDP-11/20 was only supported under RSTS-11). The introduction of memory management in the newer PDP-11 computers not only meant these machines were able to address four times the amount of memory (18-bit addressing, 256K bytes), it also paved the way for the developers to separate user mode processes from the core of the kernel.

In 1975 memory management support was again updated for the newer 22-bit addressable PDP-11/70. RSTS systems could now be expanded to use as much as two megabytes of memory running up to 63 jobs. The RTS and CCL concepts were introduced, although they had to be compiled in during "SYSGEN". Multi-terminal service was introduced, which allowed a single job to control multiple terminals (128 total). Large-message send/receive and interprocess communication became very sophisticated and efficient. By August there were 1,200 licensed systems.

In 1977 the installation process for RSTS was no longer dependent on DOS-11. The RSTS kernel could now be compiled under the RT-11 RTS, formatted as a kernel file with RT-11 SILUS, and copied to the system or other disks, while the computer was time-sharing. The BASIC-PLUS RTS (as well as RT-11, RSX-11, TECO and third-party RTSs) all ran as user mode processes, independent of the RSTS kernel. A systems manager could now decide during the bootstrap phase which RTS to run as the system's default KBM. By now, there were some 3,100 licensed systems.

In 1978 the final memory management update was included for all machines that could support 22-bit addressing. RSTS could now use the maximum amount of memory available to a PDP-11 (4 megabytes). Support was also included for SUPERVISORY mode, which made RSTS the first DEC operating system with this capability. DECnet was also supported, as well as remote diagnostics from field service technicians at the RDC in Colorado Springs, Colorado (a DEC subscription service). By the end of the decade, there were over 5,000 licensed systems.

1980s

In 1981 support for separate instruction and data space for users with Unibus machines (PDP-11/44, PDP-11/45, PDP-11/55 and PDP-11/70) provided an extension to the memory constraints of an individual program. Compiling programs to use separate instruction and data space would soon give a program up to 64 kB for instructions, and up to 64 kB for buffering data. The DCL RTS was included, as well as support for the newer revision of DECnet, DECnet III.
By 1983, with an estimated 15,000 DEC machines running RSTS/E, V8.0-06 included support for the smallest 18-bit PDP-11 sold by DEC (the MicroPDP-11). A pre-generated kernel and CUSPs were included in this distribution to make installation on the MicroPDP-11 easier. DEC sold the pre-generated version on the MicroPDP-11 as MicroRSTS at a reduced price; however, users needed to purchase the full version if they had a need to generate their own kernel. The file system was upgraded and given the designation RSTS Directory Structure 1 (RDS1). All previous versions of the RSTS file system were given the designation RDS0. The newer file system was designed to support more than 1700 user accounts. "It is now thought that there are well over 10,000 licensed users and at least an equal number of unlicensed users!"

From 1985 to 1989 RSTS became a mature product in the Version 9 revisions. DCL was installed as the primary RTS and the file system was again upgraded (now RDS1.2) to support new user account features. Passwords were now encrypted using a modified DES algorithm, instead of being limited to six (6) characters stored in DEC Radix-50 format. Before Version 9, there was a non-user system account in project (group) zero (designated [0,1]), and all accounts in project number 1 were privileged (not unlike the root account on Unix systems). After Version 9 was released, additional accounts could be created for project zero, and multiple privileges could be individually set for any account. Support for the LAT protocol was included, as well as the ability to run the newest version of DECnet IV. These network enhancements gave any user connected to a terminal through a DECserver the ability to communicate with a RSTS machine just as easily as they could with a VAX running VMS. The DCL command structure between DEC operating systems also contributed to the familiar look and feel:

This is not just another pseudo command file processor; it is based on VMS features. The DCL command file processor is fully supported and integrated in RSTS through extensive changes to DCL and the monitor. DCL executes command files as part of your job; therefore, no pseudo keyboard or forcing of commands to your keyboard is necessary (as with ATPK).

1990s

In 1994 DEC sold most of its PDP-11 software business to Mentec. Digital continued to support its own PDP-11 customers for a short period afterwards with the assistance of Mentec staff.

In 1997 Digital and Mentec granted anyone wishing to use RSTS 9.6 or earlier for non-commercial, hobby purposes a no-cost license. The license is only valid on the SIMH PDP-11 emulator. The license also covers some other Digital operating systems. Copies of the license are included in the authorized software kit available for download on the official website of the SIMH emulator.

Documentation

The standard complement of documentation manuals that accompanies a RSTS distribution consists of at least 11 large three-ring binders (collectively known as "The orange wall"), one small three-ring binder containing the RSTS/E Quick Reference Guide, and a paperback copy of Introduction to BASIC (AA-0155B-TK).
Each of the 11 three-ring binders contains:

Volume 1: General Information and Installation
  Documentation Directory
  Release Notes
  Maintenance Notebook
  System Installation and Update Guide
Volume 2: System Management
  System Manager's Guide
Volume 3: System Usage
  System User's Guide
  Guide to Writing Command Procedures
Volume 4: Utilities
  Utilities Reference Manual
  Introduction to the EDT Editor
  SORT/MERGE User's Guide
  RUNOFF User's Guide
Volume 4A: Utilities
  EDT Editor Manual
Volume 4B: Utilities
  Task Builder Reference Manual
  Programmer's Utilities Manual
  RT11 Utilities Manual
  TECO User's Guide
Volume 5: BASIC-PLUS
  BASIC-PLUS Language Manual
Volume 6: System Programming
  Programming Manual
Volume 7: MACRO Programming
  System Directives Manual
  ODT Reference Manual
Volume 7A: MACRO Programming
  MACRO-11 Language Manual
  RMS-11 MACRO Programmer's Guide
Volume 8: RMS
  RMS-11: An Introduction
  RMS11 User's Guide
  RMS-11 Utilities

Operation

Communication

RSTS uses a serial communication connection to interact with the operator. The connection might be a local computer terminal with a 20 mA current loop interface, an RS-232 interface (either a local serial port or a remote connection via modem), or an Ethernet connection utilizing DECnet or LAT. As many as 128 terminals (using multi-terminal service) could connect to a RSTS system, running under a maximum of 63 jobs (depending on the processor being used, the amount of memory and disk space, and the system load). Most RSTS systems had nowhere near that many terminals. Users could also submit jobs to be run in batch mode. There was also a batch program called "ATPK" that allowed users to run a series of commands on an imaginary terminal (pseudo-terminal) in semi-interactive mode, similar to batch commands in MS-DOS.

Login [Project, Programmer]

Users connected to the system by typing the LOGIN command (or HELLO) at a logged-out terminal and pressing return. Actually, typing any command at a logged-out terminal simply started the LOGIN program, which then interpreted the command. If it was one of the commands allowed for a user who is not yet logged in ("logged out"), then the associated program for that command was CHAINed to; otherwise the message "Please say HELLO" was printed on the terminal. Prior to Version 9, a user could also initiate a one-line login; however, this left the user's password on the screen for anyone else in the room to view (examples follow):

Bye
HELLO 1,2;SECRET
Ready

or

I 1,2;SECRET
Ready

or

LOGIN 1,2;SECRET
Ready

One could determine the status of a terminal from the command responses printed by the command interpreter. A logged-in user communicating with the BASIC-PLUS KBM was given the prompt "Ready", and a user who was logged out was given the prompt "Bye".

A user would log in by supplying their PPN and password. User numbers consisted of a project number (the equivalent of a group number in Unix), a comma, and a programmer number. Both numbers were in the range of 0 to 254, with special exceptions. When specifying an account, the project and programmer number were enclosed in brackets. A typical user number could be [10,5] (project 10, programmer 5), [2,146], [254,31], or [200,220], etc. When a user was running a system program while logged out (because the system manager had enabled it), their PPN was [0,0], which would appear in the SYSTAT CUSP as **,**; [0,0] is therefore not a valid account number.
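The account-numbering rules above are simple enough to capture in a few lines of code. The following Python sketch is purely illustrative (the helper names are invented, and nothing like it ships with RSTS); it encodes only the rules just described: both halves of a PPN run from 0 to 254, [0,0] marks a logged-out user running a system program, and before Version 9 every account in project 1 was privileged.

def format_ppn(project: int, programmer: int) -> str:
    """Render a PPN the way RSTS displays it, e.g. [10,5]."""
    for n in (project, programmer):
        if not 0 <= n <= 254:
            raise ValueError("project and programmer numbers run 0 to 254")
    if project == 0 and programmer == 0:
        # How SYSTAT shows a logged-out user running a system program.
        return "**,**"
    return f"[{project},{programmer}]"

def is_privileged(project: int, programmer: int, pre_v9: bool = True) -> bool:
    """Before Version 9, every account in project 1 was privileged."""
    return pre_v9 and project == 1

print(format_ppn(10, 5))      # prints [10,5]
print(is_privileged(1, 200))  # prints True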
System and user accounts

In every project, the programmer number 0 was usually reserved as a group account, as it could be referenced by the special symbol #. If one's user number were [20,103], a reference to a file name beginning with "#" would refer to a file stored in the account of the user number [20,0]. This feature was useful in educational environments, as programmer number 0 could be issued to the instructor of a class, the individual students given accounts with the same project number, and the instructor could store in his account files marked as shared only for that project number (which would be the students in that class only, and no others).

Two special classes of project numbers existed. Project number 0 was generally reserved for system software, and prior to Version 9 there was only one project 0 account (named [0,1]). Accounts in project number 1 were privileged, equivalent to the single account "root" on Unix systems, except that the account numbers [1,0] through [1,254] were all privileged accounts. After Version 9 was released, any account could be granted specific privileges by the systems manager.

The account [0,1] is used to store the operating system file itself, all run-time library systems, and certain system files relating to booting the system (explanatory comments appear on the right):

DIR [0,1]

Name .Ext     Size   Prot    Date       SY:[0,1]
BADB .SYS        0P  < 63> 06-Jun-98    List of bad blocks
SATT .SYS        3CP < 63> 06-Jun-98    Bitmap of allocated disk storage
INIT .SYS      419P  < 40> 06-Jun-98    Operating system loader program
ERR  .ERR       16CP < 40> 06-Jun-98    System error messages
RSTS .SIL      307CP < 60> 06-Jun-98    Operating system itself
BASIC.RTS       73CP < 60> 06-Jun-98    BASIC-PLUS run time system
RT11 .RTS       20C  < 60> 06-Jun-98    RT-11 run time system
SWAP .SYS     1024CP < 63> 06-Jun-98    System swap file
CRASH.SYS       35CP < 63> 06-Jun-98    System crash dump
RSX  .RTS       16C  < 60> 23-Sep-79    RSX-11 run-time system
TECO .RTS       39C  < 60> 24-Sep-79    TECO text editor

Total of 1952 blocks in 11 files in SY:[0,1]

(Editor's note: this directory listing predates Version 9.)

The DIR command is an installed CCL equivalent to a RUN command for the DIRECT program. [0,1] is the account number (and directory name) of the operating system storage account; it would be referred to as "project number 0, programmer number 1". The numbers shown after each file represent its size in disk blocks, a block being 512 bytes or 1/2 kilobyte. "C" indicates the file is contiguous (stored as one piece without being separated, similar to files on a Microsoft Windows system after a drive has been defragmented), while "P" indicates it is specially protected (cannot be deleted, even by a privileged user, unless the P bit is cleared by a separate command). The numbers in brackets (like "< 40>") represent the protections for the file, which are always displayed in decimal. Protections indicate whether the file may be seen by any other user, by other users with the same programmer number, whether the file is read-only or may be altered by another user, and whether the file may be executed by an ordinary user, giving them additional privileges. These protection codes are very similar to the r, w and x protections in Unix and similar operating systems such as BSD and Linux. Code 60 is equivalent to a private file, code 63 is a private non-deletable file, and 40 is a public file.

Library files are kept in account [1,1], which is usually referenced by the logical name LB:.
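Those protection codes lend themselves to a small lookup table. Here is a minimal, purely illustrative Python sketch covering only the three decimal codes documented above; the real RSTS protection byte encodes more distinctions than this.

# Map the decimal protection codes from a DIR listing to the meanings
# given in the text above. Anything else is out of scope for this sketch.
PROTECTION_MEANINGS = {
    40: "public file",
    60: "private file",
    63: "private, non-deletable file (P bit set)",
}

def describe_protection(code: int) -> str:
    return PROTECTION_MEANINGS.get(code, "not covered in this sketch")

for code in (40, 60, 63):
    print(f"<{code:3}> {describe_protection(code)}")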
The account [1,2] is the system startup account (much like a Unix system starting up under root), and contains the system CUSPs, which could be referenced by prefixing the CUSP name with a dollar sign ($). "!" is used for account [1,3], "%" for [1,4] and "&" for [1,5]. The account [1,1] also had the special privilege of being the only account from which a logged-in user was permitted to execute the POKE system call to put values into any memory in the system. Thus the account number [1,1] is the closest equivalent to "root" on Unix-based systems.

Run-time environments

One of the features of RSTS is the variety of environments it provides for executing programs. The various environments allowed for programming in BASIC-PLUS, the enhanced BASIC-Plus-2, and more traditional programming languages such as COBOL and FORTRAN. These environments were separate from each other, such that one could start a program from one environment, have the system switch to a different environment while running a different program, and then return the user to the original environment they started with. Each of these environments was referred to as an RTS, and the command line interface that most RTSs provided was called the KBM. Prior to Version 9, the systems manager needed to define which RTS the system would start under, and it had to be one that would execute compiled programs.

A systems manager may also install special CCL commands, which take precedence over all KBM commands (with the exception of DCL). A CCL is analogous to a shortcut to a program on a Windows system or a symbolic link on Unix-based systems. CCLs are installed as memory-resident commands, either during startup or dynamically while the system is running by the systems manager (i.e., they are not permanent like a disk file).

When logged in, a user can "SWITCH" to any of these environments, type language statements in the BASIC-PLUS programming language, issue RUN commands to specific programs, or issue a special command called a CCL to execute a program with command options. Most RSTS systems managers generated the kernel to include the "Control-T" one-line status option, which could tell the user what program they were running, under what RTS, how much memory the program was taking, how much it could expand to, and how much memory the RTS was using.

BASIC-PLUS

Programs written in BASIC-PLUS ran under the BASIC RTS, which allowed them up to 32K bytes of memory (out of 64K total). The language was interpreted, each different keyword being internally converted to a unique byte code and the variables and data being indexed and stored separately within the memory space. The internal byte-code format was known as PCODE; when the interactive SAVE command was issued, the BASIC-PLUS RTS simply saved the working memory area to a disk file with a ".BAC" extension. Although this format was undocumented, two Electronic Engineering undergraduates from Southampton University in the UK (Nick de Smith and David Garrod) developed a decompiler that could reverse engineer BAC files into their original BASIC-PLUS source, complete with original line numbers and variable names (both subsequently worked for DEC). The rest of the memory was used by the BASIC RTS itself. If one wrote programs in a language that permitted true binary executables, such as BASIC-Plus-2, FORTRAN-IV, or Macro Assembler, then the amount of memory available would be 56K (8K allocated to the RTS).
The standard BASIC-PLUS prompt is the "Ready" response; pressing Control-T displays a one-line status.

DCL (Digital Command Language)

Starting with Version 9, DCL became the primary startup RTS, even though it does not have the ability to execute binary programs. This became possible with the advent of the disappearing RSX RTS (see below). DCL was incorporated into all of the recent versions of DEC's operating systems (RSX-11, RT-11, VMS, and later OpenVMS) for compatibility. The standard DCL prompt is the dollar sign "$" (example):

$ write 0 "Hello World, it is "+F$TIME()
Hello World, it is 01-Jan-08 10:20 PM
$ inquire p1 "Press Control-T for 1 line status:"
Press Control-T for 1 line status:
1 KB0 DCL+DCL KB(0R) 4(8)K+24K 0.1(+0.1) -8
$ set verify/debug/watch
$ show memory
(show memory)
(SYSTAT/C)
Memory allocation table:
 Start   End    Length  Permanent  Temporary
   0K -   85K (   86K)  MONITOR
  86K - 1737K ( 1652K)  (User)
1738K - 1747K (   10K)  (User)     DAPRES LIB
1748K - 1751K (    4K)  (User)     RMSRES LIB
1752K - 2043K (  292K)  ** XBUF **
2044K - *** END ***
$

RSX (Realtime System eXecutive)

Programs that were written for the RSX RTS, such as COBOL, Macro Assembler, or later releases of BASIC-Plus-2, could utilize the maximum amount of memory available for a binary program (56K, due to the requirement that the RTS use the top 8K for itself). RSTS Version 7 and later allowed the RSX RTS to be included in the kernel, making it completely "disappear" from the user address space and thus allowing 64K bytes of memory for user programs. Programs got around the limitations of the amount of available memory by using libraries (when permissible), by complicated overlay strategies, or by calling other programs ("Chaining") and passing them commands in a shared memory area called "Core Common", among other practices. When RSX is the default KBM, the standard RSX prompt (both logged in and logged out) is the ">" (or MCR, "Monitor Console Routine") sign (example):

>run
Please type HELLO
>HELLO 1,2;SECRET
>run
?What?
>help
Valid keyboard commands are:
ASSIGN    DISMOUNT  HELP      RUN     UNSAVE
BYE       EXIT      MOUNT     SHUTUP
DEASSIGN  HELLO     REASSIGN  SWITCH
>run CSPCOM
CSP>HWORLD=HWORLD
CSP>^Z
>RUN TKB
TKB>HWORLD=HWORLD,LB:CSPCOM.OLB/LB
TKB>//
>run HWORLD.TSK
Hello World
Press Control-T for 1 line status: ?
1 KB0 HWORLD+...RSX KB(0R) 7(32)K+0K 0.8(+0.2) +0
>DIR HWORLD.*/na/ex/si/pr
SY:[1,2]
HWORLD.BAS    1   < 60>
HWORLD.BAC    7C  <124>
HWORLD.OBJ    2   < 60>
HWORLD.TSK   25C  <124>

Total of 35 blocks in 4 files in SY:[1,2]
>

RT-11

The RT-11 RTS emulated the Single Job version of the RT-11 distribution. Like the RSX emulation, RT-11 occupied the top 8K of memory, leaving the bottom 56K for CUSPs and programs written in FORTRAN-IV or Macro Assembler. When RT-11 is the default KBM, the standard RT-11 prompt (both logged in and logged out) is the "." sign (example):

.VERSION
Please type HELLO
.HELLO 1,2;SECRET
.VERSION
RT-11SJ V3-03; RSTS/E V8.0
.R PIP
*HWORLD.MAC=KB:
        .MCALL  .TTYIN,.PRINT,.EXIT
HWORLD: .ASCII  /Hello World/<15><12>
        .ASCIZ  /Press Control-T for 1 line status:/
        .EVEN
Start:  .PRINT  #HWORLD
        .TTYIN
        .EXIT
        .END START
^Z
*^Z
.R MACRO
HWORLD=HWORLD
*^Z
.R LINK
*HWORLD=HWORLD
*^Z
.R HWORLD.SAV
Hello World
Press Control-T for 1 line status:
1 KB0 HWORLD+RT11 KB(0R) 2(28)K+4K 0.6(+0.2) +0
.
.DIR HWORLD.*/na/ex/si/pr
SY:[1,2]
HWORLD.BAS    1   < 60>
HWORLD.BAC    7C  <124>
HWORLD.TSK   25C  <124>
HWORLD.MAC    1   < 60>
HWORLD.OBJ    1   < 60>
HWORLD.SAV    2C  <124>

Total of 37 blocks in 6 files in SY:[1,2]
.
TECO (Text Editor and COrrector)

The TECO editor was itself implemented as an RTS to maximize the amount of memory available for the editing buffer, and also because it was first implemented in RSTS V5B, before the release of the general purpose runtime systems (RSX and RT11). TECO was the only RTS distributed with RSTS that did not contain a built-in KBM. The user would start up TECO (like any other program) by running a TECO program (TECO.TEC). TECO and the affine QEDIT were the direct ancestors of the first UNIX-based text editor, ED. Most RSTS systems used CCLs to create a file (MAKE filespec), edit a file (TECO filespec), or run a TECO program (MUNG filespec, data). The following program is an example of how TECO could be used to calculate pi (currently set to 20 digits):

Ready

run TECO

*GZ0J\UNQN"E 20UN '
BUH BUV HK
QN< J BUQ QN*10/3UI
QI< \ +2*10+(QQ*QI)UA B L K QI*2-1UJ QA/QJUQ QA-(QQ*QJ)-2\ 10@I// -1%I >
QQ/10UT QH+QT+48UW QW-58"E 48UW %V ' QV"N QV^T ' QWUV QQ-(QT*10)UH >
QV^T @^A/ /HKEX$$
31415926535897932384

Ready

RSTS easter eggs

System start-up (INIT.SYS)

If a user typed an unrecognised command at system boot to the "Option:" prompt of INIT.SYS, the startup utility, the message "Type 'HELP' for help" was displayed. If the user subsequently typed 'HELP' (including the quotes) to the prompt, the response was "How amusing, anyway..." followed by the actual help message.

PDP-11 console lights

One of the nicer features a system manager could compile into the kernel was a rotating display pattern that gave the illusion of two snakes chasing each other around the console lights. The normal kernel gave the illusion of one snake moving from right to left in the data lights across the bottom. If the system manager also compiled in the "lights" object module, the user would see an additional snake moving from left to right in the address lights across the top. This was accomplished by using supervisory mode in the versions prior to 9.0. RSX also had a similar display pattern that appeared as if two snakes were playing chicken and would run into each other in the center of the console.

TECO easter egg

The command 'make' allowed a user to make a text file and automatically enter the TECO text editor. If a user typed 'make love', the system created a file called 'love' and typed back, 'Not War?'

Open Files List

Kevin Herbert, later working for DEC, added an undocumented feature in the 1990s allowing a user to enter ^F to see a list of open files the user's process had, complete with blocks in use and file sizes.

Stardate

Beginning with version 9.0, an undocumented feature allowed the system manager to change the display of the system date. RSTS thereby became the first operating system that would display the system date as a set of numbers representing a stardate, as known from the TV series Star Trek.

Add-ons by other companies

System Industries bought the only source license for RSTS to implement an enhancement called SIMACS (SImultaneous Machine ACceSs), which allowed their special disk controller to set a semaphore flag for disk access, allowing multiple writes to the same files on a RSTS system whose disk was shared by multiple PDP-11 RSTS systems. This feature was implemented in System Industries controllers attached to many DEC computers, and was designed by Dr. Albert Chu while he worked at System Industries. The main innovation was the use of a semaphore, a flag to indicate which processor, by cooperative sharing, had exclusive write access.
This required many changes to the way access to disks was accomplished by the RSTS operating system. The FIPS (File Information Processing System) system, which handled I/O access, was single-threaded in RSTS. Allowing a disk access to stall while another machine had active access to a block required that FIPS could time out a request, go to the next request, and 'come back' to the stalled one in a round-robin fashion. The code to allow this was written by Philip Hunt while working at System Industries in Milpitas, California. He eventually worked for Digital Equipment in the New England area in the late 1980s and early 1990s. SIMACS was not limited to the PDP-11 product line; VAXen could also use it.

RSTS emulations

ROSS/V

In 1981, Evans Griffiths & Hart marketed the ROSS/V product. ROSS/V allowed all user-mode processes of RSTS (CUSPs, RTSs and user programs) to run unmodified under VMS on VAX-11 machines. The code for this emulation handled all of the kernel processes that would normally be handled by a RSTS kernel running on a PDP-11. (The original BASIC-PLUS language that carried through all versions of RSTS had itself been developed under subcontract by Evans Griffiths & Hart, Inc. for a fixed price of $10,500.)

Other PDP-11 emulators

RSTS and its applications can run under any PDP-11 emulator. For more information, see PDP-11.

RSTS mascot

Spike and Albert

Versions

RSTS was originally called BTSS (Basic Time Sharing System). Before shipment actually began, the name was changed from BTSS to RTSS because a product called BTSS was already being marketed by Honeywell. A simple typing mistake changed the name from RTSS to RSTS. The addition of new memory management support and the ability to install more memory in the PDP-11/40 and PDP-11/45 led to another name change: RSTS-11 now became RSTS/E.

Clones in the USSR

DOS-KP ("ДОС-КП")

Applications

Computer bureaus sometimes deployed User-11 for RSTS/E-based data management.

See also

Asynchronous System Trap
BASIC-Plus-2
Concise Command Language
DATATRIEVE
DECnet
Front panel
Kevin Mitnick
Local Area Transport
Octal Debugging Technique
QIO
Record Management Services
Runtime system
SYSTAT (command)
Time-sharing
Time-sharing system evolution

References

External links

Elvira at The Royal Institute of Technology in Stockholm, Sweden
RSTS Hobbyist Site
SimH web page
Wofford Witch

DEC operating systems
PDP-11
Discontinued operating systems
Time-sharing operating systems
Assembly language software
1970 software
36721708
https://en.wikipedia.org/wiki/Mighty%20Eagle
Mighty Eagle
The Mighty Eagle (also known as the Warm Gas Test Article) is a prototype robotic lander developed by NASA at the Marshall Space Flight Center in Huntsville, Alabama. The vehicle is an autonomous flying testbed used for testing hardware, sensors and algorithms. These sensors and algorithms include such things as onboard cameras that, with specialized guidance, navigation and control software, could aid in the capture of orbiting space debris, in-space docking with a fuel depot, docking of a robotic lander with an orbiting command module, and the rendezvous of multiple unmanned stages for deep space human exploration of the Solar System.

History

Initial software and hardware development was done on a precursor vehicle called the Cold Gas Test Article, which used compressed air as a propellant and had about 10 seconds of flight time. The knowledge gained from this development and testing was used in the design of the Mighty Eagle.

The Mighty Eagle prototype lander was developed by the Marshall Center and the Johns Hopkins University Applied Physics Laboratory. Key partners in this project include the Von Braun Center for Science and Innovation, the Science Applications International Corporation, Dynetics Corporation and Teledyne Brown Engineering. The design of the vehicle began in late 2009 and integration was completed in January 2011. The vehicle was transported to an indoor test facility and bolted to the ground for initial testing, followed by free flight testing. Outdoor testing at another facility ran from August to November 2011. In 2012, a test area at MSFC was developed and the Mighty Eagle tested "Autonomous Rendezvous and Capture" technology. In 2013, enhancements were made, including legs that were lighter by about 6.8 kg (15 lb), a 3D stereo camera allowing the detection and avoidance of three-dimensional objects, and an onboard image processor, in preparation for "hazard avoidance" testing. In July 2013 construction began on a hazard field (a test area for the lander) consisting of 200 tons of lunar simulant at the Marshall Space Flight Center; the hazard field was completed in August 2013. The 3D camera was installed in an enclosure on the vehicle permitting the camera to be pointed at three different angles. After many tests (described below) the Mighty Eagle lander was put into "organization and storage" in December 2013. Information from the NASA Robotic Lunar Lander Development Project (aka Mighty Eagle) was merged into the Lunar CATALYST initiative. For further details see Robotic Lunar Lander Development Project.

The lander is named after the Mighty Eagle character in the Angry Birds game.

Specifications

Three-legged "green" lander:

Fuel: percent pure hydrogen peroxide
Main thrust is provided by an EGC (Earth Gravity Cancelling) thruster giving to of thrust
High purity nitrogen stored at ~3000 psi is regulated down to ~750 psi and is used to push the peroxide out of the thrusters. The vehicle can carry of pressurant.
Twelve Attitude Control System (ACS) thrusters, each giving of thrust, provide attitude control
3D camera, which was given its own battery as part of the 200th modification to the vehicle
Onboard computer responsible for execution of pre-programmed flight profiles
Dimensions: tall and in diameter
Mass when fuelled: ~
Performs vertical take off and vertical landings (VTVL)

For additional information see the Robotic Lunar Lander information pages.
Engines

The NASA Mighty Eagle produces thrust by the violent decomposition of hydrogen peroxide (H2O2) using silver as a catalyst.

Testing

Prior to the flight tests, each subsystem was tested individually, including the propulsion system.

Flight tests in 2011

January 2011: NASA engineers successfully integrated and completed system testing on a new robotic lander at Teledyne Brown Engineering's facility in Huntsville. Part of the testing involved placing the robotic lander prototype on modified skateboards and a customized track system. This low-cost solution controlled movement during the final testing of the prototype's sensors, onboard computer and thrusters.

June 13, 2011: Indoor free (untethered) flight to 7 feet for 27 seconds.

June 16, 2011: Second free flight, including a hover at 6 feet with controlled descent. The inertial measurement unit and radar altimeter were used to control the flight.

August 23, 2011: Performed a translation manoeuvre (i.e., moved itself sideways) to execute a controlled, safe landing 13 feet from the launch pad.

October–November 2011: The Robotic Lander Development Project from NASA's Marshall Space Flight Center in Huntsville performed a series of complex tests on the prototype lander. At the Redstone Test Center's propulsion test facility on the U.S. Army Redstone Arsenal in Huntsville, Alabama, the machine flew to three feet, then 30 feet, and finally a record 100-foot flight test. The flight lasted 30 seconds.

Summer 2012 tests

"These lander tests provide the data necessary to expand our capabilities to go to other destinations."

August 8, 2012: Mighty Eagle flew to a height of and landed safely.

August 28, 2012: Flew to a height of . During the flight the lander automatically identified its destination 10 m away, flew there and landed safely.

September 5, 2012: Flew to a height of , used an onboard camera to identify an on-the-ground target and then autonomously landed itself at the chosen spot, deliberately carrying only of fuel this time.

October 19, 2012: WGTA tethered test of software changes.

October 25, 2012: Flew to a height of , above the tree tops.

2013 tests

A hazard field test area simulating the lunar surface, including boulders, was prepared. Among the test software and hardware modifications was the procurement of a quadcopter whose Wi-Fi camera could film mid-flight.

April 10, 2013: Regression test flight to three feet with the lighter legs. The vehicle's three legs were tethered to the ground.

April 19, 2013: Free flight test with hundreds of student spectators. The flight was filmed with a quadcopter.

August 30, 2013: Tethered test flight with the 3D camera in its new point-able enclosure.

September 16, 2013: HAZ02, a free flight across the Hazard Field. 110.02 kg of propellant pressurised to 760 psi was loaded. The flight was successful; although dust was kicked up, the vehicle was able to take a reconnaissance photograph of the Hazard Field. Modifications had been made to the vehicle because test data from the practice flight on September 4, 2013 showed that there was insufficient power for the camera and that the EGC throttle motor intermittently did not fully open. The flight can be seen in this video taken from a quadcopter.

September 20, 2013: HAZ03, a free flight across the Hazard Field. The field had been watered to reduce dust. Guidance software from Moon Express was also carried, to test whether its outputs matched those of the NASA guidance software.

September 26, 2013: Repeat of HAZ02.
Mighty Eagle flew at a height of 20 m while translating 45 m. Moving pictures of the flight can be seen in this documentary.

October 24, 2013: Test sequence HAZ05 was flown. This simulated a real landing, with the NASA Mighty Eagle ascending to 30 m followed by a descent to 20 m while translating and taking stereo images across the field. Only 100 kg of propellant were loaded. The flight can be seen in this video, including the steam/fog cloud produced by the cold.

November 14, 2013: Tethered test flight with modifications. Only 42.06 kg of propellant were loaded. The normal software was replaced by guidance software from Moon Express. The Nanolaunch team supplied a secondary payload including several low-cost inertial measurement units and Global Positioning System sensors for in-flight characterization and algorithm testing.

November 25, 2013: Test MEG02. The vehicle flew to a height of three meters at a vertical velocity of 0.5 m/s, followed by a 12-second hover, and finished with a descent at −1 m/s under the control of the Moon Express software.

Further details about the tests and hardware can be found in the article "Mighty Eagle: The Development and Flight Testing of an Autonomous Robotic Lander Test Bed" in the Johns Hopkins APL Technical Digest.

Video

Entire NASA MSFC Mighty Eagle YouTube playlist (official).

See also

VTVL (Vertical Take off Vertical Landing)
Project Morpheus
Lunar CATALYST

References

External links

NASA Home page for Robotic Lander
NASA Mighty Eagle Twitter account

NASA space launch vehicles
Missions to the Moon
Experimental rockets of the United States
VTVL rockets
60492
https://en.wikipedia.org/wiki/Abstract%20machine
Abstract machine
An abstract machine is a theoretical computer used for defining a model of computation. Abstraction of computing processes is used in both the computer science and computer engineering disciplines and usually assumes a discrete time paradigm.

In computer science

A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. The best-known example is the Turing machine. More complex definitions create abstract machines with full instruction sets, registers and models of memory. One popular model more similar to real modern machines is the RAM model, which allows random access to indexed memory locations. As the performance difference between different levels of cache memory grows, cache-sensitive models such as the external-memory model and the cache-oblivious model are growing in importance.

An abstract machine can also refer to a microprocessor design which has yet to be (or is not intended to be) implemented as hardware. An abstract machine implemented as a software simulation, or for which an interpreter exists, is called a virtual machine.

In the theory of computation, abstract machines are often used in thought experiments regarding computability or to analyze the complexity of algorithms. This application of abstract machines is related to the subject of computational complexity theory. Abstract machines can also be used to model abstract data types, which can be specified in terms of their operational semantics on an abstract machine. For example, a stack can be specified in terms of operations on an abstract machine with an array of memory. Through the use of abstract machines, it is possible to compute the amount of resources (time, memory, etc.) necessary to perform a particular operation without having to construct a physical system.

See also

Abstraction (computer science)
Abstract interpretation
Bulk synchronous parallel
Discrete time
Finite-state machine
Flynn's taxonomy
Formal models of computation
Krivine machine
Model of computation
Parallel random-access machine, the de facto standard model
SECD machine
State space
Turing machine

References

Further reading

Peter van Emde Boas, Machine Models and Simulations, pp. 3–66, appearing in: Jan van Leeuwen, ed., Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity, The MIT Press/Elsevier, 1990. (volume A). QA 76.H279 1990.
Stephan Diehl, Pieter Hartel and Peter Sestoft, Abstract Machines for Programming Language Implementation, Future Generation Computer Systems, Vol. 16(7), Elsevier, 2000.

Automata (computation)
Models of computation
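As an illustration of the point above, that an abstract data type such as a stack can be specified by its operations on an abstract machine with an array of memory, here is a small Python sketch. The machine and its three-instruction set are invented for illustration only.

# A toy abstract machine: a fixed memory array, a stack pointer, and a
# tiny instruction set. The stack's behaviour is fully specified by the
# effect of each operation on the machine's state.
class StackMachine:
    def __init__(self, size: int = 16):
        self.memory = [0] * size   # fixed array backing the stack
        self.sp = 0                # stack pointer (next free slot)

    def push(self, value: int) -> None:
        if self.sp >= len(self.memory):
            raise OverflowError("stack overflow")
        self.memory[self.sp] = value
        self.sp += 1

    def pop(self) -> int:
        if self.sp == 0:
            raise IndexError("stack underflow")
        self.sp -= 1
        return self.memory[self.sp]

    def run(self, program):
        """Execute (opcode, operand) tuples: PUSH n, ADD, MUL."""
        for op, *arg in program:
            if op == "PUSH":
                self.push(arg[0])
            elif op == "ADD":
                self.push(self.pop() + self.pop())
            elif op == "MUL":
                self.push(self.pop() * self.pop())
        return self.pop()

# (2 + 3) * 4, evaluated by the abstract machine: prints 20.
print(StackMachine().run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]))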
6909493
https://en.wikipedia.org/wiki/REAPER
REAPER
REAPER (an acronym for Rapid Environment for Audio Production, Engineering, and Recording) is a digital audio workstation and MIDI sequencer software created by Cockos. The current version is available for Microsoft Windows (XP and newer) and macOS (10.5 and newer), as well as for Linux. REAPER acts as a host to most industry-standard plug-in formats (such as VST and AU) and can import all commonly used media formats, including video. REAPER and its included plug-ins are available in 32-bit and 64-bit format.

Licensing

REAPER provides a free, fully functional 60-day evaluation period. For further use two licenses are available: a commercial and a discounted one. They are identical in features and differ only in price and target audience, with the discount license being offered for private use, schools and small businesses. Any paid license includes the current version with all of its future updates and a free upgrade to the next major version and all of its subsequent updates, when they are released. Any license is valid for all configurations (x64 and x86) and allows for multiple installations, as long as it is run on one computer at a time.

Customization

Extensive customization opportunities are provided through the use of ReaScript (edit, run and debug scripts within REAPER) and user-created themes and functionality extensions. ReaScript can be used to create anything from advanced macros to full-featured REAPER extensions. ReaScripts can be written in EEL2 (JSFX script), Lua and Python. SWS / S&M is a popular, open-source extension to REAPER, providing workflow enhancements and advanced tempo/groove manipulation functionality.

REAPER's interface can be customized with user-built themes. Each previous version's default theme is included with REAPER, and theming allows for complete overhauls of the GUI. REAPER has been translated into multiple languages, and downloadable language packs are available. Users as well as developers can create language packs for REAPER.

Included software and plug-ins

REAPER comes with a variety of commonly used audio production effects. They include tools such as ReaEQ, ReaVerb, ReaGate, ReaDelay, ReaPitch and ReaComp. The included Rea-plug-ins are also available as a separate download for users of other DAWs, as the ReaPlugs VST FX Suite. Also included are hundreds of JSFX plug-ins ranging from standard effects to specific applications for MIDI and audio. JSFX scripts are text files, which when loaded into REAPER (exactly like a VST or other plug-in) become full-featured plug-ins ranging from simple audio effects (e.g. delay, distortion, compression) to instruments (synths, samplers) and other special-purpose tools (drum triggering, surround panning). All JSFX plug-ins are editable in any text editor and thus are fully user-customizable. REAPER includes no third-party software, but is fully compatible with all versions of the VST standard (currently VST3) and thus works with the vast majority of both free and commercial plug-ins available. REAPER x64 can also run 32-bit plug-ins alongside 64-bit processes. As of version 5.97, REAPER supports ARA 2 plugins.

Video editing

While not a dedicated video editor, REAPER can be used to cut and trim video files and to edit or replace the audio within. Common video effects such as fades, wipes and cross-fades are available. REAPER aligns video files in a project as it would an audio track, and the video part of a file can be viewed in a separate video window while working on the project.
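As a small taste of the ReaScript customization described above, here is a minimal Python ReaScript sketch. It assumes it is run from inside REAPER's action list, where the RPR_-prefixed API bindings are predefined; the calls correspond to functions in the published ReaScript API, but the snippet is an illustrative sketch rather than canonical usage.

# Count the tracks in the current project (0 = current project),
# report the count to the ReaScript console, then append one new
# track with default settings and refresh the arrange view.
track_count = RPR_CountTracks(0)
RPR_ShowConsoleMsg("Tracks in project: %d\n" % track_count)
RPR_InsertTrackAtIndex(track_count, True)
RPR_UpdateArrange()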
Control surface support

REAPER has built-in support for:

BCF2000 – Behringer's motorized-fader control surface, USB/MIDI
TranzPort – Frontier Design Group's wireless transport control
AlphaTrack – Frontier Design Group's AlphaTrack control surface
FaderPort – Presonus' FaderPort control surface
Baby HUI – Mackie's Baby HUI control surface
MCU – Mackie's "Mackie Control Universal" control surface

Version history

First public release – December 23, 2005, as freeware
1.0 – released on August 23, 2006, as shareware
2.0 – October 10, 2007
2.43 – July 30, 2008: Beta Mac OS X and Windows x64 support
2.56 – March 2, 2009: Finalized Mac OS X and Windows x64 ports
3.0 – May 22, 2009
4.0 – August 3, 2011: Work on Linux support began
5.0 – August 12, 2015: Beta-quality Linux support
5.981 – July 22, 2019: Cumulative improvements and enhancements, notably the notation-mode MIDI editor (new in 5.20), VST3 support, ReaScript, video support, control grouping, FX parameter automation, envelope modes, new API functions, new actions, and much more
6.0 – December 3, 2019

See also

Comparison of digital audio editors
List of digital audio workstation software
List of music software

References

External links

REAPER home page
REAPER en español (unofficial website, tutorials & tips)

Linux
Audio editing software for Linux
Digital audio editors for Linux
Digital audio workstation software
Linux software
MacOS audio editors
41172
https://en.wikipedia.org/wiki/Frame%20%28networking%29
Frame (networking)
A frame is a digital data transmission unit in computer networking and telecommunication. In packet-switched systems, a frame is a simple container for a single network packet. In other telecommunications systems, a frame is a repeating structure supporting time-division multiplexing.

A frame typically includes frame synchronization features consisting of a sequence of bits or symbols that indicate to the receiver the beginning and end of the payload data within the stream of symbols or bits it receives. If a receiver is connected to the system during frame transmission, it ignores the data until it detects a new frame synchronization sequence.

Packet switching

In the OSI model of computer networking, a frame is the protocol data unit at the data link layer. Frames are the result of the final layer of encapsulation before the data is transmitted over the physical layer. A frame is "the unit of transmission in a link layer protocol, and consists of a link layer header followed by a packet." Each frame is separated from the next by an interframe gap. A frame is a series of bits generally composed of frame synchronization bits, the packet payload, and a frame check sequence. Examples are Ethernet frames, Point-to-Point Protocol (PPP) frames, Fibre Channel frames, and V.42 modem frames. Often, frames of several different sizes are nested inside each other. For example, when using Point-to-Point Protocol (PPP) over asynchronous serial communication, the eight bits of each individual byte are framed by start and stop bits, the payload data bytes in a network packet are framed by the header and footer, and several packets can be framed with frame boundary octets.

Time-division multiplex

In telecommunications, specifically in time-division multiplex (TDM) and time-division multiple access (TDMA) variants, a frame is a cyclically repeated data block that consists of a fixed number of time slots, one for each logical TDM channel or TDMA transmitter. In this context, a frame is typically an entity at the physical layer. TDM application examples are SONET/SDH and the ISDN circuit-switched B-channel, while a TDMA example is Circuit Switched Data, used in early cellular voice services. The frame is also an entity for time-division duplex, where the mobile terminal may transmit during some time slots and receive during others.

See also

Datagram
Jumbo frame
Multiplex techniques
Overhead bit

References

Computer networks
Link protocols
Packets (information technology)
Units of information
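To make the framing idea above concrete, here is a toy Python sketch of a frame format: a payload bracketed by a synchronization pattern and a length field, followed by a CRC-32 frame check sequence. The layout is invented for illustration and is deliberately simpler than real Ethernet framing, which differs in preamble length, field order and CRC details.

import struct
import zlib

SYNC = b"\x55\x55\x55\xD5"  # alternating bits, then a start delimiter

def build_frame(payload: bytes) -> bytes:
    # Frame = sync (4 bytes) + payload length (2 bytes) + payload + FCS (4 bytes).
    fcs = zlib.crc32(payload) & 0xFFFFFFFF  # CRC-32 as the frame check sequence
    return SYNC + struct.pack(">H", len(payload)) + payload + struct.pack(">I", fcs)

def check_frame(frame: bytes) -> bytes:
    """Strip sync and length, verify the FCS, and return the payload."""
    if not frame.startswith(SYNC):
        raise ValueError("missing frame synchronization sequence")
    (length,) = struct.unpack(">H", frame[4:6])
    payload = frame[6:6 + length]
    (fcs,) = struct.unpack(">I", frame[6 + length:10 + length])
    if zlib.crc32(payload) & 0xFFFFFFFF != fcs:
        raise ValueError("frame check sequence mismatch")
    return payload

frame = build_frame(b"hello, network")
assert check_frame(frame) == b"hello, network"
print(frame.hex())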
34558
https://en.wikipedia.org/wiki/20th%20century
20th century
The 20th (twentieth) century began on January 1, 1901 (MCMI), and ended on December 31, 2000 (MM). The 20th century was dominated by significant events that defined the modern era: the Spanish flu pandemic, World War I and World War II, nuclear weapons, nuclear power and space exploration, nationalism and decolonization, technological advances, and the Cold War and post-Cold War conflicts. These reshaped the political and social structure of the globe.

The 20th century saw a massive transformation of humanity's relationship with the natural world. Global population, sea level rise, and ecological collapses increased, while competition for land and dwindling resources accelerated deforestation, water depletion, and the mass extinction of many of the world's species and the decline in the population of others. Man-made global warming increased the risk of extreme weather conditions. Additional themes include intergovernmental organizations and cultural homogenization through developments in emerging transportation and communications technology; poverty reduction and world population growth; awareness of environmental degradation and ecological extinction; and the birth of the Digital Revolution. Automobiles, airplanes and the use of home appliances became common, as did video and audio recording. Great advances in power generation, communication, and medical technology allowed for near-instantaneous worldwide computer communication and genetic modification of life.

The repercussions of the World Wars, Cold War, and globalization crafted a world where people are more united than in any previous time in human history, as exemplified by the establishment of international law, international aid, and the United Nations. The Marshall Plan, which spent $13 billion to rebuild the economies of post-war nations, launched "Pax Americana". Throughout the latter half of the 20th century, the rivalry between the United States and the Soviet Union created enormous tensions around the world which manifested in various armed proxy regional conflicts and the omnipresent danger of nuclear proliferation. The dissolution of the Soviet Union in 1991 after the collapse of its European alliance was heralded by the West as the end of communism, though by the century's end roughly one in six people on Earth lived under communist rule, mostly in China, which was rapidly rising as an economic and geopolitical power.

It took over two hundred thousand years of modern human history and 6 million years of human evolution for the world's population to reach 1 billion in 1804; world population reached an estimated 2 billion in 1927; by late 1999, the global population reached 6 billion, with over half living in East, South and Southeast Asia. Global literacy averaged 80%. Penicillin and other medical breakthroughs, combined with the World Health Organization's global vaccination campaigns to eradicate smallpox and other diseases responsible for more human deaths than all wars and natural disasters combined, yielded unprecedented results; smallpox now only existed in labs. Machines came to be used in all areas of production, feeding increasingly intricate supply chains that allowed mankind for the first time to be constrained not by how much it could produce, but by people's willingness to consume. Trade improvements greatly expanded upon the limited set of food-producing techniques used since the Neolithic period, multiplying the diversity of foods available and boosting the quality of human nutrition.
Until the early 19th century, life expectancy from birth was about thirty years in most populations; during the 20th century the global average lifespan exceeded 40 years for the first time in history, with over half the world's population achieving 70 or more years (three decades longer than a century earlier).

Overview
The 20th (twentieth) century began on January 1, 1901, and ended on December 31, 2000. The term is often used erroneously to refer to "the 1900s", the century between January 1, 1900 and December 31, 1999. It was the tenth and final century of the 2nd millennium. Unlike most century years, the year 2000 was a leap year, and the second century leap year in the Gregorian calendar after 1600. The century saw the first global-scale total wars between world powers across continents and oceans in World War I and World War II. Nationalism became a major political issue in the world in the 20th century, acknowledged in international law along with the right of nations to self-determination, official decolonization in the mid-century, and related regional conflicts. The century saw a major shift in the way that many people lived, with changes in politics, ideology, economics, society, culture, science, technology, and medicine. The 20th century may have seen more technological and scientific progress than all the other centuries combined since the dawn of civilization. Terms like nationalism, globalism, environmentalism, ideology, world war, genocide, and nuclear war entered common usage. Scientific discoveries, such as the theory of relativity and quantum physics, profoundly changed the foundational models of physical science, forcing scientists to realize that the universe was more complex than previously believed, and dashing the hopes (or fears) at the end of the 19th century that the last few details of scientific knowledge were about to be filled in. It was a century that started with horses, simple automobiles, and freighters but ended with high-speed rail, cruise ships, global commercial air travel and the Space Shuttle. Horses and other pack animals, every society's basic form of personal transportation for thousands of years, were replaced by automobiles and buses within a few decades. These developments were made possible by the exploitation of fossil fuel resources, which offered energy in an easily portable form, but also caused concern about pollution and long-term impact on the environment. Humans explored space for the first time, taking their first footsteps on the Moon. Mass media, telecommunications, and information technology (especially computers, paperback books, public education, and the Internet) made the world's knowledge more widely available. Advancements in medical technology also improved the health of many people: the global life expectancy increased from 35 years to 65 years. Rapid technological advancements, however, also allowed warfare to reach unprecedented levels of destruction. World War II alone killed over 60 million people, while nuclear weapons gave humankind the means to annihilate itself in a short time. However, these same wars resulted in the destruction of the imperial system. For the first time in human history, empires and their wars of expansion and colonization ceased to be a factor in international affairs, resulting in a far more globalized and cooperative world. The last time major powers clashed openly was in 1945, and since then, violence has seen an unprecedented decline.
The world also became more culturally homogenized than ever with developments in transportation and communications technology, popular music and other influences of Western culture, international corporations, and what was arguably a truly global economy by the end of the 20th century.

Summary
Technological advancements during World War I changed the way war was fought, as new inventions such as tanks, chemical weapons, and aircraft modified tactics and strategy. After more than four years of trench warfare in Western Europe, and up to 22 million dead, the powers that had formed the Triple Entente (France, Britain, and Russia, later replaced by the United States and joined by Italy and Romania) emerged victorious over the Central Powers (Germany, Austria-Hungary, the Ottoman Empire and Bulgaria). In addition to annexing many of the colonial possessions of the vanquished states, the Triple Entente exacted punitive restitution payments from them, plunging Germany in particular into economic depression. The Austro-Hungarian and Ottoman empires were dismantled at the war's conclusion. The Russian Revolution resulted in the overthrow of the Tsarist regime of Nicholas II and the onset of the Russian Civil War. The victorious Bolsheviks then established the Soviet Union, the world's first communist state. At the beginning of the period, the British Empire was the world's most powerful nation, having acted as the world's policeman for the past century. Fascism, a movement which grew out of post-war angst and which accelerated during the Great Depression of the 1930s, gained momentum in Italy, Germany, and Spain in the 1920s and 1930s, culminating in World War II, sparked by Nazi Germany's aggressive expansion at the expense of its neighbors. Meanwhile, Japan had rapidly transformed itself into a technologically advanced industrial power and, along with Germany and Italy, formed the Axis powers. Japan's military expansionism in East Asia and the Pacific Ocean brought it into conflict with the United States, culminating in a surprise attack which drew the US into World War II. After some years of dramatic military success, Germany was defeated in 1945, having been invaded by the Soviet Union and Poland from the East and by the United States, the United Kingdom, Canada, and France from the West. After the victory of the Allies in Europe, the war in Asia ended with the Soviet invasion of Manchuria and the dropping of two atomic bombs on Japan by the US, the first nation to develop nuclear weapons and the only one to use them in warfare. In total, World War II left some 60 million people dead. After the war, Germany was occupied and divided between the Western powers and the Soviet Union. East Germany and the rest of Eastern Europe became Soviet puppet states under communist rule. Western Europe was rebuilt with the aid of the American Marshall Plan, resulting in a major post-war economic boom, and many of the affected nations became close allies of the United States. With the Axis defeated and Britain and France rebuilding, the United States and the Soviet Union were left standing as the world's only superpowers. Allies during the war, they soon became hostile to one another as their competing ideologies of communism and democratic capitalism proliferated in Europe, which became divided by the Iron Curtain and the Berlin Wall. They formed competing military alliances (NATO and the Warsaw Pact) which engaged in a decades-long standoff known as the Cold War.
The period was marked by a new arms race as the USSR became the second nation to develop nuclear weapons, which were produced by both sides in sufficient numbers to end most human life on the planet had a large-scale nuclear exchange ever occurred. Mutually assured destruction is credited by many historians as having prevented such an exchange, each side being unable to strike first at the other without ensuring an equally devastating retaliatory strike. Unable to engage one another directly, the conflict played out in a series of proxy wars around the world (particularly in China, Korea, Cuba, Vietnam, and Afghanistan) as the USSR sought to export communism while the US attempted to contain it. The technological competition between the two sides led to substantial investment in research and development which produced innovations that reached far beyond the battlefield, such as space exploration and the Internet. In the latter half of the century, most of the European-colonized world in Africa and Asia gained independence in a process of decolonization. Meanwhile, globalization opened the door for several nations to exert a strong influence over many world affairs. The US's global military presence spread American culture around the world with the advent of the Hollywood motion picture industry, Broadway, rock and roll, pop music, fast food, big-box stores, and the hip-hop lifestyle. Britain also continued to influence world culture, including the "British Invasion" into American music, leading many rock bands from other countries (such as Swedish ABBA) to sing in English. After the Soviet Union collapsed under internal pressure in 1991, most of the communist governments it had supported around the world were dismantled (with the notable exceptions of China, North Korea, Cuba, Vietnam, and Laos), followed by awkward transitions into market economies. Following World War II, the United Nations, successor to the League of Nations, was established as an international forum in which the world's nations could discuss issues diplomatically. It enacted resolutions on such topics as the conduct of warfare, environmental protection, international sovereignty, and human rights. Peacekeeping forces consisting of troops provided by various countries, with various United Nations and other aid agencies, helped to relieve famine, disease, and poverty, and to suppress some local armed conflicts. Europe slowly united, economically and, in some ways, politically, to form the European Union, which consisted of 15 European countries by the end of the 20th century.

Nature of innovation and change
Due to continuing industrialization and expanding trade, many significant changes of the century were, directly or indirectly, economic and technological in nature. Inventions such as the light bulb, the automobile, and the telephone in the late 19th century, followed by supertankers, airliners, motorways, radio, television, antibiotics, nuclear power, frozen food, computers and microcomputers, the Internet, and mobile telephones affected people's quality of life across the developed world. Scientific research, engineering professionalization and technological development, much of it motivated by the Cold War arms race, drove changes in everyday life.

Social change
At the beginning of the century, strong discrimination based on race and sex was significant in most societies.
Although the Atlantic slave trade had ended in the 19th century, movements for equality for non-white people in the white-dominated societies of North America, Europe, and South Africa continued. By the end of the 20th century, in many parts of the world, women had the same legal rights as men, and racism had come to be seen as unacceptable, often backed up by legislation. Attitudes towards homosexuality also began to change in the later part of the century. When the Republic of India was constituted, the disadvantaged classes of the caste system in India became entitled to affirmative action benefits in education, employment and government.

Earth at the end of the 20th century
Communications and information technology, transportation technology, and medical advances had radically altered daily lives. Europe appeared to be at a sustainable peace for the first time in recorded history. The people of the Indian subcontinent, a sixth of the world population at the end of the 20th century, had attained an indigenous independence for the first time in centuries. China, an ancient nation comprising a fifth of the world population, was finally open to the world, creating a new state after the near-complete destruction of the old cultural order. With the end of colonialism and the Cold War, nearly a billion people in Africa were left in new nation states after centuries of foreign domination. The world was undergoing its second major period of globalization; the first, which had started in the 18th century, had been terminated by World War I. Since the US was in a dominant position, a major part of the process was Americanization. The influence of China and India was also rising, as the world's largest populations were rapidly integrating with the world economy. Terrorism, dictatorship, and the spread of nuclear weapons were pressing global issues. The world was still blighted by small-scale wars and other violent conflicts, fueled by competition over resources and by ethnic conflicts. Disease threatened to destabilize many regions of the world. New viruses such as the West Nile virus continued to spread. Malaria and other diseases affected large populations. Millions were infected with HIV, the virus which causes AIDS, which was becoming an epidemic in southern Africa. Based on research by climate scientists, the majority of the scientific community considered that in the long term environmental problems might threaten the planet's habitability. One concern was global warming caused by the human emission of greenhouse gases, particularly carbon dioxide produced by the burning of fossil fuels. This prompted many nations to negotiate and sign the Kyoto Protocol, which set mandatory limits on carbon dioxide emissions. World population increased from about 1.6 billion people in 1901 to 6.1 billion at the century's end.

Wars and politics
The number of people killed during the century by government actions was in the hundreds of millions. This includes deaths caused by wars, genocide, politicide and mass murders. The deaths from acts of war during the two world wars alone have been estimated at between 50 and 80 million. Political scientist Rudolph Rummel estimated 262,000,000 deaths caused by democide, a figure which excludes those killed in battle, civilians unintentionally killed in war, and killings by rioting mobs.
According to Charles Tilly, "Altogether, about 100 million people died as a direct result of action by organized military units backed by one government or another over the course of the century. Most likely a comparable number of civilians died of war-induced disease and other indirect effects." It is estimated that approximately 70 million Europeans died through war, violence and famine between 1914 and 1945. The Armenian, Syriac and Greek genocides were the systematic destruction, mass murder and expulsion of the Armenians, Assyrians and Greeks in the Ottoman Empire during World War I, spearheaded by the ruling Committee of Union and Progress (CUP). Rising nationalism and increasing national awareness were among the many causes of World War I (1914–1918), the first of two wars to involve many major world powers including Germany, France, Italy, Japan, Russia/USSR, the British Empire and the United States. World War I led to the creation of many new countries, especially in Eastern Europe. At the time, it was said by many to be the "war to end all wars". After gaining political rights in the United States and much of Europe in the first part of the century, and with the advent of new birth control techniques, women became more independent throughout the century. Industrial warfare greatly increased in its scale and complexity during the first half of the 20th century. Notable developments included chemical warfare, the introduction of military aviation and the widespread use of submarines. The introduction of nuclear warfare in the mid-20th century marked the definitive transition to modern warfare. Civil wars occurred in many nations. A violent civil war broke out in Spain in 1936 when General Francisco Franco rebelled against the Second Spanish Republic. Many consider this war a testing ground for World War II, as fascist armies bombed Spanish territories. The Great Depression in the 1930s led to the rise of Fascism and Nazism in Europe. World War II (1939–1945) involved Eastern Asia and the Pacific, in the form of Japanese aggression against China and the United States. Civilians also suffered greatly in World War II, due to the aerial bombing of cities on both sides, and the German genocide of the Jews and others, known as the Holocaust. During World War I, in the Russian Revolution of 1917, 300 years of Romanov rule ended and the Bolsheviks, under the leadership of Vladimir Lenin, established the world's first Communist state. After the Soviet Union's involvement in World War II, communism became a major force in global politics, notably in Eastern Europe, China, Indochina and Cuba, where communist parties gained near-absolute power. The Cold War (1947–1989) caused an arms race and increasing competition between the two major players in the world: the Soviet Union and the United States. This competition included the development and improvement of nuclear weapons and the Space Race, and led to proxy wars with the Western bloc, including wars in Korea (1950–1953) and Vietnam (1957–1975). The Soviet authorities caused the deaths of millions of their own citizens in order to eliminate domestic opposition. More than 18 million people passed through the Gulag, with a further 6 million being exiled to remote areas of the Soviet Union. The civil rights movement in the United States and the movement against apartheid in South Africa challenged racial segregation in those countries.
The two world wars led to efforts to increase international cooperation, notably through the founding of the League of Nations after World War I, and its successor, the United Nations, after World War II. Nationalist movements in the subcontinent led to the independence and partition of Jawaharlal Nehru-led India and Muhammad Ali Jinnah-led Pakistan. Mahatma Gandhi's nonviolence and the Indian independence movement against the British Empire influenced many political movements around the world, including the civil rights movement in the U.S., and freedom movements in South Africa and Burma. The creation of Israel in 1948, a Jewish state in the Middle East at the end of the British Mandate for Palestine, fueled many regional conflicts. These were also influenced by the vast oil fields in many of the other countries of the predominantly Arab region. The end of colonialism led to the independence of many African and Asian countries. During the Cold War, many of these aligned with the United States, the USSR, or China for defense. After a long period of civil wars and conflicts with western powers, China's last imperial dynasty ended in 1912. The resulting republic was replaced, after another civil war, by a communist People's Republic in 1949. At the end of the 20th century, though still ruled by a communist party, China's economic system had largely transformed to capitalism. The Great Chinese Famine was a direct cause of the death of tens of millions of Chinese peasants between 1959 and 1962. It is thought to be the largest famine in human history. The Vietnam War caused two million deaths, changed the dynamics between the Eastern and Western Blocs, and altered North-South relations. The Soviet War in Afghanistan caused one million deaths and contributed to the downfall of the Soviet Union. The revolutions of 1989 released Eastern and Central Europe from Soviet supremacy. Soon thereafter, the Soviet Union, Czechoslovakia, and Yugoslavia dissolved into successor states, the last violently over several years, many rife with ethnic nationalism. Meanwhile, East Germany and West Germany were reunified in 1990. The Tiananmen Square protests of 1989, culminating in the deaths of hundreds of civilian protesters, were a series of demonstrations in and near Tiananmen Square in Beijing, China. Led mainly by students and intellectuals, the protests occurred in a year that saw the collapse of a number of communist governments around the world. European integration began in earnest in the 1950s and eventually led to the European Union, a political and economic union that comprised 15 countries at the end of the 20th century.

Culture and entertainment
As the century began, Paris was the artistic capital of the world, where both French and foreign writers, composers and visual artists gathered. By the end of the century New York City had become the artistic capital of the world. Theater, films, music and the media had a major influence on fashion and trends in all aspects of life. As many films and much music originated in the United States, American culture spread rapidly over the world. 1953 saw the coronation of Queen Elizabeth II, an iconic figure of the century. Visual culture became more dominant not only in films but in comics and television as well. During the century a new, more skilled understanding of narrative imagery developed. Computer games and internet surfing became new and popular forms of entertainment during the last 25 years of the century.
In literature, science fiction, fantasy (with well-developed fictional worlds, rich in detail), and alternative history fiction gained unprecedented popularity. Detective fiction flourished in the interwar period. In the United States in 1961, Grove Press published Henry Miller's novel Tropic of Cancer, redefining the limits of pornography and censorship in American publishing.

Music
The invention of music recording technologies such as the phonograph record, and dissemination technologies such as radio broadcasting, massively expanded the audience for music. Prior to the 20th century, music was generally only experienced in live performances. Many new genres of music were established during the 20th century. Igor Stravinsky revolutionized classical composition. In the 1920s, Arnold Schoenberg developed the twelve-tone technique, which became widely influential on 20th-century composers. In classical music, composition branched out into many completely new domains, including dodecaphony, aleatoric (chance) music, and minimalism. Tango was created in Argentina and became extremely popular in the rest of the Americas and Europe. Blues and jazz music became popular during the 1910s, 1920s and 1930s in the United States. Country music developed in the 1920s and 1930s in the United States. Blues and country went on to influence rock and roll in the 1950s, which, along with folk music, increased in popularity with the British Invasion of the mid-to-late 1960s. Rock soon branched into many different genres, including folk rock, heavy metal, punk rock, and alternative rock, and became the dominant genre of popular music. This was challenged with the rise of hip hop in the 1980s and 1990s. Other genres such as house, techno, reggae, and soul all developed during the latter half of the century and went through various periods of popularity. Synthesizers began to be employed widely in music and crossed over into the mainstream with new wave music in the 1980s. Electronic instruments were widely deployed in all manner of popular music, leading to the development of genres such as house, synth-pop, electronic dance music, and industrial.

Film, television and theatre
Film as an artistic medium was created in the 20th century. The first modern movie theatre was established in Pittsburgh in 1905. Hollywood developed as the center of American film production. While the first films were in black and white, Technicolor was developed in the 1920s to allow for color films. Sound films were developed, with the first full-length sound feature, The Jazz Singer, released in 1927. The Academy Awards were established in 1929. Animation was also developed in the 1920s, with the first full-length cel-animated feature film, Snow White and the Seven Dwarfs, released in 1937. Computer-generated imagery was developed in the 1980s, with the first full-length CGI-animated film, Toy Story, released in 1995. Julie Andrews, Harry Belafonte, Humphrey Bogart, Marlon Brando, James Cagney, Charlie Chaplin, Sean Connery, Tom Cruise, James Dean, Robert De Niro, Harrison Ford, Clark Gable, Cary Grant, Audrey Hepburn, Katharine Hepburn, Bruce Lee, Marilyn Monroe, Paul Newman, Jack Nicholson, Al Pacino, Sidney Poitier, Meryl Streep, Elizabeth Taylor, James Stewart, and John Wayne are among the most popular Hollywood stars of the 20th century. Oscar Micheaux, Sergei Eisenstein, D. W. Griffith, Cecil B.
DeMille, Frank Capra, Howard Hawks, John Ford, Orson Welles, Martin Scorsese, John Huston, Alfred Hitchcock, Akira Kurosawa, Spike Lee, Ingmar Bergman, Federico Fellini, Walt Disney, Stanley Kubrick, Steven Spielberg, Ridley Scott, Woody Allen, Quentin Tarantino, James Cameron, William Friedkin, Ezz El-Dine Zulficar and George Lucas are among the most important and popular filmmakers of the 20th century. In theater, particularly on Broadway in New York City, playwrights such as Eugene O'Neill, Samuel Beckett, Edward Albee, Arthur Miller and Tennessee Williams introduced innovative language and ideas to the idiom. In musical theater, figures such as Rodgers and Hammerstein, Lerner and Loewe, and Irving Berlin had an enormous impact on both film and the culture in general. Modern dance was born in America, both as a 'rebellion' against centuries-old European ballet and as an outgrowth of oppression in America. Dancers and choreographers Alvin Ailey, Isadora Duncan, Vaslav Nijinsky, Ruth St. Denis, Mahmoud Reda, Martha Graham, José Limón, Doris Humphrey, Merce Cunningham, and Paul Taylor redefined movement, struggling to bring it back to its 'natural' roots, and, along with jazz, created a solely American art form. Alvin Ailey is credited with popularizing modern dance and revolutionizing African-American participation in 20th-century concert dance. His company gained the nickname "Cultural Ambassador to the World" because of its extensive international touring. Ailey's choreographic masterpiece Revelations is believed to be the best known and most often seen modern dance performance.

Video games
Video games, owing to the great technological steps forward in computing since the second post-war period, were a new form of entertainment that emerged in the 20th century alongside films. Although already conceptualized in the 1940s and 1950s, video games emerged as an industry only during the 1970s and then exploded into a social and cultural phenomenon, with the golden age of arcade video games and notable releases such as Taito's Space Invaders, Atari's Asteroids, and Namco's Pac-Man, the worldwide success of Nintendo's Super Mario Bros., and the release in the 1990s of Sony's PlayStation, the first console to sell more than 100 million units. Video game design became a discipline and a profession, and designers such as Shigeru Miyamoto, Hideo Kojima, Sid Meier and Will Wright stood out for their work.

Art and architecture
The art world experienced the development of new styles and explorations such as fauvism, expressionism, Dadaism, cubism, De Stijl, surrealism, abstract expressionism, color field, pop art, minimal art, lyrical abstraction, and conceptual art. The modern art movement revolutionized art and culture and set the stage for both Modernism and its counterpart, postmodern art, as well as other contemporary art practices. Art Nouveau began as advanced architecture and design but fell out of fashion after World War I; the style was dynamic and inventive but unsuited to the somber mood that followed the Great War. In Europe, modern architecture departed from the decorated styles of the Victorian era. Streamlined forms inspired by machines became commonplace, enabled by developments in building materials and technologies. Before World War II, many European architects moved to the United States, where modern architecture continued to develop.
The automobile increased the mobility of people in the Western countries in the early-to-mid-century, and in many other places by the end of the 20th century. City design throughout most of the West became focused on transport via car.

Sport
The popularity of sport increased considerably, both as an activity for all and as entertainment, particularly on television. The modern Olympic Games, first held in 1896, grew to include tens of thousands of athletes in dozens of sports. The FIFA World Cup was first held in 1930, and was held every four years after World War II.

Science

Mathematics
Multiple new fields of mathematics were developed in the 20th century. In the first part of the 20th century, measure theory, functional analysis, and topology were established, and significant developments were made in fields such as abstract algebra and probability. The development of set theory and formal logic led to Gödel's incompleteness theorems. Later in the 20th century, the development of computers led to the establishment of a theory of computation. Other computationally intensive results include the study of fractals and a proof of the four color theorem in 1976.

Physics
New areas of physics, like special relativity, general relativity, and quantum mechanics, were developed during the first half of the century. In the process, the internal structure of atoms came to be clearly understood, followed by the discovery of elementary particles. It was found that all the known forces can be traced to only four fundamental interactions. It was discovered further that two forces, electromagnetism and the weak interaction, can be merged in the electroweak interaction, leaving only three different fundamental interactions. The discovery of nuclear reactions, in particular nuclear fusion, finally revealed the source of solar energy. Radiocarbon dating was invented and became a powerful technique for determining the age of prehistoric animals and plants as well as historical objects.

Astronomy
A much better understanding of the evolution of the universe was achieved, its age (about 13.8 billion years) was determined, and the Big Bang theory of its origin was proposed and generally accepted. The age of the Solar System, including Earth, was determined, and it turned out to be much older than believed earlier: more than 4 billion years, rather than the 20 million years suggested by Lord Kelvin in 1862. The planets of the Solar System and their moons were closely observed via numerous space probes. Pluto was discovered in 1930 on the edge of the solar system, although in the early 21st century it was reclassified as a dwarf planet instead of a planet proper, leaving eight planets. No trace of life was discovered on any of the other planets in the Solar System (or elsewhere in the universe), although it remained undetermined whether some forms of primitive life might exist, or might have existed, somewhere. Extrasolar planets were observed for the first time.

Biology
Genetics became universally accepted and was significantly developed. The structure of DNA was determined in 1953 by James Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins; this was followed by the development of techniques for reading DNA sequences, culminating in the start of the Human Genome Project (not finished in the 20th century) and the cloning of the first mammal in 1996. The role of sexual reproduction in evolution was understood, and bacterial conjugation was discovered.
Various sciences converged in the formulation of the modern evolutionary synthesis (produced between 1936 and 1947), which provided a widely accepted account of evolution.

Medicine
Placebo-controlled, randomized, blinded clinical trials became a powerful tool for testing new medicines. Antibiotics drastically reduced mortality from bacterial diseases and their prevalence. A vaccine was developed for polio, ending a worldwide epidemic. Effective vaccines were also developed for a number of other serious infectious diseases, including influenza, diphtheria, pertussis (whooping cough), tetanus, measles, mumps, rubella (German measles), chickenpox, hepatitis A, and hepatitis B. Epidemiology and vaccination led to the eradication of the smallpox virus in humans. X-rays became a powerful diagnostic tool for a wide spectrum of diseases, from bone fractures to cancer. In the 1960s, computerized tomography was invented. Other important diagnostic tools developed were sonography and magnetic resonance imaging. The discovery of vitamins virtually eliminated scurvy and other vitamin-deficiency diseases from industrialized societies. New psychiatric drugs were developed. These include antipsychotics for treating hallucinations and delusions, and antidepressants for treating depression. The role of tobacco smoking in the causation of cancer and other diseases was proven during the 1950s (see British Doctors Study). New methods for cancer treatment, including chemotherapy, radiation therapy, and immunotherapy, were developed. As a result, cancer could often be cured or placed in remission. The development of blood typing and blood banking made blood transfusion safe and widely available. The invention and development of immunosuppressive drugs and tissue typing made organ and tissue transplantation a clinical reality. New methods for heart surgery were developed, including pacemakers and artificial hearts. Cocaine and heroin were widely outlawed after being found to be addictive and destructive. Psychoactive drugs such as LSD and MDMA were discovered and subsequently prohibited in many countries. Prohibition of drugs caused a growth in the black-market drug industry, and expanded enforcement led to a larger prison population in some countries. Contraceptive drugs were developed, which reduced population growth rates in industrialized countries and weakened the taboo against premarital sex throughout many western countries. The development of medical insulin during the 1920s helped raise the life expectancy of diabetics to three times what it had been earlier. Vaccines, hygiene and clean water improved health and decreased mortality rates, especially among infants and the young.

Notable diseases
An influenza pandemic, the Spanish flu, killed anywhere from 17 to 100 million people between 1918 and 1919. A new virus, the human immunodeficiency virus (HIV), arose in Africa and subsequently killed millions of people throughout the world. HIV leads to a syndrome called acquired immunodeficiency syndrome (AIDS). Treatments for HIV remained inaccessible to many people living with AIDS and HIV in developing countries, and a cure has yet to be discovered. Because of increased life spans, the prevalence of cancer, Alzheimer's disease, Parkinson's disease, and other diseases of old age increased slightly.
Sedentary lifestyles, due to labor-saving devices and technology, along with the increase in home entertainment and technology such as television, video games, and the internet, contributed to an "epidemic" of obesity, at first in the rich countries, but by the end of the 20th century spreading to the developing world.

Energy and the environment
Energy use was dominated by fossil fuels and nuclear power, the conventional energy sources. Widespread use of petroleum in industry, both as a chemical precursor to plastics and as a fuel for the automobile and airplane, led to the geopolitical importance of petroleum resources. The Middle East, home to many of the world's oil deposits, became a center of geopolitical and military tension throughout the latter half of the century. (For example, oil was a factor in Japan's decision to go to war against the United States in 1941, and the oil cartel, OPEC, used an oil embargo of sorts in the wake of the Yom Kippur War in the 1970s.) The increase in fossil fuel consumption also fueled a major scientific controversy over its effect on air pollution, global warming, and global climate change. Pesticides, herbicides and other toxic chemicals accumulated in the environment, including in the bodies of humans and other animals. Population growth and worldwide deforestation diminished the quality of the environment. In the last third of the century, concern about humankind's impact on the Earth's environment made environmentalism popular. In many countries, especially in Europe, the movement was channeled into politics through Green parties. Increasing awareness of global warming began in the 1980s, commencing decades of social and political debate.

Engineering and technology
One of the prominent traits of the 20th century was the dramatic growth of technology. Organized research and the practice of science led to advancement in the fields of communication, electronics, engineering, travel, medicine, and war. Basic home appliances, including washing machines, clothes dryers, furnaces, exercise machines, refrigerators, freezers, electric stoves and vacuum cleaners, became popular from the 1920s through the 1950s. Radios were popularized as a form of entertainment during the 1920s, which extended to television during the 1950s. The first airplane, the Wright Flyer, was flown in 1903. With the engineering of the jet engine in the 1940s, mass air travel became commercially viable. The assembly line made mass production of the automobile viable. By the end of the 20th century, billions of people had automobiles for personal transportation. The combination of the automobile, motor boats and air travel allowed for unprecedented personal mobility. In western nations, motor vehicle accidents became the greatest cause of death for young people, although the expansion of divided highways reduced the death rate. The triode vacuum tube was invented. New materials, most notably stainless steel, Velcro, silicone, Teflon, and plastics such as polystyrene, PVC, polyethylene, and nylon, came into widespread use for a wide variety of applications. These materials typically offered tremendous gains in strength, temperature tolerance, chemical resistance, and mechanical properties over those known prior to the 20th century. Aluminum became an inexpensive metal, second only to iron in use. Thousands of chemicals were developed for industrial processing and home use.
Space exploration
The Space Race between the United States and the Soviet Union gave a peaceful outlet to the political and military tensions of the Cold War, leading to the first human spaceflight with the Soviet Union's Vostok 1 mission in 1961, and man's first landing on another world, the Moon, with America's Apollo 11 mission in 1969. Later, the first space station was launched by the Soviet space program. The United States developed the first reusable spacecraft system with the Space Shuttle program, first launched in 1981. As the century ended, a permanent manned presence in space was being established with the ongoing construction of the International Space Station. In addition to human spaceflight, unmanned space probes became a practical and relatively inexpensive form of exploration. The first artificial satellite, Sputnik 1, was launched by the Soviet Union in 1957. Over time, a massive system of artificial satellites was placed into orbit around Earth. These satellites greatly advanced navigation, communications, military intelligence, geology, climate science, and numerous other fields. Also, by the end of the 20th century, unmanned probes had visited the Moon, Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, and various asteroids and comets. The Hubble Space Telescope, launched in 1990, greatly expanded our understanding of the Universe and brought brilliant images to TV and computer screens around the world. The Global Positioning System, a series of satellites that allow land-based receivers to determine their exact location, was developed and deployed.

Digital revolution
A technological revolution began in the late 20th century, variously called the Digital Revolution, the information revolution, the electronics revolution, the microelectronic revolution, the Information Age, the silicon revolution, the Silicon Age, or the third industrial revolution. The first transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The MOSFET (metal-oxide-semiconductor field-effect transistor), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in November 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. The widespread adoption of MOSFETs revolutionized the electronics industry; the MOSFET became the fundamental building block of the Digital Revolution and "the base technology" of the late 20th to early 21st centuries, and went on to become the most widely manufactured device in history. Semiconductor materials were discovered, and methods of production and purification were developed for use in electronic devices. Silicon became one of the purest substances ever produced. The wide adoption of the MOSFET led to silicon becoming the dominant manufacturing material during the late 20th century to early 21st century, a period that has been called the Silicon Age, similar to how the Stone Age, Bronze Age and Iron Age were defined by the dominant materials during their respective ages of civilization. The MOS integrated circuit chip (a silicon integrated circuit chip built from MOSFETs) revolutionized electronics and computers. The MOS chip was invented in the early 1960s.
The silicon-gate MOS chip, developed by Federico Faggin in 1968, was the basis for the first single-chip microprocessor, the Intel 4004, in 1971. MOS integrated circuits and microprocessors led to the microcomputer revolution, the proliferation of the personal computer in the 1980s, and then cell phones and the public-use Internet in the 1990s. Metal-oxide-semiconductor (MOS) image sensors, which first began appearing in the late 1960s, led to the transition from analog to digital imaging, and from analog to digital cameras, during the 1980s–1990s. Discrete cosine transform (DCT) coding, a data compression technique first proposed in 1972, enabled practical digital media transmission in the 1990s, with image compression formats such as JPEG (1992), video coding formats such as H.26x (1988 onwards) and MPEG (1993 onwards), audio coding standards such as Dolby Digital (1991) and MP3 (1994), and digital television standards such as video-on-demand (VOD) and high-definition television (HDTV). The number and types of home appliances increased dramatically due to advancements in technology, the wide adoption of MOSFETs, electricity availability, the transition from analog to digital media, and increases in wealth and leisure time. The microwave oven became popular during the 1980s and had become a standard household appliance by the 1990s. Cable and satellite television spread rapidly during the 1980s and 1990s. Personal computers began to enter the home during the 1970s–1980s as well. Video games were popularized during the late 1970s to 1980s, with the golden age of arcade video games. The age of the portable music player was enabled by the development of the transistor radio and the 8-track and cassette tapes in the 1960s, which slowly began to replace record players, culminating in the Sony Walkman in the late 1970s. These were in turn replaced by the digital compact disc (CD) during the 1980s to 1990s. The proliferation of the MDCT-based MP3 audio coding format on the Internet during the mid-to-late 1990s made digital distribution of music possible. Video cassette recorders (VCRs) were popularized in the 1970s, but by the end of the 20th century, DVD players were beginning to replace them, making VHS obsolete by the end of the first decade of the 21st century. There was a rapid growth of the telecommunications industry towards the end of the 20th century, driven by the development of large-scale integration (LSI) complementary MOS (CMOS) technology, information theory, digital signal processing, and wireless communications such as cellular networks and mobile telephony.

Religion
The Vatican II council was held from 1962 to 1965 and resulted in significant changes in the Catholic Church. The Wahhabi sect of Sunni Islam gained in influence with the growth of Saudi Arabia. Multiple new religious movements were founded, including the Nation of Islam, Scientology, and the Pentecostal movement. Atheism became considerably more common, both in secular Western countries and in Communist countries with a policy of state atheism.

Economics
The Great Depression was a worldwide economic slowdown that lasted throughout the early 1930s. The Soviet Union implemented a series of five-year plans for industrialization and economic development. Most countries abandoned the gold standard for their currencies. The Bretton Woods system involved currencies being pegged to the United States dollar; after the system collapsed in 1971, most major currencies moved to floating exchange rates.
See also
20th-century inventions
Death rates in the 20th century
Infectious disease in the 20th century
Modern art
Short twentieth century
Timelines of modern history
List of 20th-century women artists
List of notable 20th-century writers
List of 20th-century American writers by birth year
List of battles 1901–2000
List of stories set in a future now past

References

Sources
Climate Change 2013 Working Group 1 website.

Further reading
Brower, Daniel R. and Thomas Sanders. The World in the Twentieth Century (7th ed., 2013).
CBS News. People of the Century. Simon and Schuster, 1999.
Grenville, J. A. S. A History of the World in the Twentieth Century (1994).
Hallock, Stephanie A. The World in the 20th Century: A Thematic Approach (2012).
Langer, William. An Encyclopedia of World History (5th ed., 1973); highly detailed outline of events.
Morris, Richard B. and Graham W. Irwin, eds. Harper Encyclopedia of the Modern World: A Concise Reference History from 1760 to the Present (1970).
Pindyck, Robert S. "What we know and don't know about climate change, and implications for policy." Environmental and Energy Policy and the Economy 2.1 (2021): 4–43.
Pollard, Sidney, ed. Wealth and Poverty: An Economic History of the 20th Century (1990), 260 pp; global perspective.
Stearns, Peter, ed. The Encyclopedia of World History (2001).

External links
The 20th Century Research Project
Slouching Towards Utopia: The Economic History of the Twentieth Century
Discovering Literature: 20th century at the British Library

2nd millennium
Centuries
Late modern period
10639106
https://en.wikipedia.org/wiki/Radvd
Radvd
The Router Advertisement Daemon (radvd) is an open-source software product that implements link-local advertisements of IPv6 router addresses and IPv6 routing prefixes using the Neighbor Discovery Protocol (NDP) as specified in RFC 4861.

Daemon
The Router Advertisement Daemon is used by system administrators in stateless address autoconfiguration of network hosts on Internet Protocol version 6 networks. When IPv6 hosts configure their network interface controllers, they multicast router solicitation (RS) requests onto the network to discover available routers. Radvd answers the requests with router advertisement (RA) messages. In addition, radvd periodically multicasts RA packets to the attached link to update network hosts. The router advertisement messages contain the routing prefix used on the link, the link maximum transmission unit (MTU), and the address of the responsible default router. Radvd also supports the recursive DNS server (RDNSS) and DNS search list (DNSSL) options for NDP published in RFC 8106 (see the sample configuration below).

See also
Dynamic Host Configuration Protocol (DHCP)
Domain Name System (DNS)
Netsh on Microsoft Windows covers similar functionality

References

External links
Radvd web site
Source code

IPv6
Free network-related software
Free software programmed in C
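The prefixes and options described in the Daemon section above are declared in radvd's configuration file, conventionally /etc/radvd.conf. The following is a minimal sketch rather than a drop-in configuration: the interface name eth0 is a placeholder, and the addresses are drawn from the 2001:db8::/32 documentation prefix.

interface eth0
{
    # Emit periodic and solicited router advertisements on this link
    AdvSendAdvert on;
    # Advertise the link MTU to hosts
    AdvLinkMTU 1500;
    # Prefix from which hosts derive addresses via SLAAC
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;       # the prefix is on-link
        AdvAutonomous on;   # hosts may autoconfigure addresses from it
    };
    # Recursive DNS server and DNS search list options (RFC 8106)
    RDNSS 2001:db8:1::1 { };
    DNSSL example.com { };
};

After editing the file, the daemon is typically restarted (or signaled to reload its configuration) so that the new advertisements take effect; hosts on the link then learn the prefix, MTU, and DNS options from the periodic RA messages.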
7673119
https://en.wikipedia.org/wiki/Peter%20Hacker%20%28cricketer%29
Peter Hacker (cricketer)
Peter Hacker (born 16 July 1952) is an English former cricketer. He was a right-handed batsman and a left-arm medium-fast bowler. Hacker made his County Championship debut for Nottinghamshire in 1975, having played against the touring Pakistanis nearly a year previously. He had represented the Second XI since 1973. Hacker spent seven years at Nottinghamshire, between 1975 and 1981, and spent some time in the 1979/80 season in South Africa playing in the Castle Bowl for the fourth-placed Orange Free State. He found himself out of the Nottinghamshire team after the following season, joining Derbyshire in time for the beginning of the 1982 season. The 1982 season was Hacker's final season of first-class cricket. Hacker was part of the Cheshire side that finished runners-up in the 1984 Minor Counties Championship and, in 1993, joined Lincolnshire. Hacker was a tail-end batsman for Nottinghamshire, moving up to the lower order at Derbyshire. Hacker took part in one match during the warm-ups for the 1975 World Cup, and in the semi-finals of the Tilcon Trophy in 1979 and 1981.

External links
Peter Hacker at Cricket Archive

1952 births
English cricketers
Living people
Derbyshire cricketers
Nottinghamshire cricketers
Free State cricketers
Lincolnshire cricketers
Cheshire cricketers
8989274
https://en.wikipedia.org/wiki/University%20B.D.T.%20College%20of%20Engineering
University B.D.T. College of Engineering
The University B.D.T. College of Engineering (UBDTCE) is an engineering college located in Davangere, India, and a constituent college of Visvesvaraya Technological University (VTU). UBDTCE is one of the oldest engineering colleges in Karnataka.

Overview
University B.D.T. College of Engineering (UBDTCE), Davangere, is located in the central part of Karnataka. Started in 1951, the college was named after Brahmappa Devendrappa Tavanappanavar (BDT), after B.T. Chandranna donated 1.5 lakh rupees for the construction of the building in memory of his uncle (Brahmappa Tavanappanavar) and father (Devendrappa Tavanappanavar). The then-maharaja of Mysore, Sri Jayachamarajendra Wodeyar Bahadur, laid the foundation stone for the building on 7 August 1951 and inaugurated the building on 24 September 1956. At its start in 1951 it had only one branch, in civil engineering. Subsequently, other branches of engineering were incorporated, such as Mechanical and Electrical & Electronics (in 1957), Electronics & Communication (in 1972), Computer Science and Instrumentation Technology (in 1984) and Industrial & Production (in 1996). The present intake of the college from all these branches is 390 at the undergraduate level. A postgraduate course in Production Engineering Systems Technology was started in the year 1987. Later, the college was transferred to Kuvempu University as a constituent engineering college on 1 June 1992, and hence became University B.D.T. College of Engineering. In 2003, seven additional postgraduate courses were introduced in different disciplines. Currently the total intake at the postgraduate level is 175. The college has produced about 40 Ph.D.s, and many research scholars are pursuing their doctoral degrees in various frontiers of engineering and technology. At present about 2,000 students are studying in this college. As a consequence of the formation of Davangere University, the college became one of its constituent colleges on 18 August 2009. With the intention of the overall development of the college, the government of Karnataka then transferred it to the Visvesvaraya Technological University (VTU), Belgaum, on 24 February 2011, as a constituent engineering college. The VTU intends to start MBA and MCA courses at the college. The college undertakes consultancy work in civil, electrical and electronics, and other fields of engineering. The college was one among the fourteen institutions in the state of Karnataka to receive the World Bank-aided TEQIP Project Phase-I and utilized a grant of nine crore rupees for improving its infrastructural facilities and enhancing the quality of its academic standards. The institution is also expecting the World Bank-aided TEQIP Project Phase-II to further improve its research and development activities and infrastructure. The college is approved by AICTE, and its undergraduate courses were accredited by the NBA in the year 2001 for five years. Further, the Computer Science and Engineering and Electronics and Communication Engineering programmes were accredited by the NBA in the year 2009 for a period of three years. For the other departments, the NBA accreditation process is still in progress. The college library is equipped with a fair number of rare collections of old books and was further enriched with the latest volumes during TEQIP Phase-I. The students are actively involved in co-curricular and extracurricular activities.
The annual cultural festival "Chaitra" takes place every year, creating a platform for exhibiting the hidden talents of students. The institution is a member of the Red Cross Society, "Thern". Other facilities available are the Science and Technology Entrepreneurs' Park (STEP), the Industry Institute Partnership Cell (IIPC), ISTE staff and student chapters, and a co-operative society, among others. Further, the institution has both boys' and girls' hostels to accommodate students from outside. The Placement & Training Center is actively involved in placing students in reputed organizations. Various companies such as TCS, Mindtree, Infosys, HCL, SLK Software, Siemens, Nokia, and Zindal visit the college regularly. The college is completing 60 years of fruitful existence in 2011 and is planning to celebrate its diamond jubilee soon. The college has a National Cadet Corps (NCC) unit, and the cadets actively participate in the activities it organises. In 2017, the unit got a permanent NCC officer after nearly three and a half decades.

Admission
Admission to the college is made entirely through the Common Entrance Test (CET) and PGCET conducted by the Karnataka Examination Authority (KEA). Selection is based on merit and reservation in accordance with the policies of the government of Karnataka. The CET is generally conducted in April/May each year and the PGCET in July/August.

Programmes offered
Graduate courses in engineering
Electronics and Communications Engineering
Electrical and Electronics Engineering
Electronics and Instrumentation Engineering
Mechanical Engineering
Industrial Production Engineering
Civil Engineering
Computer Science and Engineering
Postgraduate courses in engineering
Computer-aided design of structures and substructures
Computer Science and Engineering
Digital Communication and Networking
Environmental Engineering
Machine Design
Power System and Power Electronics
Production Engineering Systems and Technology
Thermal Power Engineering
Postgraduate courses
M.C.A.
M.B.A.

References

External links
Official website of the College - www.ubdtce.org
Alumni Association - www.ubdtaa.com

Engineering colleges in Karnataka
Education in Davangere
Universities and colleges in Davanagere district
1951 establishments in Mysore State
Educational institutions established in 1951
45335863
https://en.wikipedia.org/wiki/Apotheon
Apotheon
Apotheon is a platform game developed and published by Alientrap for Microsoft Windows, OS X, Linux, and PlayStation 4. The game was released on Microsoft Windows and PlayStation 4 on February 3, 2015, and on OS X and Linux on February 10, 2015. Apotheon is Alientrap's second commercial game and utilizes a unique art style based on ancient Greek pottery, particularly the black-figure pottery style.

Gameplay
Apotheon is a 2D action game with an art style and heroic narrative based on Ancient Greek mythology. The player controls Nikandreos, a young warrior who, with the help of the goddess Hera, fights against the Olympian gods, using various weapons of the Ancient Greeks.

Plot
In Ancient Greece, the Gods have begun to punish the humans of Earth, resenting their arrogance and defiance. Zeus, the father of the Gods, has forced his children to stop giving their gifts of life to the Earth, so that humans are soon deprived of animals, food, and even daylight. With most of the Earth cast in darkness, mankind begins to fall prey to the savage destruction of its cities. Meanwhile, in the village of Dion, invaders ransack the entire area and lay waste to most of the village. A young warrior named Nikandreos awakens to find his home destroyed and many of the raiders killing Dion's soldiers and citizens. Arming himself, and with the help of the surviving soldiers, Nikandreos is able to cast out the invading forces and slay their leader. In the village temple, Nikandreos meets face-to-face with Hera, the wife of Zeus, who congratulates him for his victory in saving the village. Hera informs Nikandreos of Zeus' anger with the humans, as well as her hatred for his many affairs with other gods and mortals. Wanting revenge, as well as a way to help Nikandreos save his people, Hera tasks him with slaying Zeus and his siblings so that Nikandreos might bring back their gifts and save humanity. Nikandreos agrees and ascends Mount Olympus to begin his hunt against the Gods of Greece. Nikandreos first enters the Agora of Olympus, where he must find and take the gifts of the gods Artemis, Demeter, and Apollo. Heading into the Forest of Artemis, Nikandreos completes many hunts of fantastical creatures until he gains the goddess' attention. After a short dialogue, Artemis attacks Nikandreos, and in the fight that follows the two are alternately turned into deer and hunter. The battle ends with Nikandreos' victory and Artemis' death, and as the goddess of the hunt vanishes into nothingness, Nikandreos takes her magical bow and heads towards Apollo. Nikandreos reaches Apollo and attempts to slay him, but he is knocked unconscious and imprisoned within a cell. With the help of the god Helios, who had also been imprisoned by Apollo, Nikandreos escapes, slays all of Apollo's captains, and frees Daphne from her imprisonment as a tree. Nikandreos then slays Apollo and takes his lyre. However, when Nikandreos finds Demeter, she reveals she cannot give her sheaf, feeling sorrow over the taking of her daughter Persephone. Nikandreos then descends into the lair of Hades to find Persephone, Hades' wife, who, instead of fighting back, willingly gives her seeds to him so that he may bring life back to Earth. In return, Demeter sympathizes with Nikandreos and hands over her sheaf as well. While Nikandreos is attempting to head back to his village, though, Zeus halts him, accuses him of arrogance and of attempting to become a god, and thrusts him back to Earth.
Nikandreos lands to find Dion in more ruin than before, with the wrath of the Gods now extending to physical aggression on Earth. Hera apologizes for her earlier refusal to help, but warns Nikandreos that Zeus has now tasked the remaining powerful gods Poseidon, Ares, and Athena to bring fury and destroy all of humanity. Heading toward the Acropolis of Olympus, Nikandreos goes to kill each of the final gods, becoming more and more powerful in turn by stripping them of their powers. However, upon returning once more to Dion, Nikandreos finds his home laid waste and his entire people annihilated. Hera tells him, though, that the time to strike Zeus is now, with most of his children and brethren slain and the power now in Nikandreos' hands. In a final raid on the Fortress of Zeus, Nikandreos boldly ascends to the heavens of Olympus. He finds Hera in chains, Zeus having learned of her betrayal, and a final battle between the man and the god ensues. Nikandreos slays Zeus, claiming the power of his thunderbolt. Hera gloats over her victory and demands that Nikandreos free her. However, Nikandreos is sick of the rule of Gods and of Hera's manipulations. At this point the player can choose to leave Hera or to kill her with Zeus' thunderbolt. Descending back to Earth, Nikandreos finds a giant-sized Zeus bragging and roaring of his own victories as a God, refusing to believe in his own defeat by a mere mortal. With all of the powers of the Gods within him now, however, Nikandreos also grows to the size of a God, and brutally fights and finally kills Zeus. Seeing the Earth in sad ruins, Nikandreos roams the desolate landscape. Finding some clay amid the rubble, Nikandreos forms a human being and gives it life, signaling the rebirth of humanity and the fall of the Gods. It is left open whether Nikandreos will be a loving god himself, or grow disillusioned with his creation like the gods before him. The name Apotheon means "exalted to the state of godhood", which reflects the deification of Nikandreos, the protagonist, at the end of the game. Release In August 2015, Alientrap joined forces with IndieBox, a monthly subscription box service, to produce an exclusive, custom-designed, physical release of Apotheon. This limited, individually numbered collector's edition included a flash drive with a DRM-free copy of the game, the official soundtrack, an instruction manual and Steam key, and several custom-made collectible items. The game was also later released for free to PlayStation Network subscribers. Reception Apotheon received generally favorable reviews from critics, with scores on review aggregator Metacritic of 78/100 and 76/100 for the Microsoft Windows and PlayStation 4 versions respectively. Critics generally praised the game's aesthetic and entertaining combat. Writing for Game Informer, Bryan Vore called Apotheon a "damn fun game and one of my surprise early favorites of 2015". Patrick Hancock of Destructoid felt it "more than backs up its aesthetic prowess with rewarding combat and exploration systems in place". Many reviewers noted a Metroidvania influence; John-Paul Jones of GameWatcher applauded its "homage to the engrossing Castlevania and Metroid games pioneered back in the 8-bit days", and Nicholas Plouffe of Canadian Online Gamers also recommended the game to fans of the Metroidvania genre. Writing for Hardcore Gamer, Alex Carlson felt the gameplay mechanics were shallow, concluding, "Apotheon is a classic case of style over substance".
Philippa Warr of Rock, Paper, Shotgun described Apotheon as "an attractive but shallow game whose more interesting ideas are marred by an unwieldy control system". Other reviewers of the PlayStation 4 version noted crashes and bugs that hampered their experience. Soundtrack Push Square named Apotheon's soundtrack the second-best PlayStation soundtrack of 2015. Paste Magazine placed it at number 3, while Fact Magazine and IGR placed it at number 6. Notes References External links 2015 video games Linux games MacOS games Metroidvania games Monochrome video games Multiplayer and single-player video games Platform games PlayStation 4 games PlayStation Network games Video games set in Greece Video games set in antiquity Video games with silhouette graphics Video games developed in Canada Video games based on Greek mythology Windows games
26953384
https://en.wikipedia.org/wiki/Online%20diary%20planner
Online diary planner
The early 1990s marked the advent of online diary planners. On the face of it, people who must attend countless meetings would seem to need online diary planners the most. Actual trends, however, show that the real users are those in the habit of recording and chronicling their activities all the time. Such people may be executives, event managers, doctors, students, and people from various walks of life. Terminally ill patients have often taken to writing online diaries. Through the medium of such online diaries, they have kept many others in the know of their personal plans, thoughts, medical treatment procedures, and more. Since all sorts of people have been using online diary planners since their advent in the 1990s, the demand for capable online organizers and personal information managers (PIMs) has been growing steadily with time. Initially, web-based Filofax-style applications came into being to satisfy users' organizing needs, but such applications had their disadvantages, forcing users to turn to solutions like ACT! and Time and Chaos. As these too proved inadequate, those in need of online diary planners started looking towards Microsoft Outlook. Online calendars: Upgraded online diary planners Online calendars, a newer and upgraded version of online diary planners, soon appeared to replace them. The main difference between the newer online calendars and the older handheld computers and PIMs was that while the older devices stored all of a person's appointments and meeting schedules on their computer or handheld device, the newer calendars stored all information on the Internet. Hence, online calendars were automatically more accessible and less cumbersome, as they could be reached anytime, anywhere, from any PC, laptop or other machine connected to the Internet, and users did not need to carry them around as they did the older devices. The older devices also required users to make regular backups of their data; if users did not make a backup and their Psion, Palm, Pocket PC or other device crashed, they would be unable to recover any data. And even if they did recover some data from a backup, to where would they restore it? To another Psion palmtop? Psion soon stopped producing palmtops. Hence, diarists turned to Outlook. Right through the first half of the first decade of the 21st century, Outlook was considered an excellent application that could be synchronized with Pocket PCs. First generation online calendars However, some of the very first online calendars that were developed did not perform well as online diary planners. Though initially impressive with their multifaceted social networking abilities, they could not manage to impress their users over a long period of time. They helped users to add appointments straightaway, without sorting through dropdown menus to choose meeting dates and times. That way, the online calendars functioned like secretaries: users just had to key their meeting schedules into the application and the online diary planner would flash the data and remind them in time. But users soon began to be put off by these first-generation online calendars. They complained that they could view only one month's appointments on the calendar.
This meant that if they wanted to schedule a conference more than a month away, they would be unable to do so, because the application allowed viewing a calendar only 30 days at a time. Also, if users set a meeting date far in the future, months from the date on which the details were entered, the online calendar would not show the details until that month arrived. This disadvantage of the initially launched online calendars became a bane for most users. Then there was the privacy issue: such open calendars, intended to help users connect with as many people as possible, meant that others would be able to view the users' calendars. Moreover, these online diary planners were unable to warn their users if more than one appointment was made within the same time slot; in other words, they could not prevent double bookings. Further, many people found the idea of an online calendar strange because they felt it was impossible to access the Internet everywhere. Most first-generation online calendars could not carve out a decent market for themselves and thus had to give way to calendars created by giants of the information technology industry such as Microsoft and Yahoo!. Users also disliked the appearance and overall look and feel of first-generation online calendars. Microsoft Outlook and Yahoo! Calendar These shortcomings of the first-generation online calendars prompted most users to go back to tried and tested ways. So people began to use Microsoft Outlook and synchronize it with portable devices. However, this approach did not work for everybody. The Outlook applications on PCs, Palm handheld computers, Psion palmtops, and diaries on cell phones that had been around for quite some time could not meet the expectations of users. Despite helping users to automate the conference management process to some extent, these devices lacked the ability to guard against the common problem of double-booking appointments and meetings. So the Yahoo! Calendar was visited once again. Yahoo! had been popular since the last decade of the 20th century, and always came with downloadable, compatible software that helped to keep the office computer, the Yahoo! calendar and the handheld online diary working in tandem. Yahoo! enabled synchronization with Lotus Organizer, Outlook, ACT!, Outlook Express and Palm. But there was one hitch: the Yahoo! software had to be downloaded to the office PC if a user wanted to use it from their office, and downloading software from the Internet is discouraged or outright forbidden by many companies. This meant that users could not use the Yahoo! calendar in many offices. This restriction, together with a design that was not particularly eye-catching and did not work easily with all browsers, is the chief reason the Yahoo! calendar could not hold onto its number one position as the most user-friendly online calendar and diary planner, and it began to experience a drop in popularity. Google Calendar Google Calendar launched with a mix of unique and existing features. Users can synchronize their calendar with Microsoft Outlook, making use of the conventional meeting request procedure found in other calendaring systems. When a request is accepted, the proposed date shows on the recipient's Google Calendar. A reminder for the event can be sent to the involved parties' Gmail accounts before the event begins.
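As an illustration of the meeting-request flow just described, the sketch below creates an event with an attendee and an email reminder through the Google Calendar API v3 Python client. It is a minimal example rather than Google's own documentation: the credential setup is omitted, and the event details and attendee address are placeholder assumptions.

from googleapiclient.discovery import build

def create_meeting(creds):
    # Build a Calendar API client from previously obtained OAuth2 credentials.
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": "Project review",  # hypothetical meeting title
        "start": {"dateTime": "2024-05-01T10:00:00", "timeZone": "America/New_York"},
        "end": {"dateTime": "2024-05-01T11:00:00", "timeZone": "America/New_York"},
        "attendees": [{"email": "colleague@example.com"}],  # placeholder address
        # Send an email reminder 30 minutes before the event begins.
        "reminders": {"useDefault": False,
                      "overrides": [{"method": "email", "minutes": 30}]},
    }
    # sendUpdates="all" emails the meeting request to the attendees; once an
    # attendee accepts, the event appears on their own calendar.
    return service.events().insert(calendarId="primary", body=event,
                                   sendUpdates="all").execute()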
Google Calendar sends meeting alerts as SMS to users' cell phones and smartphones via the official apps. Users are also able to download public holidays and important dates from the public calendar gallery. Google Calendar permits users to drag and drop events and happenings from one calendar, such as Microsoft Outlook, to another. However, Google Calendar has its flaws: users must have an active Google account to use it, and the calendar may, at times, double-book users. Other online diary planners and calendars Currently, the Internet is flooded with electronic diaries and online diary planners of various types and abilities. Myriad online calendars, targeting different people with different interests, have been developed over the last couple of years. Some of these online calendars and online diary planners are AirSet, Meeting Diary, and Mypunchbowl. AirSet is a multipurpose online diary planner that allows connecting with colleagues, friends and family. Mypunchbowl is basically a party and wedding planner. Meeting Diary is an online diary planner for planning meetings, conferences and events. As the goal of every online diary planner is to help people plan for specific occasions and needs, online diary planners can be classified as planners for parties; for families and friends; for meetings, conferences and events; for personal use; for trips; and for special interests. Online diary planners for offices and teams DayViewer.com is an online calendar-based diary planner that combines task and time management. Its Team Pages area is designed for teams and offices to help with team collaboration, improve productivity and coordination, and alleviate communication issues by providing a discussion area for the team. DayViewer is built with reactive web technology, meaning that pages are updated without the need to refresh the screen. Online diary planners for small businesses DiaryBooker.com is designed for managing appointments for small businesses such as hairdressers, beauticians, dentists, golf professionals, personal trainers, yoga instructors etc. At its most basic level, the concept is that the business owner can book the appointments or allow the customers to book their own appointments via the web-based interface. The customer gets an SMS reminder before the appointment, and the business owner can keep an eye on the appointments from any Internet-enabled device. Online diary planners for parties Mypunchbowl, Purpletrail, Bestpartyever, and Partypotato are some online diary planners that serve as party planners. Mypunchbowl also employs the services of experts who advise people on party and wedding planning. Online diary planners for families and friends Cozi.com and AirSet are online diary planners well suited to connecting with family members. Online diary planners for meetings, conferences and events Meeting Diary, MeetingWizard, etc. are some popular online diary planners which are frequently used when planning meetings, conferences and events. Meeting Diary Meeting Diary is an online diary planner that manages meetings for users. It not only allows users to schedule meetings, but also gives them ready access to a wide range of meeting-related information.
Meeting Diary has a large data storage capacity and is capable of storing every kind of meeting-related information. It is a secure web-based application: users' information remains confidential and inaccessible to unauthorized people. Meeting Diary is a platform that enables managers to organize meetings and conferences swiftly and seamlessly. Meeting Wizard Meeting Wizard is meeting scheduling software, available free of cost, that helps the meeting scheduling process run smoothly. It allows importing emails and works across different time zones. Personal online diary planners Diary.com, My Personal Diary, The Journal, and rememberthemilk.com are online diary planners that are used for personal planning and organizing. Diary.com is both a private and a public platform, at once extremely personal and shareable. Online diary planners for trips and holidays Meet Me In, Triporama, Priceline.com, Tripit, Triphobo.com, Hilton e-Events, Groople, Trip Planner, Hotel Planner, TripHub, and GroupAbout are some of the most famous online diary planners that help users to plan trips and holidays. CarnivalConnections.com, InterContinental Hotels Group and Carlson Hotels Worldwide help travel groups by providing them with customized websites. Triphobo.com, Groople and GroupAbout and most other online travel planners provide information on hotel bookings, sightseeing, leisure sports and activities, transportation and travel packages. Groople and GroupAbout have devised easy methods by which travel groups can build their own websites so that the websites address each travel group's own peculiar needs. Through these tailor-made websites, travel groups can link up with each other to share travel tips. Groople and GroupAbout have liaisons with various travel agencies and companies so that they can offer value-for-money and high-end travel and tour services to their users. Groople has liaisons with Kayak.com, Travelocity and SideStep, and GroupAbout has liaisons with Orbitz and SideStep. Hotel Planner is an online travel planner. The site allows travelers to name the place they wish to travel to as well as to announce their other travel wishes, and all such information is recorded on the site. Hotel Planner then asks hotels in that place to give quotations online for room and other charges. If the travel group agrees to a particular quotation, Hotel Planner designs a tailor-made website for the travelers so that they can book the hotel rooms online. Priceline.com is another online diary planner that helps travelers plan their trips smoothly. Travel groups of five to nine can book accommodation by sending their quotations or by agreeing straightaway to posted rates. However, if there are ten or more travelers in the group, Priceline recommends that they ask the hotels directly for quotations. Meet Me In allows travel groups of up to four travelers to travel from two different places, get together at one place and avail of a discounted travel package. e-Events of Hilton Hotels permits travel groups to see the prices of rooms, book rooms and conference halls and construct a tailor-made site for themselves. TripHub has liaisons with Alaska Airlines and Orbitz.
When travelers approach TripHub, the site recommends travel companies and agencies that would meet the travelers' requirements and address their specific travel needs. Triporama allows tourists to interact with one another and discuss tour itineraries so that members of a travel group can arrive at a consensus when finalizing travel plans. Triporama also connects travelers to the web portals of travel companies, such as CruiseShipCenters International, that promote package tours on Triporama. Trip Planner enables travelers to get an accurate travel plan, permitting users to specify their starting and ending spots and tentative departure and arrival times. In association with the New York City Transit Authority, Trip Planner aims to provide detailed and comprehensive travel itineraries to the travelers who avail of its services. Online travel planners have also been developed by hoteliers and sightseeing agencies. CarnivalConnections.com, a website that belongs to Carnival Cruise Lines, was developed to help travel groups plan their tours. InterContinental Hotels Group and Carlson Hotels Worldwide provide travel groups with tailor-made websites so that travelers can book rooms and pay for them over the Internet. However, many opine that booking trips via online travel planners may not always be the best approach: larger groups are bound to get better discounts if they do not restrict themselves to the Internet, as booking trips on a face-to-face basis or through travel agencies might be more lucrative for them. Special interest online diary planners Marco software is an online diary planner that targets wine fans, gardeners and photographers. References Calendaring software
41781486
https://en.wikipedia.org/wiki/Software-defined%20perimeter
Software-defined perimeter
A software-defined perimeter (SDP), also called a "black cloud", is an approach to computer security which evolved from work done at the Defense Information Systems Agency (DISA) under the Global Information Grid (GIG) Black Core Network initiative around 2007. The software-defined perimeter (SDP) framework was developed by the Cloud Security Alliance (CSA) to control access to resources based on identity. Connectivity in a software-defined perimeter is based on a need-to-know model, in which device posture and identity are verified before access to application infrastructure is granted. Application infrastructure is effectively “black” (a DoD term meaning the infrastructure cannot be detected), without visible DNS information or IP addresses. The inventors of these systems claim that a software-defined perimeter mitigates the most common network-based attacks, including: server scanning, denial of service, SQL injection, operating system and application vulnerability exploits, man-in-the-middle, pass-the-hash, pass-the-ticket, and other attacks by unauthorized users. Background The premise of the traditional enterprise network architecture is to create an internal network separated from the outside world by a fixed perimeter consisting of a series of firewall functions that block external users from coming in but allow internal users to get out. Traditional fixed perimeters help protect internal services from external threats via simple techniques for blocking visibility and accessibility from outside the perimeter to internal applications and infrastructure. But the weaknesses of this traditional fixed perimeter model are becoming ever more problematic: the popularity of user-managed devices and phishing attacks provides untrusted access inside the perimeter, while SaaS and IaaS extend the perimeter into the internet. Software-defined perimeters address these issues by giving application owners the ability to deploy perimeters that retain the traditional model's value of invisibility and inaccessibility to outsiders, but can be deployed anywhere – on the internet, in the cloud, at a hosting center, on the private corporate network, or across some or all of these locations. Architecture In its simplest form, the architecture of the SDP consists of two components: SDP Hosts and SDP Controllers. SDP Hosts can either initiate connections or accept connections. These actions are managed by interactions with the SDP Controllers via a control channel. Thus, in a software-defined perimeter, the control plane is separated from the data plane to enable greater scalability. In addition, all of the components can be redundant for higher availability. The SDP framework has the following workflow. One or more SDP Controllers are brought online and connected to the appropriate optional authentication and authorization services (e.g., PKI, device fingerprinting, geolocation, SAML, OpenID, OAuth, LDAP, Kerberos, multifactor authentication, and other such services). One or more Accepting SDP Hosts are brought online. These hosts connect to and authenticate to the Controllers; however, they do not acknowledge communication from any other host and will not respond to any non-provisioned request. Each Initiating SDP Host that is brought online connects with, and authenticates to, the SDP Controllers.
After authenticating the Initiating SDP Host, the SDP Controllers determine the list of Accepting Hosts with which the Initiating Host is authorized to communicate. The SDP Controller instructs the Accepting SDP Hosts to accept communication from the Initiating Host, along with any optional policies required for encrypted communications. The SDP Controller gives the Initiating SDP Host the list of Accepting Hosts as well as any optional policies required for encrypted communications. Finally, the Initiating SDP Host initiates a mutual VPN connection to all authorized Accepting Hosts. SDP Deployment Models While the general workflow remains the same for all implementations, the application of SDPs can favor certain implementations over others. Client-to-gateway In the client-to-gateway implementation, one or more servers are protected behind an Accepting SDP Host such that the Accepting SDP Host acts as a gateway between the clients and the protected servers. This implementation can be used inside an enterprise network to mitigate common lateral movement attacks such as server scanning, OS and application vulnerability exploits, password cracking, man-in-the-middle, Pass-the-Hash (PtH), and others. Alternatively, it can be implemented on the Internet to isolate protected servers from unauthorized users and mitigate attacks such as denial of service, OS and application vulnerability exploits, password cracking, man-in-the-middle, and others. Client-to-server The client-to-server implementation is similar in features and benefits to the client-to-gateway implementation discussed above. However, in this case, the server being protected runs the Accepting SDP Host software itself, instead of a gateway sitting in front of the server running that software. The choice between the client-to-gateway implementation and the client-to-server implementation is typically based on the number of servers being protected, load balancing methodology, elasticity of servers, and other similar topological factors. Server-to-server In the server-to-server implementation, servers offering a Representational State Transfer (REST) service, a Simple Object Access Protocol (SOAP) service, a remote procedure call (RPC), or any kind of application programming interface (API) over the Internet can be protected from unauthorized hosts on the network. For example, in this case, the server initiating the REST call would be the Initiating SDP Host and the server offering the REST service would be the Accepting SDP Host. Implementing an SDP for this use case can reduce the load on these services and mitigate attacks similar to the ones mitigated by the client-to-gateway implementation. Client-to-server-to-client The client-to-server-to-client implementation results in a peer-to-peer relationship between the two clients and can be used for applications such as IP telephony, chat, and video conferencing. In these cases, the SDP obfuscates the IP addresses of the connecting clients. As a minor variation, a user can also have a client-to-gateway-to-client configuration if the user wishes to hide the application server as well.
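The control-plane logic of the workflow above can be summarized in a few lines of code. The following Python sketch is purely illustrative and is not any vendor's implementation: the host names, the authorization table, and the verify_credentials and notify_accepting_host helpers are hypothetical placeholders, and the mutual TLS/VPN plumbing a real SDP would require is omitted.

AUTHORIZED_HOSTS = {
    # initiating host identity -> accepting hosts it may reach
    "laptop-alice": ["payroll-gateway", "crm-gateway"],
    "build-server": ["artifact-store"],
}

def verify_credentials(host_id, credentials):
    """Placeholder for the optional PKI / SAML / MFA checks made by the controller."""
    raise NotImplementedError

def notify_accepting_host(host, initiator):
    """Placeholder for the controller-to-host control channel."""
    raise NotImplementedError

def handle_initiating_host(host_id, credentials):
    if not verify_credentials(host_id, credentials):
        return None  # unauthenticated hosts receive no response at all
    accepting = AUTHORIZED_HOSTS.get(host_id, [])
    for host in accepting:
        # Tell each Accepting Host to expect this initiator, together
        # with any policies required for encrypted communication.
        notify_accepting_host(host, initiator=host_id)
    # Hand the initiator its list of reachable hosts; the initiator then
    # opens mutual VPN connections to those hosts and to no others.
    return accepting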
SDP Applications Enterprise application isolation For data breaches that involve intellectual property, financial information, HR data, and other sets of data that are only available within the enterprise network, attackers may gain entrance to the internal network by compromising one of the computers in the network and then moving laterally to reach the high-value information asset. In this case, an enterprise can deploy an SDP inside its data center to partition the network and isolate high-value applications. Unauthorized users will not have network access to the protected application, thus mitigating the lateral movement these attacks depend on. Private cloud and hybrid cloud While useful for protecting physical machines, the software overlay nature of the SDP also allows it to be integrated into private clouds to leverage the flexibility and elasticity of such environments. In this role, SDPs can be used by enterprises to hide and secure their public cloud instances in isolation, or as a unified system that includes private and public cloud instances and/or cross-cloud clusters. Software-as-a-service (SaaS) vendors can use an SDP to protect their services. In this implementation, the software service would be an Accepting SDP Host, and all users desiring connectivity to the service would be the Initiating Hosts. This allows a SaaS to leverage the global reach of the Internet without exposing itself to the Internet's global attack surface. Infrastructure-as-a-service (IaaS) vendors can offer SDP-as-a-Service as a protected on-ramp to their customers. This allows their customers to take advantage of the agility and cost savings of IaaS while mitigating a wide range of potential attacks. Platform-as-a-service (PaaS) vendors can differentiate their offering by including the SDP architecture as part of their service. This gives end users an embedded security service that mitigates network-based attacks. A vast number of new devices are being connected to the Internet, and the back-end applications that manage these devices and/or extract information from them can be mission-critical and can act as custodians for private or sensitive data. SDPs can be used to hide these servers and the interactions with them over the Internet to provide improved security and up-time. See also Advanced Encryption Standard Global Information Grid IPsec Public-key infrastructure Transport Layer Security References External links Cloud Security Alliance “Introduction to the Software Defined Perimeter Working Group” Article from GCN - 1105 Public Sector Media Group "Black Cloud Darkens the Enterprise to all but Authorized Devices" Article from Light Reading - "Verizon and Vidder put SD-Perimeter around Enterprise Security" Article from CSO - "Goodbye NAC. Hello, software defined perimeter" IEEE "Software-Defined Perimeters: An Architectural View of SDP" Article from ComputerWeekly - "Gas distribution network SGN invests in software-defined perimeter" Computer security
42584349
https://en.wikipedia.org/wiki/EQUiSat
EQUiSat
EQUiSat was a 1U (one unit) CubeSat designed and built by Brown Space Engineering (formerly Brown CubeSat Team), an undergraduate student group at Brown University's School of Engineering. EQUiSat's mission was to test a lithium iron phosphate (LiFePO4) battery technology that had never flown in space, which powered an optical beacon designed to be visible from Earth. The satellite deorbited on 26 December 2020. Mission The primary mission of EQUiSat was to prove the accessibility of space to the masses through both demonstration of a low-cost DIY CubeSat and educational outreach. To further the primary mission, Brown Space Engineering maintains EQUiSat as a low-cost and rigorously documented open source project, allowing others to replicate EQUiSat's subsystems without large budgets or extensive expertise. The total cost of parts to reproduce EQUiSat is around $5,000, a very low budget compared to other CubeSats. The team espouses a DIY philosophy to minimize costs, while utilizing production processes that are widely achievable by and accessible to non-professionals. EQUiSat's second mission was to test the viability of operating LiFePO4 batteries in space. A LiFePO4 battery had never been flown in space, but the chemistry carries certain advantages over batteries of different chemistries, such as high current draw capability with less risk of thermal runaway than lithium-ion batteries. Outreach The other way for Brown Space Engineering to increase the accessibility of space is by educating youth on the design and role of satellites in society. Brown Space Engineering is cooperating with schools and museums across the country to develop an educational outreach plan to teach students and the general public about EQUiSat and the impact of it and similar satellites on the scientific advancement of society. Upon launch, the opportunity to easily locate, hear, and see EQUiSat in the night sky provided an important tangible component to these outreach efforts. Another mode of outreach is the availability of EQUiSat source code and CAD files online. Payload EQUiSat's primary payload was a high power LED array, which when flashed appeared on Earth as bright as the North Star. The payload was used to engage those on Earth, especially in pursuit of the project's primary mission, which was to make space more accessible to the public. The secondary payload was the lithium iron phosphate (LiFePO4) batteries that powered the LEDs. The secondary mission of EQUiSat was to test the viability of these batteries, which had never been flown in space, making the batteries more than power storage units but a payload themselves. Launch On February 6, 2014, NASA announced that it would launch EQUiSat as part of the CubeSat Launch Initiative (CSLI). EQUiSat launched aboard an International Space Station (ISS) resupply mission on May 21, 2018. It was released into orbit from the ISS on July 13, 2018. EQUiSat was placed into a 400 km altitude orbit at 52° inclination. Subsystems Optical Beacon (Flash) The flash subsystem was an optical beacon allowing those on Earth to visually track EQUiSat after launch. The beacon was an array of four extremely bright LEDs (~10,000 lumens each) that were flashed for 0.1 seconds three times in rapid succession every minute when EQUiSat was in the night sky. The array had an apparent magnitude of 3, approximately the same intensity as Polaris.
To further increase light intensity for observers on Earth, the high power LED array was mounted entirely on one panel that was directed towards Earth's northern hemisphere using passive attitude control. Radio A transceiver onboard EQUiSat transmitted a signal in the UHF 70 cm amateur radio band at 435–438 MHz, consisting of a registered call sign beacon and sensor data. The transmissions could be received by amateur radio users, but they were also posted online to increase access for the general public. The radio also acted as a beacon to track the position of the satellite. The primary ground station, built in partnership with Brown's Amateur Radio Club, was the main point of contact for EQUiSat, and was able to terminate communications with the satellite if necessary. The antenna was coiled for launch, as the CubeSat specification mandates that no satellite parts may protrude from the side rails by more than 1 cm before launch. Thus, a deployment system consisting of nylon wire holding the antenna taut was used. This nylon wire was wound around nichrome filaments, which burned through the wire 30 minutes after deployment. The antenna then sprang back into position. Attitude Control EQUiSat used a passive magnetic attitude control system (ACS), which required no attitude determination system, no energy drain from torque coils or momentum wheels, and no reliance on the complex algorithms otherwise required to de-tumble and stabilize the satellite. Two pairs of hysteresis rods were used to impart a torque on the satellite to offset tumbling brought about by launch from the ISS and the antenna deployment. These hysteresis rods not only imparted a torque, but also damped the transient response of these tumbles as they did so, reducing the tumbling over the course of several days. The ACS also made use of a permanent magnet to align EQUiSat with the Earth's magnetic field, which kept it pointed towards the surface of the Earth in the northern hemisphere. Electronics The electronics subsystem tied together all other subsystems to allow the satellite to function properly. It consisted of five custom-built PCBs, each of which was physically stacked inside EQUiSat. The five boards were: Flash Panel: The Flash Panel housed the four LEDs, the antenna deployment system, four temperature sensors, an IR sensor and a photodiode. LED Driver Board: This board was located directly below the Flash Panel. It contained the four boost regulator circuits, one per LED. These boost regulators drew 60 A at 6.6 V from the batteries, which was converted to 36 V and 2.7 A for the LEDs. The board also contained the drive circuitry for the antenna deployment system. Battery Board: This board was located between the two layers of batteries. It contained circuitry that performed maximum power point tracking to continuously optimize battery charging based upon the solar panel output. It also had controls for managing battery output and monitoring battery properties. Control Board: The Control Board contained the brains of the satellite, including the Atmel SAMD21J18A processor, memory, and demultiplexers that managed incoming data from all other boards. The Control Board also interfaced with the radio, and contained an IMU and a magnetometer. Radio Adapter Board: This was a simpler board that provided an interface between the radio and the Control Board. The electronics subsystem was designed, tested and assembled completely in-house, aside from the PCB manufacturing.
All components were commercial off-the-shelf, and may be easily purchased online. The PCBs were designed with PCB CAD software and the CAD files are uploaded to GitHub for easy public access. Software The electronics subsystem was backed by software that ran on the processor. The processor ran a real-time operating system based on FreeRTOS. The usage of a real-time operating system is standard in small embedded systems and allowed EQUiSat to respond to events in a timely, deterministic manner. The software was responsible for data collection from the sensors mentioned in the electronics subsystem section; it processed the data after reading it from the built-in ADC and transmitted it appropriately. The software was also able to process incoming transmissions from the primary ground station. Cosmic radiation raises the possibility of a bit flip while in orbit. A flipped bit in data memory did not pose an issue, as that memory was volatile and a reboot of the system cleared it. If a bit was flipped in program memory, a watchdog timer triggered a reboot of the system, during which the bootloader overwrote the program memory with a copy stored in radiation-safe MRAM. This watchdog timer was reset to its original value during normal program operation, and thus only triggered a reboot if it counted down to zero due to a corrupted program. As with the rest of EQUiSat's subsystems, the software files are available online. Power The power system included solar panels for power generation in space and two battery systems for power storage. The solar panels were produced from scrap gallium arsenide cells using a well documented production process. As a result, they cost roughly 35 times less than comparably powerful off-the-shelf panels. The panels made up five sides of the CubeSat, and consisted of varying configurations of Triangular Advanced Solar Cells and TrisolX cells. Because the manufacturer of the former went out of business during development, only the top and bottom panels on EQUiSat contained those cells; the other three panels used the TrisolX cells. The top and bottom panels each contained 24 cells in a 4S6P configuration, and the three side panels each contained 20 cells in a 4S5P configuration. The top and bottom panels were designed to output 8.76 V at 140–170 mA for an average output power of just over 1.3 W in full sunlight. The other panels output a similar voltage at roughly 0.5–0.7 W. EQUiSat contained two sets of batteries: one to power the flash system and another to power the radio system and microcontrollers. The batteries that powered the flash were A123 Systems 18650 LiFePO4 cells in a 2S2P configuration. The batteries that powered the radio and microcontroller were two LIR2450 lithium-ion rechargeable coin cell batteries in parallel. EQUiSat alternated between battery systems, with priority going to the LIR2450 batteries first. Structure The chassis and other components were manufactured in-house to maximize cost accessibility. The chassis was milled from a solid block of Al 6061 using a three-axis CNC mill, lathe and taps. This provided the body of EQUiSat and the fastening points for all components. In addition, the block that securely holds the six batteries was milled out of Delrin. The manufacturing process was perfected using machinable wax to reduce material waste. The chassis, along with other machined components and the complete assembly, was designed in CAD software. The CNC toolpaths and G-code were produced from these files.
See also List of CubeSats References External links Official homepage EQUiSat resources Student satellites Brown University CubeSats
12595680
https://en.wikipedia.org/wiki/2008%20Rose%20Bowl
2008 Rose Bowl
The 2008 Rose Bowl Game presented by Citi, the 94th Rose Bowl Game, played on January 1, 2008 at the Rose Bowl Stadium in Pasadena, California, was a college football bowl game. The contest was televised on ABC, the 20th straight year the network aired the Rose Bowl, starting at 4:30 pm EST. The game's main sponsor was Citi. The 2008 Rose Bowl featured the 7th-ranked USC Trojans hosting the 13th-ranked Illinois Fighting Illini. As with the previous year's game, the contest was a semi-traditional Rose Bowl: while it was a Big Ten versus Pac-10 matchup, the Big Ten representative was an at-large team, because the conference champion, Ohio State, which lost to Illinois earlier in the season, was selected to play in the BCS National Championship Game. USC was making its third straight appearance in the Rose Bowl, while Illinois had not played in the game since 1984. Though Illinois won the Big Ten Conference title in 2001, the Rose Bowl hosted that season's BCS National Championship Game under the then-rotating BCS format, so the Illini were sent to the Sugar Bowl instead. Game summary USC took an early 21–0 lead, including a touchdown pass by backup quarterback Garrett Green on a trick play. Illinois verged on closing to 21–10 midway through the third quarter, but receiver Jacob Willis fumbled into the end zone after a catch and Trojans linebacker Brian Cushing recovered for a touchback. USC converted that miscue into a touchdown, and then cornerback Cary Harris intercepted Illinois quarterback Juice Williams's pass on the first play of the ensuing possession. Five plays later, Trojans freshman Joe McKnight scored on a 6-yard run, making the score 35–10. USC gained a Rose Bowl-record 633 yards of offense in defeating Illinois 49–17. Scoring summary Game records Most career touchdown passes in Rose Bowl history – 7, John David Booty References Rose Bowl Rose Bowl Game Illinois Fighting Illini football bowl games USC Trojans football bowl games Rose Bowl January 2008 sports events in the United States 21st century in Pasadena, California
32855218
https://en.wikipedia.org/wiki/Indiana%20University%20School%20of%20Informatics
Indiana University School of Informatics
The Luddy School of Informatics, Computing, and Engineering is an academic unit of Indiana University located on the Indiana University Bloomington (IUB) campus and, under the name Indiana University School of Informatics and Computing (SOIC), on the Indiana University – Purdue University Indianapolis (IUPUI) campus. On the Bloomington campus, the School consists of the Department of Informatics, the Department of Computer Science, the Department of Information and Library Science, and the Department of Intelligent Systems Engineering. On the Indianapolis campus, the School consists of the Department of Human-Centered Computing, the Department of BioHealth Informatics, and the Department of Library and Information Science. Schoolwide programs include the BS in Informatics, MS in Bioinformatics, MS in Human-Computer Interaction, and PhD in Informatics. Bloomington-specific programs include the BS, MS, and PhD in Intelligent Systems Engineering; BS, MS, and PhD in Computer Science; MS in Informatics; MS in Secure Computing; Master of Library Science; Master of Information Science; and PhD in Information Science. Indianapolis-specific programs include the BS in Biomedical Informatics; BS in Health Information Management; BS in Media Arts and Science; BS/MS in Biomedical Informatics/Bioinformatics or Health Informatics; BS/MS in Health Information Management and Health Informatics; BS/MS in Informatics/Applied Data Science, Bioinformatics, Health Informatics, or HCI; BS/MS in HCI or Media Arts and Science; MS in Media Arts and Science; MS in Health Informatics; and Master of Library and Information Science. In addition, the School confers a number of undergraduate and PhD minors and undergraduate and graduate certificates. History The School of Informatics was founded in 2000 as the first school of its kind in the United States. That fall, the first classes were offered on both campuses. Two years later, in 2002, the School hired its first full professor (Bill Aspray) and conferred its first degrees (22 students). By the end of 2004, the School had buildings of its own – a former sorority house on the north side of the Bloomington campus, and the newly constructed Informatics and Communications Technology Complex (ICTC) building on the Indianapolis (IUPUI) campus. In 2005, the Department of Computer Science joined the School in Bloomington, significantly changing the research and course offerings of the five-year-old organization. At that time, the School had 1,500 students and had graduated 600 students. In 2007, with the retirement of founding dean Mike Dunn, Bobby Schnabel, former vice provost/associate vice chancellor at the University of Colorado-Boulder, took over. Raj Acharya, the founding director and head of the School of Electrical Engineering and Computer Science at Penn State University, replaced Schnabel in 2016. The School has continued to grow, with nearly 150 faculty, over 2,000 students, and multiple buildings between the two campuses. In July 2013, the School of Informatics merged with the School of Library and Information Science and became the School of Informatics and Computing on both campuses. Administrators from both schools claimed that the closeness of the subject matter studied in Library and Information Science and Informatics made the union practical and would offer students and faculty a greater breadth of opportunities.
In August 2017, the name of the School of Informatics and Computing on the Bloomington campus was officially changed to the School of Informatics, Computing, and Engineering (SICE) following the addition of a new major in Intelligent Systems Engineering. In January 2018, the Bloomington school moved into Luddy Hall. In October 2019, the Bloomington school was renamed the Luddy School of Informatics, Computing and Engineering in honor of Indiana University alumnus and information technology pioneer Fred Luddy. Departments Bloomington campus Computer Science Informatics Information and Library Science Intelligent Systems Engineering Indianapolis campus BioHealth Informatics Human-Centered Computing Library and Information Science Programs Bloomington campus Undergraduate programs include: Bachelor of Science in Computer Science, Data Science, Informatics, Intelligent Systems Engineering Accelerated BS/MS Program in Computer Science, Information Science, Intelligent Systems Engineering, Library Science Graduate programs include: Master of Information Science Master of Library Science Master of Science in Bioinformatics, Computer Science, Data Science, Human-Computer Interaction, Informatics, Intelligent Systems Engineering, Secure Computing Graduate Certificate in Cybersecurity, Data Science, Information Architecture Accelerated Master's Program in Computer Science, Information Science, Intelligent Systems Engineering, Library Science Ph.D. in Computer Science, Informatics, Information Science, Intelligent Systems Engineering Indianapolis campus Undergraduate programs include: Bachelor of Science in Biomedical Informatics, Health Information Management, Informatics, Media Arts and Science, Applied Data and Information Science, Artificial Intelligence Accelerated BS/MS Program in Applied Data Science, Bioinformatics, Health Informatics, Human-Computer Interaction, Media Arts and Science Graduate programs include: Master of Science in Applied Data Science, Bioinformatics, Health Informatics, Human-Computer Interaction, Media Arts and Science Ph.D. in Data Science, Informatics Luddy Hall Luddy Hall, the current home of the school on the Bloomington campus, broke ground in 2015. The building was named for the Luddy family, many members of which are IU alumni, including tech entrepreneur and IU donor Fred Luddy. The building opened at the end of the Fall 2017 semester, and SICE moved in for the Spring 2018 semester. The building cost $39.8 million. It contains five research labs and 19 conference rooms, and also houses faculty offices and serves as the meeting place for most Luddy classes. Rankings Department of Information and Library Science ranked 9th by U.S. News & World Report in 2017 Information and Library Science Discipline ranked 2nd in 2017 Academic Ranking of World Universities References External links Indiana University School of Informatics, Computing, and Engineering, Bloomington campus Indiana University School of Informatics and Computing, Indianapolis campus Indiana University 1999 establishments in Indiana Schools of informatics
30126030
https://en.wikipedia.org/wiki/Internet%20and%20Technology%20Law%20Desk%20Reference
Internet and Technology Law Desk Reference
Internet and Technology Law Desk Reference is a non-fiction book about information technology law, written by Michael Dennis Scott. The book uses wording from legal cases to define information technology jargon, and gives citations to individual lawsuits. Scott received his B.S. degree from the Massachusetts Institute of Technology and graduated with a J.D. from the University of California, Los Angeles. He has taught as a law professor at Southwestern Law School. The book was published by Aspen Law and Business in 1999. Multiple subsequent editions were published under the imprint Aspen Publishers. Internet and Technology Law Desk Reference was recommended by the Cyberlaw Research Resources Guide at the James E. Rogers College of Law, and has been used as a reference in law journals including the University of Pennsylvania Journal of International Economic Law and the Berkeley Technology Law Journal. Author Michael Dennis Scott is a lawyer; in 1999 he resided in Los Angeles. Scott graduated in 1967 from the Massachusetts Institute of Technology with a Bachelor of Science degree in mathematics and computer science. He received his J.D. degree in 1974 from the University of California, Los Angeles. He is a member of the United States Patent Bar and the California State Bar. He was employed by Perkins Coie LLP in 1999. Comtex News Network described Scott in 1999 as "a veteran Internet law expert". He taught as a professor in the subject of legal studies at Southwestern Law School. He is the author of legal books including Scott on Outsourcing Law & Practice, Scott on Multimedia Law, Intellectual Property Licensing Law Desk Reference, and Telecommunications Law Desk Reference. Scott serves as editor-in-chief of the newsletters E-Commerce Law Report and The Cyberspace Lawyer. He maintains a law-related blog at www.singularitylaw.com. Scott was a cofounder of the World Computer Law Congress, and a director of the Computer Law Association. Contents Internet and Technology Law Desk Reference is a reference work on the subject of law. The reference utilizes written opinions from judges in lawsuits and court-approved wording to provide definitions for information technology-related legal jargon. Entries are organized in alphabetical order, with citations given to individual lawsuits. Publication history Internet and Technology Law Desk Reference was published in 1999 by Aspen Law and Business. Subsequent editions were released by Aspen Law and Business in 2001, 2002, and 2003. Under the imprint Aspen Publishers, the book was released in later editions in 2004, 2005, 2007, and 2009. Reception Shaun Esposito of the James E. Rogers College of Law recommended the reference work in his Cyberlaw Research Resources Guide, and wrote, "It could be useful both in defining unfamiliar terms and in starting research on any topic listed in the work." In 2000, CBA Journal board members Lawrence M. Friedman and John Levin used the book to compile a self-assessment tool for readers to determine their proficiency with technology and internet terminology. The University of Chicago Legal Forum described Internet and Technology Law Desk Reference as a publication involved in "compiling internet definitions used in court opinions". The book has been utilized as a reference in law journals including the University of Pennsylvania Journal of International Economic Law, Notre Dame Law Review, Berkeley Technology Law Journal, and Boston College Law Review.
See also Code and Other Laws of Cyberspace Cyber Rights The Hacker Crackdown The Law of Cyber-Space Small Pieces Loosely Joined Who Controls the Internet? References Further reading External links Singularity Law, website of book's author About Professor Michael Scott, bio profile Michael D. Scott, page at Southwestern Law School 1999 non-fiction books Books about the Internet Works about computer law Law books Works about intellectual property law
20858620
https://en.wikipedia.org/wiki/Windows%20Setup
Windows Setup
Windows Setup is an installer that prepares a hard disk drive for a Microsoft Windows operating system installation by executing two processes: a) initializing the drive and b) copying system files to that drive in order for the operating system to be run locally (see Volume). The early versions of Windows required an existing compatible version of the DOS operating system in order to be installed. The Windows NT family, from NT 3.1 until the release of Windows Vista (NT 6.0), featured a text-based installation that handed users off to a GUI wizard in the final steps. The 9x family installer was similar to NT's despite being MS-DOS-based; additionally, it did not require a preinstalled DOS. With the release of Windows NT 6.0 (Vista), Microsoft introduced a fully graphical setup environment and UEFI support (partial in Windows Vista and 7, full UEFI support on Windows 8 onwards). Windows 1.x and Windows 2.x The installation of Windows 1.x, Windows 2.0, and Windows 2.1x requires that a compatible version of MS-DOS is installed. The user must specify any hardware such as mice or printers during installation. After the installation, Windows could be started either manually by typing "WIN.COM" at the command prompt, or configured for automatic startup by adding WIN.COM to the end of AUTOEXEC.BAT. Windows 3.x The installation of Windows 3.0, Windows 3.1x and Windows 3.2 requires that a compatible DOS operating system is already installed. The installer attempts to detect network cards, mice, and other hardware on its own but will rely on the user to specify hardware if it cannot find them. After the installation, Windows could be started either manually by typing "WIN.COM" at the command prompt, or configured for automatic startup by adding WIN.COM to the end of AUTOEXEC.BAT. Windows 9x Windows 95–98 and Windows Me do not require MS-DOS. The first phase of setup prepares the hard disk partition for use by Windows by formatting it to a compatible file system, then runs ScanDisk; if the hard disk appears to be ready for installation in terms of free space and disk integrity, Setup copies files to the selected installation folder (typically C:\WINDOWS). The first phase of setup resembles the interface of Windows 3.x. Once this phase is finished, the computer reboots and setup resumes from the hard disk, but still requires the installation media to continue copying files and drivers. At this point the user will be asked to provide a product key. Windows NT Before Windows Vista The setup process introduced with Windows NT 3.1 remained in effect until the release of Windows Vista. The general process is: The user inserts the installation media, initiates the process, and Setup loads various hardware and file-system drivers. If any third-party drivers are needed in order to detect a SCSI or RAID system, Setup pauses and requests the supply of a driver on a floppy disk (see F6 disk). The user is then presented with a text-based interface which gives three options: 1) install Windows, 2) repair an existing installation, or 3) quit setup. If the user decides to install Windows, they are presented with an agreement that they must accept before Setup will continue. Prior to Windows 2000, the user was required to scroll to the bottom of the agreement before they were permitted to agree. The user must create or select a partition, then a filesystem (either NTFS or FAT).
If either of these file systems is already present and there is no version of Windows already on the disk, it is also possible to leave the current file system intact. The hard disk is checked for errors and space requirements, then, if it passes the check, Windows will be installed. After the text-based phase of Setup is finished, the computer reboots and starts a graphical phase of setup from the hard disk, prompting the user to reinsert the installation media and to enter the product key, and then it continues copying files and drivers. All versions of Windows NT up to Windows Server 2003, except for Windows XP Home Edition, prompt the user to enter an Administrator password. On Windows 2000, Windows XP and Windows Server 2003, the Recovery Console is included to repair damaged installations. It allows the user to repair disk and boot record errors, and copy missing or corrupted files to the destination folders. After Windows Vista Windows Vista and subsequent operating systems all utilize the Windows Preinstallation Environment (Windows PE) as the installation environment. Windows PE features a graphical user interface with mouse support from the beginning, rather than requiring a text-only phase as in previous versions. The concept of F6 disks has been improved to provide support for computers without floppy drives; the loading of drivers from CD-ROMs and USB flash drives is now supported. Support for installing Windows onto FAT partitions has been dropped; Windows must be installed onto an NTFS partition. Windows 8 and later Windows 8 introduces a new secondary installer known as the Upgrade Assistant, replacing Windows Setup for upgrade installations. Designed to be simpler and faster than previous installation methods, it analyzes the system's hardware and software for compatibility with Windows 8, allows the user to purchase, download, and install the operating system, and migrates files and settings from the previous Windows installation in the case of a clean install. Windows Setup is still used when booting from installation media. References Windows components Installation software
1447381
https://en.wikipedia.org/wiki/NetBIOS%20over%20TCP/IP
NetBIOS over TCP/IP
NetBIOS over TCP/IP (NBT, or sometimes NetBT) is a networking protocol that allows legacy computer applications relying on the NetBIOS API to be used on modern TCP/IP networks. NetBIOS was developed in the early 1980s, targeting very small networks (about a dozen computers). Some applications still use NetBIOS, and do not scale well in today's networks of hundreds of computers when NetBIOS is run over NBF. When properly configured, NBT allows those applications to be run on large TCP/IP networks (including the whole Internet, although that is likely to be subject to security problems) without change. NBT is defined by the RFC 1001 and RFC 1002 standard documents. Services NetBIOS provides three distinct services: Name service for name registration and resolution (ports: 137/udp and 137/tcp) Datagram distribution service for connectionless communication (port: 138/udp) Session service for connection-oriented communication (port: 139/tcp) NBT implements all of those services. Name service In NetBIOS, each participant must register on the network using a unique name of at most 15 characters. In legacy networks, when a new application wanted to register a name, it had to broadcast a message saying "Is anyone currently using that name?" and wait for an answer. If no answer came back, it was safe to assume that the name was not in use. However, the wait timeout was a few seconds, making name registration a very lengthy process, as the only way of knowing that a name was not registered was to not receive any answer. NBT can implement a central repository, or Name Service, that records all name registrations. An application wanting to register a name would therefore contact the name server (which has a known network address) and ask whether the name is already registered, using a "Name Query" packet. This is much faster, as the name server returns a negative response immediately if the name is not already in the database, meaning it is available. The Name Service, according to RFCs 1001 and 1002, is called NetBIOS Naming Service or NBNS. Microsoft WINS is an implementation of NBNS. Because the way in which the Name Service handles conflicts or merges has been under constant development, the treatment of "group names" varies from vendor to vendor and can even differ between versions, e.g. with the introduction of a service pack. The packet formats of the Name Service are identical to those of DNS. The key differences are the addition of the NetBIOS "Node Status" query, dynamic registration, and conflict marking packets. They are encapsulated in UDP. Later implementations include an optional scope part of the name, making NetBIOS names hierarchical like DNS, but this is seldom used. In addition, to start a session or to send a datagram to a particular host rather than to broadcast the datagram, NBT must determine the IP address of the host with a given NetBIOS name; this is done by broadcasting a "Name Query" packet, and/or sending it to the NetBIOS name server. The response will contain the IP address of the host with that name. NBNS is one of the first proper dynamic peer-to-peer distributed name registration services. The NBNS protocol was brought into disrepute by Microsoft: it earned a bad name for being 'chatty', swamping networks with dynamic registration traffic on multiple protocols (IPX/SPX, NBF and TCP/IP) as people badly misconfigured their machines and their networks. The principles implemented in NBNS have been reimplemented many times, including in such systems as zeroconf and MobileIP.
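As a concrete illustration of these name-service lookups, the nmblookup utility that ships with the Samba suite (see the See also section) can issue both query types from a Unix command line; this is a minimal sketch in which the host name and IP address are hypothetical:

$ nmblookup FILESERVER1
$ nmblookup -A 192.168.1.23

The first command broadcasts a "Name Query" for FILESERVER1 and prints the IP address of any host that answers; the second sends a "Node Status" query to the given address and lists the NetBIOS names that host has registered.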
Datagram distribution service Datagram mode is "connectionless"; NetBIOS datagrams are sent over UDP. A datagram is sent with a "Direct Unique" or "Direct Group" packet if it is being sent to a particular NetBIOS name, or a "Broadcast" packet if it is being sent to all NetBIOS names on the network. Session service Session mode lets two computers establish a connection for a "conversation", allows larger messages to be handled, and provides error detection and recovery. Sessions are established by exchanging packets. The computer establishing the session attempts to make a TCP connection to port 139 on the computer with which the session is to be established. If the connection is made, the computer establishing the session then sends over the connection a "Session Request" packet with the NetBIOS names of the application establishing the session and the NetBIOS name to which the session is to be established. The computer with which the session is to be established will respond with a "Positive Session Response" indicating that a session can be established or a "Negative Session Response" indicating that no session can be established (either because that computer is not listening for sessions being established to that name or because no resources are available to establish a session to that name). Data is transmitted during an established session by Session Message packets. TCP handles flow control and retransmission of all session service packets, as well as the division of the data stream over which the packets are transmitted into IP datagrams small enough to fit in link-layer packets. Sessions are closed by closing the TCP connection. Security vulnerabilities NBT exposes information and interfaces that are often appropriate for a LAN under an organization's administrative control, but which are not appropriate for a less trusted network such as the Internet. For example, the NetBIOS Name Service (NBNS), running over UDP or TCP port 137, allows any computer to register its hostname with other computers. An attacker could contact any host and claim to be a particular service the host regularly contacts, such as a file server. This could result in a middleperson attack against listening hosts, and ultimately in the compromise of credentials used by the listening hosts to access network services over NBT. Tools such as NBNSpoof can be used to perform this attack. Exposure of NBT to the Internet also discloses, as a practical matter, that the host answering on NBT ports is running Windows. This can be used to better target malicious activity that might be specific to one operating system. Decreasing relevance in post-NT client-server networks In client-server networks based on Windows 2000 / NT and later, NetBIOS is effectively becoming a legacy protocol. NetBIOS was also developed for non-routable LANs. In most networks running Windows 2000 or later, NetBIOS effectively offers backwards compatibility for network devices that predate DNS support. A central role of NetBIOS in such client-server networks (including those with networked peripheral hardware that also predates DNS support) is to provide name resolution for computers and networked peripherals. Further, it allows such networked hardware to be accessed and shared, and it enables the mapping and browsing of network folders, shares, and shared printers, faxes, etc.
In its primary capacity, NetBIOS acts as a session-layer protocol transported over TCP/IP to provide name resolution for computers and shared folders. Client-server networks based on Windows 2000 and later therefore do not require this insecure means of resolving names, addressing hosts, or navigating network shares. Troubleshooting NetBIOS nbtstat The nbtstat command is a diagnostic tool for NetBIOS over TCP/IP. It is primarily designed to help troubleshoot NetBIOS name resolution problems. The command is included in several versions of Microsoft Windows. nbtstat supports several name-resolution options, such as local cache lookup, WINS server query, broadcast, LMHOSTS lookup, and Hosts lookup; it does not query DNS servers. When a network is functioning normally, NetBIOS over TCP/IP (NetBT) resolves NetBIOS names to IP addresses. It does this through several options for NetBIOS name resolution, including local cache lookup, WINS server query, broadcast, LMHOSTS lookup, Hosts lookup, and DNS server query. The command removes and corrects preloaded entries using a number of case-sensitive switches. The nbtstat -a <name> command performs a NetBIOS adapter status command on the computer name specified by <name>. The adapter status command returns the local NetBIOS name table for that computer as well as the MAC address of the adapter card. The nbtstat -A <IP address> command performs the same function using a target IP address rather than a name. Syntax nbtstat [-a RemoteName] [-A IPAddress] [-c] [-n] [-r] [-R] [-RR] [-s] [-S] [Interval] The common parameters are: nbtstat -c: displays the contents of the NetBIOS name cache, the table of NetBIOS names and their resolved IP addresses. nbtstat -n: displays the names that have been registered locally on the system. nbtstat -r: displays the count of all NetBIOS names resolved by broadcast and by querying a WINS server. nbtstat -R: purges and reloads the remote cache name table. nbtstat -RR: sends name release packets to WINS and then starts a refresh. nbtstat -s: lists the current NetBIOS sessions and their status, including statistics. nbtstat -S: lists the sessions table with the destination IP addresses. Example invocations appear at the end of this entry. See also Samba Server Message Block DNS LDAP NIS WINS References External links KB204279 - KB article describing the more modern, direct hosting of SMB nbtscan - open-source program to scan IP networks for NetBIOS name information Network protocols Request for Comments
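The example invocations referenced above, with a hypothetical computer name and IP address:

C:\> nbtstat -c
C:\> nbtstat -a WORKSTATION1
C:\> nbtstat -A 192.168.1.23

The first command lists the local NetBIOS name cache; the other two request a remote machine's name table and adapter MAC address, by name and by IP address respectively, as described in the parameter list above.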
3328147
https://en.wikipedia.org/wiki/OCFS2
OCFS2
The Oracle Cluster File System (OCFS, in its second version OCFS2) is a shared disk file system developed by Oracle Corporation and released under the GNU General Public License. The first version of OCFS was developed with the main focus of accommodating Oracle's clustered database management system, and because of that it was not a POSIX-compliant file system. With version 2, POSIX features were included. OCFS2 (version 2) was integrated into version 2.6.16 of the Linux kernel. Initially, it was marked as "experimental" (alpha-test) code. This restriction was removed in Linux version 2.6.19. With kernel version 2.6.29, released in March 2009, more features were included in OCFS2, such as access control lists and quotas. OCFS2 uses a distributed lock manager that resembles the OpenVMS DLM but is much simpler. Oracle announced version 1.6 in November 2010, which included a copy-on-write feature called reflink (see the example at the end of this entry). See also GlusterFS GFS2 General Parallel File System (GPFS) List of file systems Lustre (file system) MooseFS QFS Notes and references External links OCFS2 project page OCFS project page Shared disk file systems Oracle software Distributed file systems supported by the Linux kernel
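A minimal sketch of the formatting and reflink features mentioned above, assuming the ocfs2-tools userspace package is installed (it provides mkfs.ocfs2 and a reflink utility); the device name, label, and file names are hypothetical, and formatting destroys any existing data on the device:

$ mkfs.ocfs2 -L "shared01" /dev/sdb1
$ reflink golden.img clone.img

The first command formats a shared disk with an OCFS2 file system labeled "shared01"; the second creates a copy-on-write clone whose data blocks are initially shared with the original file.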
167550
https://en.wikipedia.org/wiki/Transcription%20%28linguistics%29
Transcription (linguistics)
Transcription in the linguistic sense is the systematic representation of spoken language in written form. The source can either be utterances (speech or sign language) or preexisting text in another writing system. Transcription should not be confused with translation, which means representing the meaning of a source-language text in a target language (e.g. Los Angeles into The Angels), or with transliteration, which means representing the spelling of a text from one script to another. In the academic discipline of linguistics, transcription is an essential part of the methodologies of (among others) phonetics, conversation analysis, dialectology, and sociolinguistics. It also plays an important role for several subfields of speech technology. Common examples of transcription outside academia are the proceedings of a court hearing such as a criminal trial (by a court reporter) or a physician's recorded voice notes (medical transcription). This article focuses on transcription in linguistics. Phonetic and orthographic transcription There are two main types of linguistic transcription. Phonetic transcription focuses on phonetic and phonological properties of spoken language. Systems for phonetic transcription thus furnish rules for mapping individual sounds or phones to written symbols. Systems for orthographic transcription, by contrast, consist of rules for mapping spoken words onto written forms as prescribed by the orthography of a given language. Phonetic transcription operates with specially defined character sets, usually the International Phonetic Alphabet. The type of transcription chosen depends mostly on the context of usage. Because phonetic transcription strictly foregrounds the phonetic nature of language, it is mostly used for phonetic or phonological analyses. Orthographic transcription, however, has a morphological and a lexical component alongside the phonetic component (which aspect is represented to which degree depends on the language and orthography in question). This form of transcription is thus more convenient wherever semantic aspects of spoken language are transcribed. Phonetic transcription is more systematic in a scientific sense, but it is also more difficult to learn, more time-consuming to carry out and less widely applicable than orthographic transcription. As a theory Mapping spoken language onto written symbols is not as straightforward a process as it may seem at first glance. Written language is an idealization, made up of a limited set of clearly distinct and discrete symbols. Spoken language, on the other hand, is a continuous (as opposed to discrete) phenomenon, made up of a potentially unlimited number of components. There is no predetermined system for distinguishing and classifying these components and, consequently, no preset way of mapping these components onto written symbols. The literature is relatively consistent in pointing out the nonneutrality of transcription practices: there is not, and cannot be, a neutral transcription system, as knowledge of social culture enters directly into the making of a transcript and is captured in its texture (Baker, 2005). Transcription systems Transcription systems are sets of rules which define how spoken language is to be represented in written symbols. Most phonetic transcription systems are based on the International Phonetic Alphabet or, especially in speech technology, on its derivative SAMPA.
Examples for orthographic transcription systems (all from the field of conversation analysis or related fields) are: CA (conversation analysis) Arguably the first system of its kind, originally sketched in (Sacks et al. 1978), later adapted for use in computer-readable corpora as CA-CHAT by (MacWhinney 2000). The field of Conversation Analysis itself includes a number of distinct approaches to transcription and sets of transcription conventions. These include, among others, Jefferson Notation. To analyze conversation, recorded data is typically transcribed into a written form that is agreeable to analysts. There are two common approaches. The first, called narrow transcription, captures the details of conversational interaction such as which particular words are stressed, which words are spoken with increased loudness, points at which the turns-at-talk overlap, how particular words are articulated, and so on. If such detail is less important, perhaps because the analyst is more concerned with the overall gross structure of the conversation or the relative distribution of turns-at-talk amongst the participants, then a second type of transcription known as broad transcription may be sufficient (Williamson, 2009). Jefferson Transcription System The Jefferson Transcription System is a set of symbols, developed by Gail Jefferson, which is used for transcribing talk. Jefferson had gained some previous transcription experience when she was hired in 1963 as a clerk typist at the UCLA Department of Public Health to transcribe sensitivity-training sessions for prison guards; she went on to transcribe some of the recordings that served as the materials out of which Harvey Sacks' earliest lectures were developed. Over four decades, for the majority of which she held no university position and was unsalaried, Jefferson's research into talk-in-interaction set the standard for what became known as conversation analysis (CA). Her work has greatly influenced the sociological study of interaction, but also disciplines beyond it, especially linguistics, communication, and anthropology. This system is employed universally by those working from the CA perspective and is regarded as having become a near-globalized set of instructions for transcription. DT (discourse transcription) A system described in (DuBois et al. 1992), used for transcription of the Santa Barbara Corpus of Spoken American English (SBCSAE), later developed further into DT2. GAT (Gesprächsanalytisches Transkriptionssystem – Conversation analytic transcription system) A system described in (Selting et al. 1998), later developed further into GAT2 (Selting et al. 2009), widely used in German-speaking countries for prosodically oriented conversation analysis and interactional linguistics. HIAT (Halbinterpretative Arbeitstranskriptionen – Semi-interpretative working transcriptions) Arguably the first system of its kind, originally described in (Ehlich and Rehbein 1976) – see (Ehlich 1992) for an English reference – adapted for use in computer-readable corpora (Rehbein et al. 2004), and widely used in functional pragmatics. Software Transcription was originally a process carried out manually, i.e. with pencil and paper, using an analogue sound recording stored on, e.g., a Compact Cassette. Nowadays, most transcription is done on computers. Recordings are usually digital audio files or video files, and transcriptions are electronic documents.
Specialized computer software exists to assist the transcriber in efficiently creating a digital transcription from a digital recording. Two types of transcription software can be used to assist the process: one facilitates manual transcription, the other automated transcription. For the former, the work is still very much done by a human transcriber who listens to a recording and types what is heard into a computer; this type of software is often a multimedia player with functionality such as playback control or speed adjustment. For the latter, automated transcription is achieved by a speech-to-text engine that converts audio or video files into electronic text. Some transcription software also includes annotation functions. See also Interlinear gloss Phonetic transcription Speech recognition Subtitle (captioning) Textual scholarship Transcription (service) Transcription software References Further reading Hepburn, A., & Bolden, G. B. (2013). The conversation analytic approach to transcription. In J. Sidnell & T. Stivers (Eds.), The handbook of Conversation Analysis (pp. 57–76). Oxford: Blackwell. PDF DuBois, John / Schuetze-Coburn, Stephan / Cumming, Susanne / Paolino, Danae (1992): Outline of Discourse Transcription. In: Edwards/Lampert (1992), 45-89. Haberland, H. & Mortensen, J. (2016) Transcription as second order entextualisation: The challenge of heteroglossia. In: Capone, A. & Mey, J. L. (eds.): Interdisciplinary Studies in Pragmatics, Culture and Society, 581-600. Cham: Springer. Jenks, C.J. (2011) Transcribing Talk and Interaction: Issues in the Representation of Communication Data. Amsterdam: John Benjamins. MacWhinney, Brian (2000): The CHILDES project: tools for analyzing talk. Mahwah, NJ: Lawrence Erlbaum. Ochs, E. (1979) Transcription as theory. In: Ochs, E. & Schieffelin, B. B. (ed.): Developmental pragmatics, 43-72. New York: Academic Press. Sacks, H.; Schegloff, E. & Jefferson, G. (1978) A simplest systematics for the organization of turn taking for conversation. In: Schenkein, J. (ed.): Studies in the Organization of Conversational Interaction, 7-56. New York: Academic Press. External links Transcription in Action - website from UC Santa Barbara Documentation and examples for the HIAT transcription system Transcription - a website with resources for transcription in conversation analysis Phonetics Subtitling Writing
524319
https://en.wikipedia.org/wiki/Shake%20%28software%29
Shake (software)
Shake is a discontinued image compositing package used in the post-production industry, originally developed for Windows by Nothing Real, which was later acquired by Apple Inc. Shake was widely used in visual effects and digital compositing for film, video and commercials. Shake exposed its node graph architecture graphically. It enabled complex image processing sequences to be designed through the connection of effects "nodes" in a graphical workflow interface. This type of compositing interface allowed great flexibility, including the ability to modify the parameters of an earlier image processing step "in context" (while viewing the final composite). Many other compositing packages, such as Blender, Blackmagic Fusion, Nuke and Cineon, also used a similar node-based approach. Shake was available for Mac OS X and Linux. Support for Microsoft Windows and IRIX had been discontinued in earlier versions. On July 30, 2009, Apple discontinued Shake. No direct product replacement was announced by Apple, but some features are now available in Final Cut Studio and Motion, such as the SmoothCam filter. History In 1996, Arnaud Hervas and Allen Edwards founded Nothing Real, and released Shake 1.0 as a command-line tool for image processing to high-end visual effects facilities in early 1997. Emmanuel Mogenet joined the R&D team as a senior developer in the summer of 1997 as Shake 2.0 was being rewritten with a full user interface. In the fall of 1997, Dan Candela (R&D), Louis Cetorelli (head of support) and Peter Warner (designer/expert user) were added to the team. After initially working as a consultant in early 1998, Ron Brinkmann also joined in early 1998 as product manager. This core group were all among the original Sony Imageworks employees. Shake 2.0 was first shown at the 1998 NAB conference as an alpha demo with a minimal set of nodes, a node view and the player. A more complete beta version of Shake was shown at the 1998 SIGGRAPH conference. Version 2 was released in early 1999 for Windows NT and IRIX, costing $9900 US per license, or $3900 for a render-only license. Over the next few years, Shake rapidly became the standard compositing software in the visual effects industry for feature films. In 2002, Apple Computer acquired Nothing Real. A few months later, version 2.5 was released, introducing Mac OS X compatibility. To strengthen the Mac's position in production studios, the Mac version held a price of , and users of the non-Mac operating systems were offered double the number of licenses at no extra cost for migrating to Mac OS X. In 2003, version 3 of Shake was announced, which introduced the Qmaster software, discontinued support for Microsoft Windows, and allowed unlimited network render clients at no additional cost. A year later, the release of Shake 3.5 at the National Association of Broadcasters show saw the price drop to $2999 for Mac OS X and $4999 for Linux and IRIX. In April 2005 Apple announced Shake 4 at a pre-NAB event. New features included 3D multi-plane compositing, 32-bit Keylight and Primatte keying, optical flow image processing (time-remapping and image stabilization), Final Cut Pro 5 integration, and extensions to its open, extensible scripting language and SDK. Shake 4 had no IRIX version. At the NAB event in April 2006, Apple announced that Shake 4.1 would be a Universal Binary version and would ship in May that year.
It was actually released on June 20, 2006 and was rebranded as a companion for Final Cut Studio; as such, its price was dropped from $2999 to $499 for Mac OS X but remained the same for Linux. At the same time, Apple announced that it would end support for Shake. Rumor web sites claimed that Apple was working on a next-generation compositing application codenamed Phenomenon. Existing maintenance program subscribers had the option to license the Shake source code for . On July 30, 2009, Apple removed Shake from its online store and website. Shake had officially been declared end-of-life three years prior but continued to be sold in the Apple Store for $499 until that time. The Shake website now redirects to Apple's Final Cut Pro X website. Uses Shake was used in such films as Peter Jackson's The Lord of the Rings and King Kong, as well as the Harry Potter films and Cloverfield. It was used by The Embassy to create a television advertisement for Citroën with a dancing car. Shake was used by Broadway Video for restoring the release of the Saturday Night Live: The Complete First Season DVD box set. It was used by CBS Digital to create new visual effects for Star Trek Remastered. Other major productions using Shake include the 2005 adaptation of War of the Worlds, Star Wars: Episode III – Revenge of the Sith, Fantastic Four, Mission: Impossible III, Poseidon, The Incredibles, Hulk, Doctor Who, The Dark Knight and Pirates of the Caribbean: Dead Man's Chest, and the restoration of South Pacific. Shake was used for video post-production, but in this field Autodesk's Flint, Flame, and Inferno systems were usually used in conjunction with Shake for a fast turnaround of projects. Shake's historical strength had been its ability to work well with the very high resolution formats such as 2K, 4K, and IMAX used in the motion picture industry. References External links Ron Brinkmann's Original design sketches for Shake, circa 1998 Apple Inc. software IRIX software MacOS graphics-related software Compositing software Linux audio video-related software Proprietary commercial software for Linux 1997 software
45002035
https://en.wikipedia.org/wiki/Michael%20Kearns%20%28computer%20scientist%29
Michael Kearns (computer scientist)
Michael Kearns is an American computer scientist, professor and National Center Chair at the University of Pennsylvania, the founding director of Penn's Singh Program in Networked & Social Systems Engineering (NETS), and the founding director of the Warren Center for Network and Data Sciences; he also holds secondary appointments in Penn's Wharton School and Department of Economics. He is a leading researcher in computational learning theory and algorithmic game theory, and is interested in machine learning, artificial intelligence, computational finance, algorithmic trading, computational social science and social networks. He previously led the Advisory and Research function in Morgan Stanley's Artificial Intelligence Center of Excellence team, and is currently an Amazon Scholar within Amazon Web Services. Biography Kearns was born into an academic family. His father, David R. Kearns, is Professor Emeritus in chemistry at the University of California, San Diego, and won a Guggenheim Fellowship in 1969; his uncle, Thomas R. Kearns, is Professor Emeritus at Amherst College in Philosophy and Law, Jurisprudence, and Social Thought. His paternal grandfather, Clyde W. Kearns, was a pioneer in insecticide toxicology and a professor of entomology at the University of Illinois at Urbana–Champaign, and his maternal grandfather, Chen Shou-Yi (1899–1978), was a professor of history and literature at Pomona College, born in Canton (Guangzhou, China) into a family noted for their scholarship and educational leadership. Kearns received his B.S. degree in math and computer science from the University of California at Berkeley in 1985, and his Ph.D. in computer science from Harvard University in 1989, under the supervision of Turing award winner Leslie Valiant. His doctoral dissertation, The Computational Complexity of Machine Learning, was later published by MIT Press as part of the ACM Doctoral Dissertation Award Series in 1990. Before joining AT&T Bell Labs in 1991, he held postdoctoral positions at the Laboratory for Computer Science at MIT, hosted by Ronald Rivest, and at the International Computer Science Institute (ICSI) at UC Berkeley, hosted by Richard M. Karp, both of whom are Turing award winners. Kearns is currently a full professor and National Center Chair at the University of Pennsylvania, where his appointment is split across the Department of Computer and Information Science, and Statistics and Operations and Information Management in the Wharton School. Prior to joining the Penn faculty in 2002, he spent a decade (1991–2001) at AT&T Labs and Bell Labs, including as head of the AI department (with colleagues including Michael L. Littman, David A. McAllester, and Richard S. Sutton), the Secure Systems Research department, and the Machine Learning department (with members such as Michael Collins and its leader Fernando Pereira). Other AT&T Labs colleagues in Algorithms and Theoretical Computer Science included Yoav Freund, Ronald Graham, Mehryar Mohri, Robert Schapire, and Peter Shor, as well as Sebastian Seung, Yann LeCun, Corinna Cortes, and Vladimir Vapnik (the V in VC dimension). Kearns was named a Fellow of the Association for Computing Machinery (2014) for contributions to machine learning, and a fellow of the American Academy of Arts and Sciences (2012). His former graduate students and postdoctoral visitors include Ryan W. Porter and John Langford.
Kearns' work has been covered by media outlets such as MIT Technology Review (2014, Can a Website Help You Decide to Have a Kid?), Bloomberg News (2014, Schneiderman (and Einstein) Pressure High-Speed Trading) and NPR audio (2012, Online Education Grows Up, And For Now, It's Free). Academic life Computational learning theory Kearns and Umesh Vazirani published An Introduction to Computational Learning Theory, which has been a standard text on computational learning theory since it was published in 1994. Weak learnability and the origin of boosting algorithms The question "Is weak learnability equivalent to strong learnability?", posed by Kearns and Valiant (unpublished manuscript 1988; ACM Symposium on Theory of Computing 1989), is the origin of boosting algorithms in machine learning. It received a positive answer from Robert Schapire (1990, a proof by construction, not practical) and Yoav Freund (1993, by voting, also not practical), who then together developed the practical AdaBoost (European Conference on Computational Learning Theory 1995, Journal of Computer and System Sciences 1997), an adaptive boosting algorithm that won the prestigious Gödel Prize (2003). Honors and awards 2021. Member of the U.S. National Academy of Sciences. 2014. ACM Fellow. For contributions to machine learning, artificial intelligence, algorithmic game theory and computational social science. 2012. American Academy of Arts and Sciences Fellow. Selected works 2019. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. (with Aaron Roth). Oxford University Press. 1994. An Introduction to Computational Learning Theory. (with Umesh Vazirani). MIT Press. Widely used as a textbook in computational learning theory courses. 1990. The Computational Complexity of Machine Learning. MIT Press. Based on his 1989 doctoral dissertation; ACM Doctoral Dissertation Award Series in 1990. 1989. Cryptographic limitations on learning Boolean formulae and finite automata. (with Leslie Valiant) Proceedings of the twenty-first annual ACM symposium on Theory of computing (STOC'89). Posed the open question "Is weak learnability equivalent to strong learnability?", the origin of boosting algorithms; an important publication in machine learning. See also Boosting (machine learning) References External links Tribute Day for Leslie Valiant's 60th birthday, May 2009; speakers included Stephen Cook and Michael O. Rabin, both of whom are Turing award winners, and Vijay Vazirani. Year of birth missing (living people) Living people American computer scientists UC Berkeley College of Letters and Science alumni Harvard University alumni University of Pennsylvania faculty Fellows of the Association for Computing Machinery Scientists at Bell Labs Fellows of the American Academy of Arts and Sciences Game theorists Machine learning researchers Members of the United States National Academy of Sciences
10683901
https://en.wikipedia.org/wiki/HP%20Media%20Vault
HP Media Vault
The HP Media Vault is a home print and file server from Hewlett-Packard that runs the Linux operating system, although Debian can be installed on the second-generation MV2. First Generation The Media Vault's processor is a Broadcom BCM4785 MIPS-based system-on-a-chip running Linux and BusyBox v1.00-pre2-based firmware. It has 64 megabytes of RAM, one Gigabit Ethernet interface, and three USB 2.0 ports. The capacity of the device may be expanded using the empty drive bay, which can house an off-the-shelf Serial ATA hard drive. The maximum expanded capacity of MV1 (first-generation) devices is approximately 1.2TB due to memory limitations. One of the advantages of the system is that if the primary drive is lost (it includes some system software that works in conjunction with the firmware), the system can be restored onto a replacement SATA hard drive using HP's nasrecovery software. Since the device supports standard communications protocols (listed below), it can be accessed by Windows, Linux, Mac, and any other OS that supports the needed protocols (see the mount example at the end of this entry). Protocols CIFS DAAP (only by user customization) DLNA FTP HTTP NFS Telnet (disabled by default) Second Generation The second generation of the Media Vault products is powered by an ARM9 Marvell Orion processor and has 128MB of RAM. It has one Gigabit network connector and two USB 2.0 ports. There are two internal disk bays, which support any off-the-shelf SATA hard drive up to 1TB in size. Support Lee Devlin was the hardware architect for the HP Media Vault, and he maintains an unofficial support site for the device. The site includes information on hard drive replacement, restoring a previous snapshot of a PC, and photos of the device internals, as well as setting up a Firefly/iTunes experimental server, among many other articles. A Yahoo group also offers support. External links HP Media Vault Frequently Asked Questions/Knowledge Base Yahoo HP Media Vault group HP Media Vault Flash Demo References HP storage servers Home servers
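The mount example referenced above: since the Media Vault speaks standard protocols such as NFS and CIFS, a Linux client can mount its shares directly. This is a minimal sketch in which the host name, share name, and mount point are hypothetical and depend on how the device is configured:

$ mount -t nfs mediavault:/shares/Volume1 /mnt/mv
$ mount -t cifs //mediavault/Volume1 /mnt/mv -o guest

The first command mounts a share over NFS; the second mounts the same share over CIFS as a guest user.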
53180451
https://en.wikipedia.org/wiki/RedMonk
RedMonk
RedMonk is an industry analyst firm focused on software developers and headquartered in Portland, Maine, United States. It was founded on the premise of the increasing influence of software developers in the technology industry. RedMonk co-founder Stephen O'Grady authored "The New Kingmakers: How Developers Conquered the World", a book that details this premise, and "The Software Paradox: The Rise and Fall of the Commercial Software Market", which considers the changing role of commercial software. RedMonk covers trends in the software industry such as the most-used programming languages. History RedMonk was founded in 2002 by James Governor and Stephen O'Grady. Awards RedMonk co-founder James Governor was awarded the Women in Marketing Equality Advocate award in 2016. RedMonk was highly ranked in a number of categories from the Institute of Industry Analyst Relations in 2008, specifically: Analyst of the year #3: James Governor, RedMonk Analyst firm of the year #4: RedMonk Most relevant #5: RedMonk Most important firm #6: RedMonk See also Industry analyst Software developer References External links Official website Market research companies of the United States Companies based in Portland, Maine 2002 establishments in Maine Software development
63922571
https://en.wikipedia.org/wiki/Creatio
Creatio
Creatio (formerly bpm'online) is a Software as a service (SaaS) low-code solution for process management and CRM (customer relationship management). As of 2020, the Creatio solution stack consisted of Studio Creatio (low-code platform), Sales Creatio (sales force automation software), Marketing Creatio (marketing automation tool) and Service Creatio (help desk software). History Creatio was introduced as bpm'online on February 8, 2011. The bpm'online CRM system was the first product developed on the platform. In November 2011, the bpm'online platform won the CRM Idol EMEA award. Paul Greenberg, a CRM expert, explained the jury’s choice by saying, “It is one of the best graphical designers I’ve seen and actually does make it easy for the non-techie to develop a process that can be injected into a sales component or a marketing function or across the entire enterprise… Clean and easy to use with a very powerful toolset that allows complex process creation from, maybe not novices, but non-technical users.” At that time it had customer data management, sales management, campaign management, time management and document management functions. Version 5.2, released in December 2011, introduced significant upgrades that allowed organizations to better manage customer data and interactions across social media (Facebook, LinkedIn and Twitter) and introduced Google Maps integration. In January 2012, the software was included in “The CRM Watchlist”, compiled by Paul Greenberg, for the first time. bpm'online also won this award in 2013, 2014, 2016 and 2020. Early in 2013, apps for Android and iOS were launched. In June 2013, version 7.0, the next generation of the application development platform, was released. Version 7.3, released in June 2014, featured an enterprise social network and Microsoft Exchange integration among other options. In March 2015, Forrester Research for the first time included bpm’online in its “Forrester Wave” review of top-10 CRM solutions for midsize organizations. In April, the product was first featured in the Gartner Magic Quadrant for the Customer Engagement Center (customer service and support software), and it has been included in the report every year since then. In June 2015, version 7.6 was released with major updates to the sales, marketing and customer service modules. In February 2016, Forrester Research for the first time included bpm’online in its “Forrester Wave” research on cloud-based dynamic case management, reviewing version 7.7 of the platform. Versions 7.8 and 7.9, released in 2016, introduced more functionality to simplify the platform for users with no technical background and enhance their ability to set up user interfaces and develop business processes. In October 2017, version 7.11 was released. Some key updates included new machine learning capabilities and predictive algorithms, an improved marketing campaigns designer, extended BPM and case management capabilities, as well as mobile app enhancements. In March 2018, Forrester Research included bpm’online in its “Forrester Wave” research on cloud-based dynamic case management, reviewing version 7.11 of the platform. The reviewers placed the bpm’online platform in the “Strong performers” category. They highlighted that bpm’online has over “150 case templates and apps available in an external community” and added that the platform is often used as “a lighter and low-code alternative to Pegasystems.” In April, Gartner included bpm’online in its Magic Quadrant for Enterprise High-Productivity Application Platform as a Service.
In March 2019, Forrester Research included the bpm’online low-code platform in its research on top-10 digital process automation providers. The researchers evaluated version 7.13 of the platform and placed it in the “Strong performers” category. In April 2019, version 7.14 was released, which added a customer and partner portal to the platform in addition to several design tools for further customization of the system. In 2019, Gartner acknowledged bpm’online as a “Leader” in two Magic Quadrants, CRM Lead Management and Sales Force Automation (the software first entered these quadrants as a “Niche player” in 2016). The research company also included bpm’online in its review of Enterprise Low-Code Application Platforms. In 2019, the software was included in the Forrester Wave research on customer service solutions. In October 2019, the software was renamed from bpm’online to Creatio. In February 2020, Creatio once again was named the winner of “The CRM Watchlist”. In March 2020, a number of healthcare agencies, hospitals, nonprofits and other organizations battling COVID-19 were granted free access to the Service Creatio platform. In July 2020, Creatio’s low-code platform won the “People's Choice” Stevie Award in the “Digital Process Automation Solution” category. In February 2021, Creatio raised $68 million in a round led by Volition Capital with participation from Horizon Capital. Products Creatio is a Software as a service (SaaS) low-code solution for process management and CRM (customer relationship management). It can be used to automate business tasks, implement rules and develop third-party integrations. The framework was built in .NET; customizations and scripts are written either in C# (server-side code) or JavaScript (client-side code). In addition to Studio Creatio (its low-code platform), Creatio offers three CRM applications: Sales Creatio (sales force automation software), Marketing Creatio (marketing automation tool) and Service Creatio (help desk software). There is also Studio Creatio Free, a free tool for managing business processes and building applications. The most recent version of the Creatio platform (7.17) was released in October 2020. Notes Works cited External links Customer relationship management software .NET software 2011 software Cloud applications Marketing software
68760
https://en.wikipedia.org/wiki/Connection%20Machine
Connection Machine
A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at the Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence (AI) and symbolic processing, but later versions found greater success in the field of computational science. Origin of idea Danny Hillis and Sheryl Handler founded Thinking Machines Corporation (TMC) in Waltham, Massachusetts, in 1983, moving in 1984 to Cambridge, MA. At TMC, Hillis assembled a team to develop what would become the CM-1 Connection Machine, a design for a massively parallel hypercube-based arrangement of thousands of microprocessors, springing from his PhD thesis work at MIT in Electrical Engineering and Computer Science (1985). The dissertation won the ACM Distinguished Dissertation prize in 1985, and was presented as a monograph that overviewed the philosophy, architecture, and software for the first Connection Machine, including information on its data routing between central processing unit (CPU) nodes, its memory handling, and the programming language Lisp applied in the parallel machine. Very early concepts contemplated just over a million processors, each connected in a 20-dimensional hypercube, which was later scaled down. Designs Each CM-1 microprocessor has its own 4 kilobits of random-access memory (RAM), and the hypercube-based array of them was designed to perform the same operation on multiple data points simultaneously, i.e., to execute tasks in single instruction, multiple data (SIMD) fashion. The CM-1, depending on the configuration, has as many as 65,536 individual processors, each extremely simple, processing one bit at a time. CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes. Each subcube contains 16 printed circuit boards and a main processor called a sequencer. Each circuit board contains 32 chips. Each chip contains a router, 16 processors, and 16 RAMs. The CM-1 as a whole has a 12-dimensional hypercube-based routing network (connecting the 2^12 = 4,096 chips), a main RAM, and an input-output processor (a channel controller). Each router contains five buffers to store the data being transmitted when a clear channel is not available. The engineers had originally calculated that seven buffers per chip would be needed, but this made the chip slightly too large to build. Nobel Prize-winning physicist Richard Feynman had previously calculated that five buffers would be enough, using a differential equation involving the average number of 1 bits in an address. They resubmitted the design of the chip with only five buffers, and when they put the machine together, it worked fine. Each chip is connected to a switching device called a nexus. The CM-1 uses Feynman's algorithm for computing logarithms, which he had developed at Los Alamos National Laboratory for the Manhattan Project. It was well suited to the CM-1, using, as it did, only shifting and adding, with a small table shared by all the processors. Feynman also discovered that the CM-1 would compute the Feynman diagrams for quantum chromodynamics (QCD) calculations faster than an expensive special-purpose machine developed at Caltech.
To improve its commercial viability, TMC launched the CM-2 in 1987, adding Weitek 3132 floating-point numeric coprocessors and more RAM to the system. Thirty-two of the original one-bit processors shared each numeric processor. The CM-2 can be configured with up to 512 MB of RAM, and a redundant array of independent disks (RAID) hard disk system, called a DataVault, of up to 25 GB. Two later variants of the CM-2 were also produced: the smaller CM-2a, with either 4096 or 8192 single-bit processors, and the faster CM-200. Due to its origins in AI research, the software for the CM-1/2/200 single-bit processors was influenced by the Lisp programming language, and a version of Common Lisp, *Lisp (spoken: Star-Lisp), was implemented on the CM-1. Other early languages included Karl Sims' IK and Cliff Lasser's URDU. Much system utility software for the CM-1/2 was written in *Lisp. Many applications for the CM-2, however, were written in C*, a data-parallel superset of ANSI C. With the CM-5, announced in 1991, TMC switched from the CM-2's hypercubic architecture of simple processors to a new and different multiple instruction, multiple data (MIMD) architecture based on a fat tree network of reduced instruction set computing (RISC) SPARC processors. To make programming easier, it was made to simulate a SIMD design. The later CM-5E replaces the SPARC processors with faster SuperSPARCs. A CM-5 was the fastest computer in the world in 1993 according to the TOP500 list, running 1024 cores with an Rpeak of 131.0 GFLOPS, and for several years many of the top 10 fastest computers were CM-5s. Visual design Connection Machines were noted for their striking visual design. The CM-1 and CM-2 design teams were led by Tamiko Thiel. The physical form of the CM-1, CM-2, and CM-200 chassis was a cube-of-cubes, referencing the machine's internal 12-dimensional hypercube network, with the red light-emitting diodes (LEDs), by default indicating the processor status, visible through the doors of each cube. By default, when a processor is executing an instruction, its LED is on. In a SIMD program, the goal is to have as many processors as possible working the program at the same time – indicated by having all LEDs being steady on. Those unfamiliar with the use of the LEDs wanted to see the LEDs blink – or even spell out messages to visitors. The result is that finished programs often have superfluous operations to blink the LEDs. The CM-5, in plan view, had a staircase-like shape, and also had large panels of red blinking LEDs. Prominent sculptor-architect Maya Lin contributed to the CM-5 design. Exhibits The very first CM-1 is on permanent display at the Computer History Museum in Mountain View, California, which also has two other CM-1s and a CM-5. Other Connection Machines survive in the collections of the Museum of Modern Art in New York and Living Computers: Museum + Labs in Seattle (CM-2s with LED grids simulating the processor status LEDs), and in the Smithsonian Institution National Museum of American History, the Computer Museum of America in Roswell, Georgia, and the Swedish National Museum of Science and Technology (Tekniska Museet) in Stockholm, Sweden. References in popular culture A CM-5 was featured in the film Jurassic Park in the control room for the island (instead of a Cray X-MP supercomputer as in the novel). The computer mainframes in Fallout 3 were inspired heavily by the CM-5.
See also Blinkenlights Brewster Kahle – lead engineer on the Connection Machine projects Danny Hillis – inventor of the Connection Machine David E. Shaw – creator of NON-VON machine, which preceded the Connection machine slightly FROSTBURG – a CM-5 used by the NSA Goodyear MPP ICL DAP MasPar Parallel computing References Further reading Hillis, D. 1982 "New Computer Architectures and Their Relationship to Physics or Why CS is No Good", Int J. Theoretical Physics 21 (3/4) 255-262. Lewis W. Tucker, George G. Robertson, "Architecture and Applications of the Connection Machine," Computer, vol. 21, no. 8, pp. 26–38, August, 1988. Arthur Trew and Greg Wilson (eds.) (1991). Past, Present, Parallel: A Survey of Available Parallel Computing Systems. New York: Springer-Verlag. Charles E. Leiserson, Zahi S. Abuhamdeh, David C. Douglas, Carl R. Feynman, Mahesh N. Ganmukhi, Jeffrey V. Hill, W. Daniel Hillis, Bradley C. Kuszmaul, Margaret A. St. Pierre, David S. Wells, Monica C. Wong, Shaw-Wen Yang, and Robert Zak. "The Network Architecture of the Connection Machine CM-5". Proceedings of the fourth annual ACM Symposium on Parallel Algorithms and Architectures. 1992. W. Daniel Hillis and Lewis W. Tucker. The CM-5 Connection Machine: A Scalable Supercomputer. In Communications of the ACM, Vol. 36, No. 11 (November 1993). External links Gallery of CM-5 images CM-5 Manuals Tamiko Thiel on the visual design of the CM-1/2/200 Feynman and the Connection Machine Liquid Selves, an animated short film rendered on a CM-2 A preserved CM-2a at the Corestore Computer Museum Supercomputers Parallel computing Massively parallel computers Thinking Machines supercomputers Computer-related introductions in 1984
2079041
https://en.wikipedia.org/wiki/I-War%20%281995%20video%20game%29
I-War (1995 video game)
I-War is a shooter video game developed by Imagitec Design and published by Atari Corporation exclusively for the Atari Jaguar in North America and Europe on December 15, 1995. It was the last title developed by Imagitec for the Jaguar before the company ended its relationship with Atari Corp., which would discontinue the platform in April 1996. When the databases of the Override mainframe supercomputer begin mutating and blocking the I-Way computer network, the player is tasked with piloting an antivirus tank vehicle and entering the virtual world to clear out the network, eliminate computer viruses and eradicate the mutated databases. The game was originally announced in late 1994 under a different title. I-War received mixed to negative reception when it was originally released. Gameplay I-War is a shooter game that is primarily played from a first-person perspective, in which the player takes control of an antivirus tank vehicle to enter the virtual world of a worldwide computer network nicknamed "I-Way" and fight computer viruses, destroy mutated databases, and collect datapods as the main objectives through 21 different levels, each one increasing in scope and complexity as the player progresses through the game, with later levels introducing new enemy types and weapons that can be equipped on the player's tank. Before starting the game, the player has the option to choose between three types of tank, each having its own advantages and disadvantages. There are three difficulty levels that the player can choose at the options menu, while other settings are available by entering a cheat code. There is also a two-player versus mode. During gameplay, the player can change the camera angle by pressing its respective number on the controller's keypad, in addition to activating a level map and other features. Unlike other games in the genre that were released for the system, such as Cybermorph and Hover Strike, the levels in I-War are sorted into enclosed chambers with one-way teleportation in order to avoid repeating levels. Once the set number of mutated databases has been destroyed and the required datapods collected within the respective level, the player has to come back to the starting point and enter the Data Link bonus rounds, which involve grabbing datapods in an attempt to gain an extra life while either increasing or decreasing the tank's speed; these rounds also act as transition points to upper levels. After completing a level, the player can choose whether to save their progress, which is kept via the cartridge's internal EEPROM, while high scores and other setting changes are saved automatically. If all lives are lost, the game is over. Plot In the future, after 20 years in development, the Override mainframe supercomputer, which is buried at the South Pole in order to keep its core at very low temperatures and prevent overheating, went online on schedule. Its main function is to handle information on the ever more complex internet, called the I-Way, through the advanced processing capabilities of the technology inside the supercomputer, and as such, society started depending on it. It worked for years without exhibiting issues until its databases started mutating and computer viruses began to clog the I-Way, leading to delays, slow information transfers and other issues that bring the Override to the point of self-destruction as a result of the now-mutated databases.
In response to the situation, the player is assigned to pilot an antivirus tank vehicle in order to destroy the mutated databases and viruses that are clogging the I-Way, in addition to collecting datapods. After traversing multiple nodes, the player finally arrives at the Override Central Block and destroys the boss database by overloading it with viruses, saving the I-Way in the process, until next time. Development and release I-War was originally advertised under the name Redemption in late 1994 and was also known internally as Dreadnaught, with plans for an original release around the second quarter of 1995. The music was composed by Alastair Lindsay. Development of the game was completed on December 11, 1995, a few days before release. The game was showcased during the Fun 'n' Games Day event hosted by Atari. Reception and legacy I-War has received mixed to negative reception since its release. In 1997, two years after its release, the game's trademark was abandoned. Notes References External links I-War at AtariAge I-War at GameFAQs I-War at Giant Bomb I-War at MobyGames 1995 video games Atari games Atari Jaguar games Atari Jaguar-only games Commercial video games with freely available source code Imagitec Design games Multiplayer and single-player video games Science fiction video games Shooter video games Split-screen multiplayer games Tank simulation video games Video games about virtual reality Video games developed in the United Kingdom Video games scored by Alastair Lindsay Video games set in the future Works set in computers
38339377
https://en.wikipedia.org/wiki/Perspecsys
Perspecsys
Perspecsys Inc. is a cloud computing security company that provides cloud data protection software. Perspecsys has offices in the Toronto area; Tysons Corner, Virginia; San Francisco, California; London, England; Paris, France; and Berlin, Germany. Perspecsys specializes in cloud data privacy, data residency/sovereignty, and data security software that enables compliance with industry regulations, directives, and security requirements when adopting the cloud. Banking and financial services, healthcare, retail, and government entities must adhere to strict guidelines, including PCI DSS, ITAR, FERPA, HIPAA, and HITECH, when handling sensitive personal data in cloud applications. Technology The AppProtex Cloud Data Protection Gateway secures data in software as a service and platform as a service provider applications through the use of encryption or tokenization. Gartner refers to this type of technology as a cloud encryption gateway, and categorizes providers of this technology as cloud access security brokers. The United States Patent and Trademark Office (USPTO) has granted Perspecsys U.S. Patent No. 9,021,135 for its System and Method for Tokenization of Data for Storage in a Cloud. The main component of the cloud encryption gateway is the AppProtex Cloud Data Protection Gateway Server, which acts as an intercepting software proxy. The gateway server provides the core data privacy, residency, and security services for the gateway. AppProtex Discovery & Analyze capabilities allow visibility into the information users are sharing with cloud applications. Users may define encryption and tokenization options at the field level. The cloud data protection gateway allows encryption with any third-party JCA/JCE-compliant cryptographic module, including FIPS 140-2 (Federal Information Processing Standard) validated modules. Cloud data is secured, and end users maintain full functionality, such as the ability to search, sort, and e-mail using data that has been either encrypted or tokenized. The data that flows between the cloud application and the end user is interpreted by the gateway. For cloud applications that feature email, the AppProtex Communications Server can enable the secure transfer of email. Additionally, the AppProtex Server facilitates the deployment of the cloud security gateway via IaaS partners such as Amazon Web Services, CSC, and Fujitsu. Perspecsys is a Salesforce AppExchange Partner and provides tokenization or encryption of Salesforce.com, Chatter, Force.com, and Wave Analytics cloud data. The AppProtex Cloud Data Protection Gateway secures cloud data across cloud applications, including Oracle CRM on Demand, Oracle Fusion CRM, ServiceNow, SuccessFactors, AppExtremes, and Xactly Incent. Standards The Perspecsys cloud encryption gateway uses either tokenization or encryption for cloud security. Its tokenization option was evaluated by Coalfire, a PCI DSS Qualified Security Assessor (QSA) and a FedRAMP-accredited Third Party Assessment Organization (3PAO), to ensure that it adheres to industry guidelines. The gateway also allows encryption modules from other third-party providers (such as McAfee, Voltage Security, SafeNet, and Symantec) to encrypt cloud data, including modules that are validated under FIPS 140-2 (Federal Information Processing Standard), issued by the National Institute of Standards and Technology (NIST).
Funding & Acquisition In May 2013, Perspecsys secured $12 million in Series B funding, co-led by Paladin Capital Group and Ascent Venture Partners and joined by return backer Intel Capital and other existing institutional investors. Together with the Series A round, this financing brought the total investment in Perspecsys to over $20 million. The Series A round, which totaled $8 million, was led by Intel Capital, the global investment arm of Intel, in May 2011. GrowthWorks and the MaRS Investment Accelerator Fund have also invested in Perspecsys. On July 30, 2015, Blue Coat Systems announced it had acquired Perspecsys in order to expand its cloud security offerings; the acquisition price was estimated at $180–200 million. Subsequently, on June 12, 2016, Symantec Corporation announced that it would acquire Blue Coat Systems for approximately $4.65 billion in cash. Greg Clark, Chief Executive Officer of Blue Coat, was appointed Chief Executive Officer of Symantec. References External links Cloud computing providers Cloud storage gateways Cloud applications Software companies of Canada
12769879
https://en.wikipedia.org/wiki/Mkfs
Mkfs
In computer operating systems, mkfs is a command used to format a block storage device with a specific file system. The command is part of Unix and Unix-like operating systems. In Unix, a block storage device must be formatted with a file system before it can be mounted and accessed through the operating system's filesystem hierarchy. History The command was originally implemented in the first version of Unix as a method to initialize either a DECtape (using the "t" argument) or an RK03 disk pack (using the "r" argument). The initialization process would write formatting data to the device so that it contained an empty file system. It created the super-block, i-list, and free list on the storage device and established the root directory with entries for "." and ".." (self and parent, respectively). The RK03 disk packs had 4872 available blocks after initialization, while the tapes had 578 blocks (at 512 bytes/block). The mkfs executable was kept in the /etc directory instead of a binary directory so it would not be inadvertently called and destroy information. Later implementations of Unix-like operating systems included the mkfs command, including HP-UX, Minix, SunOS and Linux. Syntax The basic syntax of the command, which is common to all modern implementations, is: $ mkfs -t <fs type> <device> where 'fs type' is the type of the filesystem and 'device' is the target Unix device to write the filesystem data to. Usually the "device" is a drive partition. Often the command is simply a wrapper for another command that performs the formatting for a specific file system. For example, $ mkfs -t ext3 /dev/sda1 would call the command mke2fs while passing along the appropriate arguments to format the device /dev/sda1 with the ext3 filesystem. The default options for the command are stored in the file mke2fs.conf, usually in the /etc directory. Depending on the implementation and the specific file system requested, the command may have many options that can be specified, such as inode size, block size, volume label, and other features (see file system for details). The filesystem-specific commands that mkfs calls may be invoked directly by the user from the command line. In Linux, the convention has been to name the filesystem-specific commands mkfs.<fs-type>, where <fs-type> is an abbreviation for the file system, e.g., mkfs.ext2, mkfs.msdos, mkfs.minix, etc. File systems supported by the command vary by implementation and include: MSDOS, SCO bfs, CPM, ext2, ext3, ext4, minix, fat (vfat), HFS, VXFS, RF disk, RK disk, DECtape, and NTFS. See also dd — convert and copy a file e2fsprogs — a set of utilities for maintaining the ext2, ext3 and ext4 file systems fdisk — examine and write partition table fsck — file system check mkisofs — make an iso file system mount — mount a file system parted — partition manager References External links mkfs manual Unix file system-related software
62995549
https://en.wikipedia.org/wiki/12%20Step%20foot%20controller
12 Step foot controller
The 12 Step foot controller is a bass pedal-style programmable MIDI controller pedal keyboard made by Keith McMillen Instruments which was released in 2011. It has small, soft, rubbery keys that are played with the feet. As a MIDI controller, it does not make or output any musical sounds by itself; rather, it sends MIDI (Musical Instrument Digital Interface) messages about which notes are played (and with which types of expression or pressure) to an external synth module or computer music program running on a laptop or other computer. Each key on the 12 Step senses the velocity, aftertouch pressure, and the amount of tilt the player is applying with her feet. The messages from the player's foot presses can be sent via USB to a computer-based virtual instrument or to a synthesizer or other electronic or digital musical instrument. The expressive nuances in playing the 12 Step can be used to make a virtual instrument or synthesizer's melodic line change in sound or timbre. For example, a melody line could be played to get louder and softer by pressing the keys harder or more gently; by continuing to hold down a long note, the player could trigger effects on the synth patch such as vibrato; and by tilting the foot on the key, they could trigger a pitch bend (depending on the user's programming of the 12 Step and the design of the synth patch). The 12 Step's keys can be used to play individual notes in many octaves, enabling it to be used to play anything from deep-pitched basslines to high-pitched melody lines. As well, the keys on the 12 Step can be programmed to play chords of up to five unique notes per rubbery key (e.g., the C note can be programmed to play a C major chord, the D note can be programmed to play a "d minor" chord, and so on). The programmable chord feature enables performers to play chords with their feet and accompany themselves or perform as a one-man band. The 12 Step has 59 factory preset programming choices, including a chromatic scale and many different types of chords (major, minor, dominant seventh, power chords, etc.). The user can also program their own chords for each key of the instrument. The 12 Step's keys can also be used to trigger "clips", backing tracks, or song sections in digital audio workstations, music sequencers, and music apps. History In 2005, Keith McMillen founded Keith McMillen Instruments (KMI), a hardware and software company that designs music and stage equipment that interfaces with computers. He founded the company after touring as a musician with large, cumbersome gear, recognizing the need for equipment compact enough to easily carry on an airplane. The resulting devices are "polyphonic multidimensional controllers," and in addition to USB and MIDI capability, some can use the proposed MIDI extension MPE, which enables polyphonic aftertouch and sophisticated responsiveness. Keith McMillen Instruments' engineers design a range of MIDI devices and controllers. They noted that most bass pedal-type pedal keyboards did not give the player much expressive control. Most 1980s and 1990s-era bass pedal MIDI controllers are simply on-off switches, so players could not add expressive changes of dynamics or nuance to their foot-played musical lines. Keith McMillen Instruments' first exploration of foot controllers was the SoftStep, which was released in 2011.
The SoftStep has "ten pressure- and direction-sensitive backlit keys, [and] a 4-character LED display" and it can send messages to computer audio programs, enabling musicians to, say, start a sequencer or trigger a device. The buttons on the SoftStep are user-programmable, so each person can customize their SoftStep to control different functions in their computer music or electronic gear set-up. The one drawback of the SoftStep is that even though it can be programmed to play individual notes on a synthesizer, it was not intended to be used as a musical instrument. The company's engineers therefore set out to create a new programmable foot controller, the 12 Step, designed for expressive pedal keyboard playing. Elements The pedal keyboard of the 12 Step has 13 button-style keys laid out in a musical keyboard fashion, appearing like the layout of the chromatic octave starting on C on a piano keyboard. The notes that would be the black keys (accidentals) on a piano keyboard are raised. There is also an "Enter" button which is used to access other commands (when the "Enter" button is activated, the pedal keyboard notes temporarily turn into command buttons to change the preset or registration bank (a group of presets, much like one might see in a digital pipe organ), or to change octaves up or down). The 12 Step is USB plug-and-play, which means that it can be plugged directly into a compatible computer without needing software drivers. As such, a musician with a virtual instrument on her computer could play scales and melodies using the virtual instrument just by plugging the 12 Step into a USB port on the computer. The 12 Step can be plugged directly into some 2010s-era synth modules and hardware electronic instruments that have USB ports. The 12 Step can also be used to play 1980s and 1990s-era synthesizers and hardware instruments that are pre-USB (e.g., a DX-7 synth or drum machine), or which do not have a USB connection and only have 5-pin MIDI connectors, by using the KMI MIDI Expander, a Keith McMillen Instruments-made unit that is sold separately. The KMI MIDI Expander is a small metal-cased unit with jacks for 5-pin MIDI cable "in" and "out", USB connectors for power from a wall outlet and for connecting to the 12 Step, and LEDs that light up when MIDI messages are sent in or out of the unit. The MIDI Expander transforms the 12 Step's USB output into MIDI messages that can be sent over 5-pin MIDI connectors. Each of the keys on the 12 Step has bright white backlighting from an LED, so the keys can be seen on a dark stage. As well, each key has a red LED light that turns on when the key is pressed, to help the player confirm that the intended key was struck. The keys have no moving parts; instead, they have sensors embedded into soft, rubbery keys. The keys sense velocity (how hard or soft the foot hits the key), poly aftertouch pressure (whether the foot continues to press the key after the initial strike, which can be used to add nuance to sustained notes, such as by triggering vibrato or other effects) and foot tilt, which can trigger pitch bend (a gliding glissando sound). It weighs 1.0 lb (453 g) and measures 17.5 × 4 × 0.75 in (445 × 102 × 19 mm). The keyboard can be set to play one note only, or set to a "poly" mode, which can sound multiple notes at the same time. The "one note only" setting has long been a standard feature of pedal keyboards intended for playing basslines, because in many cases, having two bass notes sounding simultaneously can be unduly "muddy".
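To make the message types concrete, here is a minimal sketch, using the third-party Python library mido, of the kind of MIDI stream such a key could generate: a note-on whose velocity reflects the force of the strike, polyphonic aftertouch messages as the foot keeps pressing, and a pitch-wheel message for foot tilt. The port name and all values are assumptions for illustration; this is not vendor code.

import time
import mido  # third-party MIDI library (pip install mido python-rtmidi)

# Port name is an assumption; mido.get_output_names() lists the real ports.
out = mido.open_output("SynthModule")

out.send(mido.Message('note_on', note=36, velocity=96))           # firm strike on a low C
for pressure in (40, 70, 100):                                    # continued foot pressure
    out.send(mido.Message('polytouch', note=36, value=pressure))  # polyphonic aftertouch
    time.sleep(0.1)
out.send(mido.Message('pitchwheel', pitch=2048))                  # foot tilt: pitch bend up
out.send(mido.Message('note_off', note=36))                       # release the key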
Each preset also has settings for legato, hold, or "toggle". In "Legato" mode, it is like playing a piano with the damper pedal pressed: each newly pressed note keeps sustaining, and the programming automatically replaces any subsequent note in a smooth, seamless legato fashion. Pressing the middle button mutes any lingering notes or chords. "Toggle" allows the user to switch between modes. One feature in the 12 Step not found in other MIDI foot controllers is that each key can be programmed to play up to five notes. This way, a violin player performing a pop song as a one-man band could program the 12 Step keys to play the chords she needs. For example, the C button could be programmed to play a C major triad chord, the D key could play a d minor triad, the G key could play a G dominant seventh chord, the A key could play an "a minor" seventh chord, and so on (a minimal sketch of this kind of chord mapping appears after this section). Since the chords are user-programmed, the chord voicings for these chords can cover multiple octaves. A bass player in a power trio could program the 12 Step to play power chords, enabling her to provide chordal accompaniment for the lead guitarist's guitar solo with her feet while she plays bass with her hands. While the presets often provide chords in close voicing (all notes within an octave), there are no technical restrictions on octaves for programming (at least within the standard range of MIDI notes). As such, open-voicing chords can be programmed, such as jazz chord voicings that add higher extensions (e.g. 11ths or 13ths). As well, since the chords are programmed, there are none of the limitations that a human keyboard player might face; the 12 Step can perform 10ths, 11ths, and 13ths that would be challenging or impossible for a pianist to play with one hand. As well, since the 12 Step allows users to program any combination of five notes per rubbery key, the 12 Step could also be used to provide a deep sub-bass note and a four-note chord in a standard accompaniment register (i.e. around middle C on a piano). A 12 Step programmed in this fashion could provide a one-man band with a simple bass part and chordal accompaniment. The first preset is a chromatic scale starting on C. But even users who only want to play individual notes are not limited to that scale or arrangement. The user could create presets for all of the different keys that they use, so that the keys of the 12 Step could be used to play in different musical keys, while maintaining the familiar C major pattern. For example, if a performer wished to play a song in C# major, the entire chromatic scale of the 12 Step could be transposed up a semitone. Thus, by playing the song using the keys (the buttons on the 12 Step) for C major, the synthesizer would produce a sound transposed to C# major. Whatever is programmed into the keys, whether it is individual notes or chords, can be transposed up or down by several octaves by using the "Select" key and then pressing the octave up or octave down keys (which are the regular note keys, which serve as function buttons once the "Select" button is engaged). The back of the 12 Step has several connectors: a 1/8" expression pedal input, a USB port for connecting to the optional MIDI Expander unit, and a USB port for connecting to a computer or hardware electronic device (synth module, sequencer, etc.). Each key has a little red LED light that illuminates when the key is pressed, which helps the performer confirm which note they have pressed.
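The chord-per-key idea described above can be sketched as a simple lookup table from pedal-key numbers to lists of up to five MIDI notes, again using the mido library. The key numbers, voicings, and port name below are hypothetical; the 12 Step's actual presets are configured in its editor software, not in Python.

import mido

# Hypothetical chord table: each pedal key triggers up to five MIDI notes.
CHORDS = {
    36: [48, 52, 55],           # "C" key -> C major triad
    38: [50, 53, 57],           # "D" key -> d minor triad
    43: [43, 55, 59, 62, 65],   # "G" key -> G7 with an added low G (open voicing)
}

def press(out, key, velocity, transpose=0):
    """Send every programmed note for one pedal key, optionally transposed."""
    for note in CHORDS.get(key, [key]):      # unprogrammed keys play themselves
        out.send(mido.Message('note_on', note=note + transpose, velocity=velocity))

out = mido.open_output("SynthModule")        # port name is an assumption
press(out, 36, velocity=90)                  # C major chord
press(out, 36, velocity=90, transpose=1)     # the same shape, transposed to C# major

Because the table is just data, the same foot pattern can be reused in any key by changing the transpose value, which mirrors the preset-transposition workflow described above.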
A small alphanumeric LED panel can show up to four characters; some of the preset names include "bEnD", "POLY" and "5OCt" (the last one for a five-octave preset). (See the list of 12 Step presets below.) The drumset preset ("dSEt") automatically transmits on the General MIDI electronic drums channel; in this setting, the keys on the 12 Step trigger drum and cymbal sounds. The 12 Step has 59 factory presets, such as a chromatic scale, major chords, minor chords, suspended fourth ("sus") chords, power chords, and diatonic chords (in the key of C major, this would be the chords C major, d minor, e minor, F major, G7 and so on), to name a few. The user can program and save up to 128 presets in total (so using all 128 slots for user-programmed presets requires replacing the factory presets) and give them names that will appear on the display panel. The user can select factory or user-created presets by pressing the "Select" key and then pressing the numbered keys (the keys C, D, E, F, G, A, B and the high C that are usually played, which become function keys numbered 1 through 8 when the "Select" button is engaged and its LED is flashing). The user can program the 12 Step to do a "program change" when a certain preset is selected. Using this feature, a user could program the 12 Step to send a program change message to their synthesizer module, selecting a certain synth patch or sound when a certain 12 Step preset is chosen. For example, a user could program the 12 Step to change their synth module to an electric bass sound when the "BASS" 12 Step preset is selected. The user needs to download the free 12 Step Editor program, run on a laptop, desktop, or tablet computer, to program new presets or to change the unit's settings (such as the touch sensitivity of the keys). The 12 Step gets its power over USB, either from the computer it is plugged into or from the Expander unit's port (the Expander is powered by a wall adapter). List of factory presets The factory presets include scales, dyads (two notes played simultaneously), chords, and articulation changes and other functions. The names that are provided are the preset names that appear in the four-character LED display on the 12 Step. Users can change the display names using the 12 Step editing software (which must be used on a computer). Scales These include CHrO ("Chromatic Scale"), which automatically loads when the 12 Step powers up; LEAd ("Blues Lead"); bLUE ("Blues Bass"); PEnt ("Major Pentatonic"); and -Pnt ("Minor Pentatonic"). Dyads The presets include a wide range of dyads, which are two notes that are sounded simultaneously: OCt ("Octave"); 5OCt ("5 Octaves"); -3rd ("Minor 3rds"); 3rd ("Major 3rds"); dIA3 ("Diatonic 3rds"); 4tHS ("4ths"); dIA4 ("Diatonic 4ths"); StC4 ("Stacked 4ths"); 5tHS ("5ths"); dIA5 ("Diatonic 5ths"); StC5 ("Stacked 5ths"); -6tH ("Minor 6ths"); 6tHS ("Major 6ths"); dIA6 ("Diatonic 6ths"); -7tH ("Minor 7ths"); 7tHS ("Major 7ths"); -9tH ("Minor 9ths"); 9tHS ("Major 9ths"); 10S ("Minor 10ths"); 10tH ("Major 10ths"); trtn ("Tritone"). Chords Some of the chord presets also have an articulation or playing-style feature which is turned on simultaneously, such as POtG ("Power Chords Toggle"), which sets up power chords for each key of the 12 Step while also activating the "toggle" playing feature. Some of the chord presets are designed for musicians and bands that use Drop D tuning.
EPO ("Power Chords Legato"); SUS9 ("Sus9 Chords")-trd ("Minor Triads"); trAd ("Major Triads"); dtrd ("Diatonic Triads"; -145 ("1-4-5-7 Minor Chords"); 1457 ("1-4-5-7 Major Chords"); dI ("Diminished Chords"); AUG ("Augmented Triads"); PO ("Power Chords Normal"); POtG ("Power Chords Toggle"); InPO ("Inverted Power Chords"); d_LO ("Drop D −12"); drOP ("Drop D Legato"); -6CH ("Minor 6th Chords"); 6CHd ("Major 6th Chords"); -FL7 ("Minor 7 Chords"); FLt7 ("Dominant 7 Chords"); dI7C ("Diatonic 7th Chords (Major)"); SUS4 ("Sus 4 Chords"). Articulation or other functions Notable presets in this group are CLIP ("Live Clip Launching), which is set up to launch sound clips in Ableton Live; and A__b ("2 Voices"), which enables one 12 Step to control two different synth voices (or two different synth modules) on two different MIDI channels. bEnD ("Tilt Pitch Bend"); LGtO ("Legato"); tOGL ("Toggle"); PrES ("Pressure Volume"); CLIP ("Live Clip Launching); POLY ("Polyphonic") AFtr ("Poly Aftertouch"); A__b ("2 Voices"); CrOS ("Voice XFade"); PAn ("Key Number Panning"); dSEt ("Drum Set"). Reception Kev Choice and Albert Mathias, the reviewers from Keyboard Mag, call the 12 Step a "...welcome alternative to heavy, cumbersome foot controllers" that are hard to program. The reviewers state that the manual and software editor program are "...well written and clear". They state that playing some basslines on the small keys, such as "...walking bass lines, for example, might make one wish for shoes with bigger heels or pointier tips", and they suggest sitting down to play the instrument. The reviewers state that the 12 Step is easy to use with Propellerhead, Reason and Ableton Live. They say that in addition to its usefulness for pianists and other instrumentalists, it has potential for use by "...deejays, emcees, or anyone on stage responsible for triggering samples or generating tones". Alex Maiolo from Tape Op had a positive review of the 12 Step, which noted that "most [foot controllers] are expensive and bulky", making them infeasible for the typical musician who only needs occasional foot-triggered notes. He calls it a "...quick, cheap, easy, reliable, portable bass pedal solution" . He noted that given that bands in the mid-2010s are touring with smaller numbers of personnel, yet still trying to recreate their studio recording sounds in live shows, the 12 Step could solve this problem, as it can be used as a "clip launcher for hardware and software samplers". Juan Alderete and Nick Reinhart from Pedals and Effects were pleased that Keith McMillen is making a variety of lightweight music gear for travelling musicians and they positively note the 12 Step's "bullet proof" and durable construction. They note that since "...most bands these days seem to be either two or three musicians, the 12 Step is a great way to step up your band's sound" by adding "Moog bass, or control samples via my feet", so a band can "sound like [they] do on the record" in live shows. Reviewer Nick Batt demonstrated and commented on the 12 Step in a 2012 video review for SonicState. Batt praised the 12 Step, saying "a lot of thought has gone into it", and he stated that as of 2012, no other MIDI foot controller offered the same level of sophisticated control options (apart from the McMillen SoftStep). Batt states that he found pressing the small keys hard while wearing shoes, but he acknowledged that there are Hammond organists who have made videos of themselves playing rapid organ basslines on the 12 Step. 
He said the strong point of the unit is its "potential for customizable control" of computer music applications such as Ableton, for instance triggering clips or "scenes", particularly for solo performers or one-man bands. In Sam Mallery's review for B&H Photo, he calls Keith McMillen Instruments "...among the most innovative and forward-thinking manufacturers in the pro audio industry today" and says that it is "no surprise that the 12 Step is so intelligently designed and easy to use". He praises the "...illuminated and expressive keys" and states that musicians and DJs will find the 12 Step useful for shows. References External links Keith McMillen Instruments – 12 Step Electric and electronic keyboard instruments MIDI controllers
252814
https://en.wikipedia.org/wiki/HSL%20and%20HSV
HSL and HSV
HSL (for hue, saturation, lightness) and HSV (for hue, saturation, value; also known as HSB, for hue, saturation, brightness) are alternative representations of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes. In these models, colors of each hue are arranged in a radial slice, around a central axis of neutral colors which ranges from black at the bottom to white at the top. The HSL representation models the way different paints mix together to create color in the real world, with the lightness dimension resembling the varying amounts of black or white paint in the mixture (e.g. to create "light red", a red pigment can be mixed with white paint; this white paint corresponds to a high "lightness" value in the HSL representation). Fully saturated colors are placed around a circle at a lightness value of ½, with a lightness value of 0 or 1 corresponding to fully black or white, respectively. Meanwhile, the HSV representation models how colors appear under light. The difference between HSL and HSV is that a color with maximum lightness in HSL is pure white, but a color with maximum value/brightness in HSV is analogous to shining a white light on a colored object (e.g. shining a bright white light on a red object causes the object to still appear red, just brighter and more intense, while shining a dim light on a red object causes the object to appear darker and less bright). A drawback of both HSV and HSL is that they do not effectively separate color into its three components according to human perception of color. This can be seen when the saturation settings are altered: it is quite easy to notice a difference in perceptual lightness despite the "V" or "L" setting being fixed. Basic principle HSL and HSV are both cylindrical geometries, with hue, their angular dimension, starting at the red primary at 0°, passing through the green primary at 120° and the blue primary at 240°, and then wrapping back to red at 360°. In each geometry, the central vertical axis comprises the neutral, achromatic, or gray colors ranging, from top to bottom, white at lightness 1 (value 1) to black at lightness 0 (value 0). In both geometries, the additive primary and secondary colors—red, yellow, green, cyan, blue and magenta—and linear mixtures between adjacent pairs of them, sometimes called pure colors, are arranged around the outside edge of the cylinder with saturation 1. These saturated colors have lightness 0.5 in HSL, while in HSV they have value 1. Mixing these pure colors with black—producing so-called shades—leaves saturation unchanged. In HSL, saturation is also unchanged by tinting with white, and only mixtures with both black and white—called tones—have saturation less than 1. In HSV, tinting alone reduces saturation. Because these definitions of saturation—in which very dark (in both models) or very light (in HSL) near-neutral colors are considered fully saturated—conflict with the intuitive notion of color purity, often a conic or biconic solid is drawn instead, with what this article calls chroma as its radial dimension (equal to the range of the RGB values), instead of saturation (where the saturation is equal to the chroma over the maximum chroma in that slice of the (bi)cone).
Confusingly, such diagrams usually label this radial dimension "saturation", blurring or erasing the distinction between saturation and chroma. As described below, computing chroma is a helpful step in the derivation of each model. Because such an intermediate model—with dimensions hue, chroma, and HSV value or HSL lightness—takes the shape of a cone or bicone, HSV is often called the "hexcone model" while HSL is often called the "bi-hexcone model". Motivation The HSL color space was invented for television in 1938 by Georges Valensi as a method to add color encoding to existing monochrome (i.e. only containing the L signal) broadcasts, allowing existing receivers to receive new color broadcasts (in black and white) without modification, as the luminance (black and white) signal is broadcast unmodified. It has been used in all major analog broadcast television encodings, including NTSC, PAL and SECAM, and all major digital broadcast systems, and is the basis for composite video. Most televisions, computer displays, and projectors produce colors by combining red, green, and blue light in varying intensities—the so-called RGB additive primary colors. The resulting mixtures in RGB color space can reproduce a wide variety of colors (called a gamut); however, the relationship between the constituent amounts of red, green, and blue light and the resulting color is unintuitive, especially for inexperienced users, and for users familiar with subtractive color mixing of paints or traditional artists' models based on tints and shades. Furthermore, neither additive nor subtractive color models define color relationships the same way the human eye does. For example, imagine we have an RGB display whose color is controlled by three sliders ranging from 0 to 255, one controlling the intensity of each of the red, green, and blue primaries. If we begin with a relatively colorful orange, with sRGB values R = 217, G = 118, B = 33, and want to reduce its colorfulness by half to a less saturated orange, we would need to drag the sliders to decrease R by 31, increase G by 24, and increase B by 59. In an attempt to accommodate more traditional and intuitive color mixing models, computer graphics pioneers at PARC and NYIT introduced the HSV model for computer display technology in the mid-1970s, formally described by Alvy Ray Smith in the August 1978 issue of Computer Graphics. In the same issue, Joblove and Greenberg described the HSL model—whose dimensions they labeled hue, relative chroma, and intensity—and compared it to HSV. Their model was based more upon how colors are organized and conceptualized in human vision in terms of other color-making attributes, such as hue, lightness, and chroma, as well as upon traditional color mixing methods—e.g., in painting—that involve mixing brightly colored pigments with black or white to achieve lighter, darker, or less colorful colors. The following year, 1979, at SIGGRAPH, Tektronix introduced graphics terminals using HSL for color designation, and the Computer Graphics Standards Committee recommended it in their annual status report. These models were useful not only because they were more intuitive than raw RGB values, but also because the conversions to and from RGB were extremely fast to compute: they could run in real time on the hardware of the 1970s. Consequently, these models and similar ones have become ubiquitous throughout image editing and graphics software since then. Some of their uses are described below.
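The contrast with the slider exercise above can be made concrete with Python's standard colorsys module, which implements these conversions: halving saturation is a single-coordinate edit in HLS, instead of three coordinated RGB changes. The specific orange used here is the example color from the text.

import colorsys

# Halving colorfulness in RGB takes three coordinated slider moves; in HSL it
# is a single edit to the S coordinate while H and L stay put.
r, g, b = 217/255, 118/255, 33/255             # the colorful orange from the text
h, l, s = colorsys.rgb_to_hls(r, g, b)         # note colorsys's H-L-S argument order
r2, g2, b2 = colorsys.hls_to_rgb(h, l, s / 2)  # same hue and lightness, half saturation
print([round(c * 255) for c in (r2, g2, b2)])  # a duller orange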
Formal derivation Color-making attributes The dimensions of the HSL and HSV geometries—simple transformations of the not-perceptually-based RGB model—are not directly related to the photometric color-making attributes of the same names, as defined by scientists such as the CIE or ASTM. Nonetheless, it is worth reviewing those definitions before leaping into the derivation of our models. The color-making attributes which follow are defined as: Hue The "attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors: red, yellow, green, and blue, or to a combination of two of them". Radiance (Le,Ω) The radiant power of light passing through a particular surface per unit solid angle per unit projected area, measured in SI units in watt per steradian per square metre (W·sr−1·m−2). Luminance (Y or Lv,Ω) The radiance weighted by the effect of each wavelength on a typical human observer, measured in SI units in candela per square meter (cd/m2). Often the term luminance is used for the relative luminance, Y/Yn, where Yn is the luminance of the reference white point. Luma (Y′) The weighted sum of gamma-corrected R′, G′, and B′ values, used in Y′CbCr, for JPEG compression and video transmission. Brightness (or value) The "attribute of a visual sensation according to which an area appears to emit more or less light". Lightness The "brightness relative to the brightness of a similarly illuminated white". Colorfulness The "attribute of a visual sensation according to which the perceived color of an area appears to be more or less chromatic". Chroma The "colorfulness relative to the brightness of a similarly illuminated white". Saturation The "colorfulness of a stimulus relative to its own brightness". Brightness and colorfulness are absolute measures, which usually describe the spectral distribution of light entering the eye, while lightness and chroma are measured relative to some white point, and are thus often used for descriptions of surface colors, remaining roughly constant even as brightness and colorfulness change with different illumination. Saturation can be defined as either the ratio of colorfulness to brightness, or that of chroma to lightness. General approach HSL, HSV, and related models can be derived via geometric strategies, or can be thought of as specific instances of a "generalized LHS model". The HSL and HSV model-builders took an RGB cube—with constituent amounts of red, green, and blue light in a color denoted (R, G, B)—and tilted it on its corner, so that black rested at the origin with white directly above it along the vertical axis, then measured the hue of the colors in the cube by their angle around that axis, starting with red at 0°. Then they came up with a characterization of brightness/value/lightness, and defined saturation to range from 0 along the axis to 1 at the most colorful point for each pair of other parameters. Hue and chroma In each of our models, we calculate both hue and what this article will call chroma, after Joblove and Greenberg (1978), in the same way—that is, the hue of a color has the same numerical values in all of these models, as does its chroma. If we take our tilted RGB cube, and project it onto the "chromaticity plane" perpendicular to the neutral axis, our projection takes the shape of a hexagon, with red, yellow, green, cyan, blue, and magenta at its corners. Hue is roughly the angle of the vector to a point in the projection, with red at 0°, while chroma is roughly the distance of the point from the origin.
More precisely, both hue and chroma in this model are defined with respect to the hexagonal shape of the projection. The chroma is the proportion of the distance from the origin to the edge of the hexagon. In the lower part of the adjacent diagram, this is the ratio of the two marked lengths, or alternatively the ratio of the radii of the two hexagons. This ratio is the difference between the largest and smallest values among R, G, or B in a color. To make our definitions easier to write, we'll define these maximum, minimum, and chroma component values as M, m, and C, respectively: M = max(R, G, B), m = min(R, G, B), and C = M − m. To understand why chroma can be written as M − m, notice that any neutral color, with R = G = B, projects onto the origin and so has 0 chroma. Thus if we add or subtract the same amount from all three of R, G, and B, we move vertically within our tilted cube, and do not change the projection. Therefore, any two colors (R, G, B) and (R − m, G − m, B − m) project on the same point, and have the same chroma. The chroma of a color with one of its components equal to zero is simply the maximum of the other two components. This chroma is M in the particular case of a color with a zero component, and M − m in general. The hue is the proportion of the distance around the edge of the hexagon which passes through the projected point, originally measured on the range [0, 1) but now typically measured in degrees [0°, 360°). Mathematically, this definition of hue is written piecewise, in terms of H′ = H/60°: H′ is undefined if C = 0; H′ = ((G − B)/C) mod 6 if M = R; H′ = (B − R)/C + 2 if M = G; and H′ = (R − G)/C + 4 if M = B; then H = 60° × H′. Sometimes, neutral colors (i.e. with C = 0) are assigned a hue of 0° for convenience of representation. These definitions amount to a geometric warping of hexagons into circles: each side of the hexagon is mapped linearly onto a 60° arc of the circle. After such a transformation, hue is precisely the angle around the origin and chroma the distance from the origin: the angle and magnitude of the vector pointing to a color. Sometimes for image analysis applications, this hexagon-to-circle transformation is skipped, and hue and chroma (we'll denote these H2 and C2) are defined by the usual cartesian-to-polar coordinate transformations. The easiest way to derive those is via a pair of cartesian chromaticity coordinates which we'll call α and β: α = R − (G + B)/2 and β = (√3/2)(G − B), giving H2 = atan2(β, α) and C2 = √(α² + β²). (The atan2 function, a "two-argument arctangent", computes the angle from a cartesian coordinate pair.) Notice that these two definitions of hue (H and H2) nearly coincide, with a maximum difference between them for any color of about 1.12°—which occurs at twelve particular hues—and with H = H2 for every multiple of 30°. The two definitions of chroma (C and C2) differ more substantially: they are equal at the corners of our hexagon, but at points halfway between two corners, such as at H = 30°, we have C = 1 but C2 = √3/2 ≈ 0.866, a difference of about 13.4%. Lightness While the definition of hue is relatively uncontroversial—it roughly satisfies the criterion that colors of the same perceived hue should have the same numerical hue—the definition of a lightness or value dimension is less obvious: there are several possibilities depending on the purpose and goals of the representation. Here are four of the most common: The simplest definition is just the arithmetic mean, i.e. average, of the three components, in the HSI model called intensity: I = (R + G + B)/3. This is simply the projection of a point onto the neutral axis—the vertical height of a point in our tilted cube.
The advantage is that, together with Euclidean-distance calculations of hue and chroma, this representation preserves distances and angles from the geometry of the RGB cube. In the HSV "hexcone" model, value is defined as the largest component of a color, our M above: V = M = max(R, G, B). This places all three primaries, and also all of the "secondary colors"—cyan, yellow, and magenta—into a plane with white, forming a hexagonal pyramid out of the RGB cube. In the HSL "bi-hexcone" model, lightness is defined as the average of the largest and smallest color components, i.e. the mid-range of the RGB components: L = (M + m)/2. This definition also puts the primary and secondary colors into a plane, but a plane passing halfway between white and black. The resulting color solid is a double-cone similar to Ostwald's, shown above. A more perceptually relevant alternative is to use luma, Y′, as a lightness dimension. Luma is the weighted average of gamma-corrected R, G, and B, based on their contribution to perceived lightness, long used as the monochromatic dimension in color television broadcast. For sRGB, the Rec. 709 primaries yield Y′709 = 0.2126R′ + 0.7152G′ + 0.0722B′; digital NTSC uses Y′601 = 0.299R′ + 0.587G′ + 0.114B′ according to Rec. 601; and some other primaries are also in use which result in different coefficients, for SDTV (Rec. 601), Adobe RGB, HDTV (Rec. 709), and UHDTV/HDR (Rec. 2020). All four of these leave the neutral axis alone. That is, for colors with R = G = B, any of the four formulations yields a lightness equal to the value of R, G, or B. For a graphical comparison, see fig. 13 below. Saturation When encoding colors in a hue/lightness/chroma or hue/value/chroma model (using the definitions from the previous two sections), not all combinations of lightness (or value) and chroma are meaningful: that is, half of the colors denotable using H, C, and V (or L) fall outside the RGB gamut (the gray parts of the slices in figure 14). The creators of these models considered this a problem for some uses. For example, in a color selection interface with two of the dimensions in a rectangle and the third on a slider, half of that rectangle is made of unused space. Now imagine we have a slider for lightness: the user's intent when adjusting this slider is potentially ambiguous: how should the software deal with out-of-gamut colors? Or conversely, if the user has selected as colorful as possible a dark purple and then shifts the lightness slider upward, what should be done: would the user prefer to see a lighter purple still as colorful as possible for the given hue and lightness, or a lighter purple of exactly the same chroma as the original color? To solve problems such as these, the HSL and HSV models scale the chroma so that it always fits into the range [0, 1] for every combination of hue and lightness or value, calling the new attribute saturation in both cases (fig. 14). To calculate either, simply divide the chroma by the maximum chroma for that value or lightness: SV = C/V and SL = C/(1 − |2L − 1|), with saturation defined as 0 where the chroma is 0. The HSI model commonly used for computer vision, which takes H2 as a hue dimension and the component average I ("intensity") as a lightness dimension, does not attempt to "fill" a cylinder by its definition of saturation. Instead of presenting color choice or modification interfaces to end users, the goal of HSI is to facilitate separation of shapes in an image. Saturation is therefore defined in line with the psychometric definition: chroma relative to lightness, SI = 1 − m/I. See the Use in image analysis section of this article.
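The three saturation definitions can be compared directly in a short Python sketch built from the M, m, and C quantities defined in this derivation (the function name and test color are just for illustration):

# The three saturation definitions discussed above, from the same M, m, C.
def saturations(r, g, b):                     # r, g, b in [0, 1]
    M, m = max(r, g, b), min(r, g, b)
    C = M - m                                 # chroma
    V = M                                     # HSV value
    L = (M + m) / 2                           # HSL lightness
    I = (r + g + b) / 3                       # HSI intensity
    s_hsv = 0 if V == 0 else C / V
    s_hsl = 0 if L in (0, 1) else C / (1 - abs(2 * L - 1))
    s_hsi = 0 if I == 0 else 1 - m / I
    return s_hsv, s_hsl, s_hsi

# A very light yellow: low HSV and HSI saturation, yet HSL saturation of 1.0.
print(saturations(1.0, 1.0, 0.9))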
Using the same name for these three different definitions of saturation leads to some confusion, as the three attributes describe substantially different color relationships; in HSV and HSI, the term roughly matches the psychometric definition, of a chroma of a color relative to its own lightness, but in HSL it does not come close. Even worse, the word saturation is also often used for one of the measurements we call chroma above (C or C2). Examples All parameter values shown below are given as percentages (the interval [0, 1] scaled by a factor of 100), except those for H and H2, which are in the interval [0°, 360°]. Use in end-user software The original purpose of HSL and HSV and similar models, and their most common current application, is in color selection tools. At their simplest, some such color pickers provide three sliders, one for each attribute. Most, however, show a two-dimensional slice through the model, along with a slider controlling which particular slice is shown. The latter type of GUI exhibits great variety, because of the choice of cylinders, hexagonal prisms, or cones/bicones that the models suggest (see the diagram near the top of the page). Several color choosers from the 1990s are shown to the right, most of which have remained nearly unchanged in the intervening time: today, nearly every computer color chooser uses HSL or HSV, at least as an option. Some more sophisticated variants are designed for choosing whole sets of colors, basing their suggestions of compatible colors on the HSL or HSV relationships between them. Most web applications needing color selection also base their tools on HSL or HSV, and pre-packaged open source color choosers exist for most major web front-end frameworks. The CSS 3 specification allows web authors to specify colors for their pages directly with HSL coordinates. HSL and HSV are sometimes used to define gradients for data visualization, as in maps or medical images. For example, the popular GIS program ArcGIS historically applied customizable HSV-based gradients to numerical geographical data. Image editing software also commonly includes tools for adjusting colors with reference to HSL or HSV coordinates, or to coordinates in a model based on the "intensity" or luma defined above. In particular, tools with a pair of "hue" and "saturation" sliders are commonplace, dating to at least the late 1980s, but various more complicated color tools have also been implemented. For instance, the Unix image viewer and color editor xv allowed six user-definable hue (H) ranges to be rotated and resized, included a dial-like control for saturation (SHSV), and a curves-like interface for controlling value (V)—see fig. 17. The image editor Picture Window Pro includes a "color correction" tool which affords complex remapping of points in a hue/saturation plane relative to either HSL or HSV space. Video editors also use these models. For example, both Avid and Final Cut Pro include color tools based on HSL or a similar geometry for use adjusting the color in video. With the Avid tool, users pick a vector by clicking a point within the hue/saturation circle to shift all the colors at some lightness level (shadows, mid-tones, highlights) by that vector. Since version 4.0, Adobe Photoshop's "Luminosity", "Hue", "Saturation", and "Color" blend modes composite layers using a luma/chroma/hue color geometry. These have been copied widely, but several imitators use the HSL (e.g. PhotoImpact, Paint Shop Pro) or HSV geometries instead.
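A minimal sketch of the ubiquitous "hue/saturation" adjustment described above, again using Python's standard colorsys module; the per-pixel function here is a toy stand-in for what image editors apply across a whole raster:

import colorsys

# The classic hue/saturation tool: rotate hue and scale saturation per pixel
# in HSV, then convert back to RGB for display.
def adjust(pixel, hue_shift_deg=0.0, sat_scale=1.0):
    r, g, b = (c / 255 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + hue_shift_deg / 360.0) % 1.0      # colorsys keeps hue in [0, 1)
    s = min(1.0, s * sat_scale)
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(adjust((0, 0, 255), hue_shift_deg=120))  # pure blue rotated 120° gives pure red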
Use in image analysis HSL, HSV, HSI, or related models are often used in computer vision and image analysis for feature detection or image segmentation. The applications of such tools include object detection, for instance in robot vision; object recognition, for instance of faces, text, or license plates; content-based image retrieval; and analysis of medical images. For the most part, computer vision algorithms used on color images are straightforward extensions to algorithms designed for grayscale images, for instance k-means or fuzzy clustering of pixel colors, or Canny edge detection. At the simplest, each color component is separately passed through the same algorithm. It is important, therefore, that the features of interest can be distinguished in the color dimensions used. Because the R, G, and B components of an object's color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant. Starting in the late 1970s, transformations like HSV or HSI were used as a compromise between effectiveness for segmentation and computational complexity. They can be thought of as similar in approach and intent to the neural processing used by human color vision, without agreeing in particulars: if the goal is object detection, roughly separating hue, lightness, and chroma or saturation is effective, but there is no particular reason to strictly mimic human color response. John Kender's 1976 master's thesis proposed the HSI model. Ohta et al. (1980) instead used a model made up of dimensions similar to those we have called I, α, and β. In recent years, such models have continued to see wide use, as their performance compares favorably with more complex models, and their computational simplicity remains compelling. Disadvantages While HSL, HSV, and related spaces serve well enough to, for instance, choose a single color, they ignore much of the complexity of color appearance. Essentially, they trade off perceptual relevance for computation speed, from a time in computing history (high-end 1970s graphics workstations, or mid-1990s consumer desktops) when more sophisticated models would have been too computationally expensive. HSL and HSV are simple transformations of RGB which preserve symmetries in the RGB cube unrelated to human perception, such that the R, G, and B corners are equidistant from the neutral axis, and equally spaced around it. If we plot the RGB gamut in a more perceptually-uniform space, such as CIELAB (see below), it becomes immediately clear that the red, green, and blue primaries do not have the same lightness or chroma, or evenly spaced hues. Furthermore, different RGB displays use different primaries, and so have different gamuts. Because HSL and HSV are defined purely with reference to some RGB space, they are not absolute color spaces: to specify a color precisely requires reporting not only HSL or HSV values, but also the characteristics of the RGB space they are based on, including the gamma correction in use. If we take an image and extract the hue, saturation, and lightness or value components, and then compare these to the components of the same name as defined by color scientists, we can quickly see the difference, perceptually. For example, examine the following images of a fire breather. The original is in the sRGB colorspace.
CIELAB L* is a CIE-defined achromatic lightness quantity (dependent solely on the perceptually achromatic luminance Y, but not the mixed-chromatic components X or Z, of the CIEXYZ colorspace from which the sRGB colorspace itself is derived), and it is plain that this appears similar in perceptual lightness to the original color image. Luma is roughly similar, but differs somewhat at high chroma, where it deviates most from the true achromatic luminance (Y, or equivalently L*) and is influenced by the colorimetric chromaticity (x, y, or equivalently, a*, b* of CIELAB). HSL L and HSV V, by contrast, diverge substantially from perceptual lightness. Fig. 13a. Color photograph (sRGB colorspace). Fig. 13b. CIELAB L* (further transformed back to sRGB for consistent display). Fig. 13c. Rec. 601 luma Y′. Though none of the dimensions in these spaces match their perceptual analogs, the value of HSV and the saturation of HSL are particular offenders. In HSV, the blue primary and white are held to have the same value, even though perceptually the blue primary has somewhere around 10% of the luminance of white (the exact fraction depends on the particular RGB primaries in use). In HSL, a mix of 100% red, 100% green, 90% blue—that is, a very light yellow—is held to have the same saturation as the green primary, even though the former color has almost no chroma or saturation by the conventional psychometric definitions. Such perversities led Cynthia Brewer, expert in color scheme choices for maps and information displays, to criticize these models in remarks to the American Statistical Association. If these problems make HSL and HSV problematic for choosing colors or color schemes, they make them much worse for image adjustment. HSL and HSV, as Brewer mentioned, confound perceptual color-making attributes, so that changing any dimension results in non-uniform changes to all three perceptual dimensions, and distorts all of the color relationships in the image. For instance, rotating the hue of a pure dark blue toward green will also reduce its perceived chroma, and increase its perceived lightness (the latter is grayer and lighter), but the same hue rotation will have the opposite impact on lightness and chroma of a lighter bluish-green (the latter is more colorful and slightly darker). In the example below, the image on the left (a) is the original photograph of a green turtle. In the middle image (b), we have rotated the hue (H) of each color by the same angle, while keeping HSV value and saturation or HSL lightness and saturation constant.
In the image on the right (c), we make the same rotation to the HSL/HSV hue of each color, but then we force the CIELAB lightness (L*, a decent approximation of perceived lightness) to remain constant. Notice how the hue-shifted middle version without such a correction dramatically changes the perceived lightness relationships between colors in the image. In particular, the turtle's shell is much darker and has less contrast, and the background water is much lighter. Because hue is a circular quantity, represented numerically with a discontinuity at 360°, it is difficult to use in statistical computations or quantitative comparisons: analysis requires the use of circular statistics. Furthermore, hue is defined piecewise, in 60° chunks, where the relationship of lightness, value, and chroma to R, G, and B depends on the hue chunk in question. This definition introduces discontinuities, corners which can plainly be seen in horizontal slices of HSL or HSV. Charles Poynton, digital video expert, lists the above problems with HSL and HSV in his Color FAQ, and concludes that such models "are useless for the specification of accurate color". Other cylindrical-coordinate color models The creators of HSL and HSV were far from the first to imagine colors fitting into conic or spherical shapes, with neutrals running from black to white in a central axis, and hues corresponding to angles around that axis. Similar arrangements date back to the 18th century, and continue to be developed in the most modern and scientific models. Color conversion formulae To convert from HSL or HSV to RGB, we essentially invert the steps listed above (as before, R, G, B ∈ [0, 1]). First, we compute chroma, by multiplying saturation by the maximum chroma for a given lightness or value. Next, we find the point on one of the bottom three faces of the RGB cube which has the same hue and chroma as our color (and therefore projects onto the same point in the chromaticity plane). Finally, we add equal amounts of R, G, and B to reach the proper lightness or value. To RGB HSL to RGB Given a color with hue H ∈ [0°, 360°), saturation SL ∈ [0, 1], and lightness L ∈ [0, 1], we first find chroma: C = (1 − |2L − 1|) × SL. Then we can find a point (R1, G1, B1) along the bottom three faces of the RGB cube, with the same hue and chroma as our color, using the intermediate value X = C × (1 − |H′ mod 2 − 1|), with H′ = H/60°, for the second largest component of this color: (R1, G1, B1) = (C, X, 0) if 0 ≤ H′ < 1; (X, C, 0) if 1 ≤ H′ < 2; (0, C, X) if 2 ≤ H′ < 3; (0, X, C) if 3 ≤ H′ < 4; (X, 0, C) if 4 ≤ H′ < 5; and (C, 0, X) if 5 ≤ H′ < 6. In the above, the notation H′ mod 2 refers to the remainder of the Euclidean division of H′ by 2; H′ is not necessarily an integer. When H′ is an integer, the "neighbouring" formula would yield the same result, as X = C or X = 0, as appropriate. Finally, we can find R, G, and B by adding the same amount m = L − C/2 to each component, to match lightness: (R, G, B) = (R1 + m, G1 + m, B1 + m). HSL to RGB alternative The polygonal piecewise functions can be somewhat simplified by a clever use of minimum and maximum values as well as the remainder operation. Given a color with hue H, saturation SL, and lightness L, we first define the function f(n) = L − a × max(−1, min(k − 3, 9 − k, 1)), where k = (n + H/30°) mod 12 and a = SL × min(L, 1 − L). The output R, G, B values (from [0, 1]) are then (R, G, B) = (f(0), f(8), f(4)). The above alternative formulas allow for shorter implementations. In these formulas, the mod operation retains the fractional part of its argument, so k is not necessarily an integer. The base shape is constructed as follows: min(k − 3, 9 − k) is a "triangle" for which values greater than or equal to −1 start from k = 2 and end at k = 10, and the highest point is at k = 6. Then by taking min(…, 1) we change values bigger than 1 to equal 1, and by taking max(−1, …) we change values less than −1 to equal −1. At this point we get something similar to the red shape from fig. 24 after a vertical flip (where the maximum is 1 and the minimum is −1).
The R, G, B functions of H and L use this shape transformed in the following way: modulo-shifted along the k axis (by n, differently for R, G, and B), scaled vertically (by a), and shifted vertically (by L). Fig. 24 can help to build intuition about these shape properties. HSV to RGB Given an HSV color with hue H ∈ [0°, 360°), saturation SV ∈ [0, 1], and value V ∈ [0, 1], we can use the same strategy. First, we find chroma: C = V × SV. Then we can, again, find a point (R1, G1, B1) along the bottom three faces of the RGB cube, with the same hue and chroma as our color, using the intermediate value X = C × (1 − |H′ mod 2 − 1|) for the second largest component and the same piecewise assignment of (R1, G1, B1) as in the HSL case. As before, when H′ is an integer, "neighbouring" formulas would yield the same result. Finally, we can find R, G, and B by adding the same amount m = V − C to each component, to match value: (R, G, B) = (R1 + m, G1 + m, B1 + m). HSV to RGB alternative Given a color with hue H, saturation SV, and value V, first we define the function f(n) = V − V × SV × max(0, min(k, 4 − k, 1)), where k = (n + H/60°) mod 6. The output R, G, B values (from [0, 1]) are then (R, G, B) = (f(5), f(3), f(1)). The above alternative equivalent formulas allow a shorter implementation. In these formulas, the mod operation again retains the fractional part of its argument, and the values of k lie in [0, 6). The base shape is constructed as follows: min(k, 4 − k) is a "triangle" for which non-negative values start from k = 0, the highest point is at k = 2, and it "ends" at k = 4; then we change values bigger than one to one by min(…, 1), then change negative values to zero by max(0, …)—and we get (for SV = V = 1) something similar to the green shape from Fig. 24 (whose maximum value is 1 and minimum value is 0). The R, G, B functions of H and V use this shape transformed in the following way: modulo-shifted along the k axis (by n, differently for R, G, and B), scaled vertically (by V × SV), and shifted vertically (by V). Fig. 24 can help to build intuition about these shape properties. HSI to RGB Given an HSI color with hue H ∈ [0°, 360°), saturation SI ∈ [0, 1], and intensity I ∈ [0, 1], we can use the same strategy, in a slightly different order: C = 3·I·SI/(1 + Z), with Z = 1 − |H′ mod 2 − 1|, where C is the chroma. Then we can, again, find a point (R1, G1, B1) along the bottom three faces of the RGB cube, with the same hue and chroma as our color, using the intermediate value X = C·Z for the second largest component. Overlap (when H′ is an integer) occurs because two ways to calculate the value are equivalent: X = C or X = 0, as appropriate. Finally, we can find R, G, and B by adding the same amount m = I × (1 − SI) to each component, to match lightness: (R, G, B) = (R1 + m, G1 + m, B1 + m). Luma, chroma and hue to RGB Given a color with hue H, chroma C, and luma Y′601, we can again use the same strategy. Since we already have H and C, we can straightaway find our point (R1, G1, B1) along the bottom three faces of the RGB cube. Overlap (when H′ is an integer) occurs because two ways to calculate the value are equivalent: X = C or X = 0, as appropriate. Then we can find R, G, and B by adding the same amount m = Y′601 − (0.30R1 + 0.59G1 + 0.11B1) to each component, to match luma: (R, G, B) = (R1 + m, G1 + m, B1 + m). Interconversion HSV to HSL Given a color with hue H, saturation SV, and value V: the hue is unchanged, L = V × (1 − SV/2), and SL = 0 if L = 0 or L = 1, otherwise SL = (V − L)/min(L, 1 − L). HSL to HSV Given a color with hue H, saturation SL, and lightness L: the hue is unchanged, V = L + SL × min(L, 1 − L), and SV = 0 if V = 0, otherwise SV = 2 × (1 − L/V). From RGB This is a reiteration of the previous conversion. The R, G, B values must be in the range [0, 1]. With maximum component M = max(R, G, B) (i.e. the value V) and minimum component m = min(R, G, B), range C = M − m (i.e. the chroma), and mid-range L = (M + m)/2 (i.e. the lightness), we get the common hue as in the derivation above, and the distinct saturations SV = C/V (or 0 when V = 0) and SL = C/(1 − |2L − 1|) (or 0 when L = 0 or 1). See also TSL color space Notes References Bibliography Agoston's book contains a description of HSV and HSL, and algorithms in pseudocode for converting to each from RGB, and back again. This computer vision literature review briefly summarizes research in color image segmentation, including that using HSV and HSI representations.
Swatches

Mouse over the swatches below to see the R, G, and B values for each swatch in a tooltip.

HSL

HSV

See also

TSL color space

Notes

References

Bibliography

Agoston's book contains a description of HSV and HSL, and algorithms in pseudocode for converting to each from RGB, and back again.
This computer vision literature review briefly summarizes research in color image segmentation, including that using HSV and HSI representations.
This book doesn't discuss HSL or HSV specifically, but is one of the most readable and precise resources about current color science.
The standard computer graphics textbook of the 1990s, this tome has a chapter full of algorithms for converting between color models, in C.
Joblove and Greenberg's paper was the first describing the HSL model, which it compares to HSV.
This book only briefly mentions HSL and HSV, but is a comprehensive description of color order systems through history.
This paper explains how both HSL and HSV, as well as other similar models, can be thought of as specific variants of a more general "GLHS" model. Levkowitz and Herman provide pseudocode for converting from RGB to GLHS and back.
Especially the sections about "Modern Color Models" and "Modern Color Theory". MacEvoy's extensive site about color science and paint mixing is one of the best resources on the web. On this page, he explains the color-making attributes, and the general goals and history of color order systems, including HSL and HSV, and their practical relevance to painters.
This self-published frequently asked questions page, by digital video expert Charles Poynton, explains, among other things, why in his opinion these models "are useless for the specification of accurate color", and should be abandoned in favor of more psychometrically relevant models.
This is the original paper describing the "hexcone" model, HSV. Smith was a researcher at NYIT's Computer Graphics Lab. He describes HSV's use in an early digital painting program.

External links

Demonstrative color conversion applet
HSV Colors by Hector Zenil, The Wolfram Demonstrations Project.
HSV to RGB by CodeBeautify.

Color space
63208508
https://en.wikipedia.org/wiki/Internet%20security%20awareness
Internet security awareness
Internet security awareness or cyber security awareness refers to how much end users know about the cyber security threats their networks face, the risks they introduce, and the security best practices that should guide their behavior. End users are considered the weakest link and the primary vulnerability within a network. Because end users are a major vulnerability, technical means to improve security are not enough; organizations can also seek to reduce the risk of the human element by providing security best practice guidance to end users. Employees can be taught about common threats and how to avoid or mitigate them.

Cyber security awareness, training, education

A cyber security risk mitigating end user program could consist of a combination of multiple approaches, including cyber security awareness, cyber security training, and cyber security education. The approaches differ in depth and aim: awareness focuses attention on security, training builds the skills needed to act securely, and education develops deeper understanding.

Threats

Threat agents or threat actors are the perpetrators of a threat; they usually look for the easiest way to gain access to a network, which is often the human element. These cyber threats can be mitigated, however. Some common threats include, but are not limited to, the following.

Social engineering is when someone uses a compelling story, authority, or other means to convince someone to hand over sensitive information such as usernames and passwords. An end user with cyber security awareness will have the ability to recognize these types of attacks, which improves their ability to avoid them.

Phishing is a form of social engineering. It is a popular attack that attempts to trick users into clicking a link within an email or on a website in hopes that they divulge sensitive information. This attack generally relies on a bulk email approach and the low cost of sending phishing emails. Few targets are fooled, but so many are targeted that this remains a profitable vector.

Spear phishing is an email crafted and sent to a specific person to whom it may appear to be legitimate. It is a form of phishing, but it is more convincing and more likely to succeed than traditional phishing emails because it tailors the email to the victim. Its deployment can range from a bulk automated process, such as accessing the address book of a past victim and sending simple phishing attacks to their contacts (thus appearing to come from a recognized past contact), to more sophisticated hand-written communications targeting specific recipients.

Vishing or voice phishing is a form of social engineering that involves contacting individuals via traditional landlines, telephony (i.e., Voice over IP), automated text-to-speech systems, or other forms of voice communications to trick them into divulging sensitive information like credit card data.

Smishing or SMS phishing is social engineering that leverages SMS or text messages as the vector to trick end users into divulging sensitive information.

Tailgating is a physical security social engineering attack in which an unauthorized individual gains access to a location by following an authorized user into the location without the authorized user's knowledge.

Piggybacking is a physical security social engineering attack in which an unauthorized individual gains access to a location by following an authorized user into the location with the authorized user's knowledge.

Malware is software created and used for malicious intent.
It includes, but is not limited to, viruses, trojan horses, worms, rootkits, spyware, and cryptojacking software.

Ransomware is another cyber threat; attacks are carried out on the computer system but are often the result of a social engineering attack. This type of malware encrypts data and holds it for ransom, which can paralyze an entire computer system.

Internet of Things (IoT) based attacks are a 21st-century form of cyber threat that leverages vulnerabilities in embedded devices such as those found in cars, refrigerators, and smart speakers or digital assistants.

Topics

There are various approaches within a cyber security risk mitigating end user program (see above), and while this article is geared towards cyber security awareness, the following topics could also be leveraged for cyber security training and education. Several different delivery methods can be used to provide cyber security awareness, including posters, guides, tips, videos, and newsletters. Some possible cyber security awareness topics include, but are not limited to, the following.

Anti-Malware Protection: Anti-malware is sometimes referred to as anti-virus. This type of application is used to protect systems against malicious software by preventing, detecting, and deleting the malware. Common malware includes computer viruses, ransomware, rootkits, trojan horses, and worms. End user awareness guidelines include scanning devices for malware and keeping anti-malware definitions up to date.

Data Protection and Privacy: There are various types of data that might be mandated to be protected from unauthorized disclosure, including personally identifiable information (PII), protected health information (PHI), intellectual property (IP), and other sensitive information. Security awareness guidelines include teaching related to data classification, encryption, data anonymization, and data masking or data obfuscation. Permissions and who can access data, including file sharing via email attachments, are additional safeguards that could be discussed. Another data protection control that could be included is backing up data, so it can be restored if the original becomes unavailable.

Device Management: involves knowing how to protect mobile devices and computers, and is also concerned with security related to Bring Your Own Device (BYOD). Security awareness guidelines include encryption and protecting the system with a password, PIN, multi-factor authentication, or other forms of credential. Additional awareness tips include cautioning end users about downloading and installing applications from unknown sources, and reviewing the permissions an application requests; reading reviews and comments about an application before installing it is another awareness tip. The risks of public Wi-Fi are a further discussion point. Device management also relates to maintaining an accurate inventory of assets from purchase to disposition, which includes knowing when to wipe a device and media sanitization.

Incident Response: An incident is any observable event of malicious intent. Security awareness guidelines for end users include what types of events are considered suspicious or malicious, who should be contacted if an incident occurs, and what actions should be taken in the event of an incident.
Internet of Things Security: IoT devices are remotely controllable, resource-constrained devices with embedded sensor chips that interact with people and objects to collect data and provide it to remote sources on the Internet for additional analysis, in an effort to personalize and customize a user's experience. These devices include, but are not limited to, smart speakers, wearable devices like smart watches, surveillance cameras, lights, door locks, thermostats, appliances, and cars. Guidelines include maintaining an asset inventory, patch control, and changing default credentials.

Password Management: A password is a string of secret characters used to authenticate a user's account. Security awareness guidelines suggest presenting requirements for creating a strong password or passphrase, how frequently passwords should be changed, and how to protect passwords. Additionally, guidelines suggest the need to change all default passwords and not to share passwords with others. Additional protection options could include making end users aware of multi-factor authentication, password managers, and various password-related threats like password cracking.

Patching: Software and system changes to update, improve, or resolve weaknesses are usually released via a patch. Security awareness guidelines include the timely installation of security patches, as well as implementing vulnerability assessment and vulnerability management.

Removable Media: Removable media are storage devices that can be added to or removed from a running computer, such as CDs, DVDs, removable SD cards, and USB drives (including flash drives, thumb drives, and external hard drives). Security awareness guidelines include drive encryption and following the policy and guidelines presented at the organizational level regarding the use of personal removable media on organizational systems.

Safe Web Browsing: Security awareness guidelines for securely navigating websites include looking for the padlock icon in the URL bar before entering sensitive information like credentials, credit card information, or personally identifiable information. Another visual indicator is "https" in the web address. The padlock and "https" indicate that the information entered will be encrypted in transit. Guidance could also be shared on setting privacy options in the browser, or using the incognito option, to limit the information shared; yet another guideline is to consider using a virtual private network (VPN).

Social Engineering involves interacting with humans in hopes that they will disclose sensitive information. Security awareness guidelines include not opening suspicious emails from unrecognized senders, not clicking on suspicious links in emails or on websites, not opening attachments in emails, not disclosing information, and not responding to suspicious emails or contacts provided therein.

See also

Cybersecurity
Cybersecurity standards
Cybercrime
Countermeasure (computer)
Social engineering (security)
Threat (computer)
Malware

References

Internet security
Cybercrime
Cyberwarfare
Cyber security awareness
52693565
https://en.wikipedia.org/wiki/2017%20in%20science
2017 in science
A number of significant scientific events occurred in 2017. The United Nations declared 2017 the International Year of Sustainable Tourism for Development. Events January 4 January A study published in the journal Science Advances casts further doubt on the existence of a recent "pause" in global warming, with more evidence that ocean temperatures have been underestimated. After 60 wins and 0 losses over 7 days, Google reveals that a mysterious player of Go, named "Master", is actually an improved version of its AlphaGo AI. Researchers at Michigan State University demonstrate a chemical compound and potential new drug able to reduce the spread of melanoma by 90%. NASA announces its two choices for the next Discovery Program missions – the Lucy mission, to visit several asteroids, including six Jupiter Trojans; and the Psyche mission, to visit the large metallic asteroid 16 Psyche. 5 January – A Japanese insurance firm, Fukoku Mutual Life Insurance, announces that 34 of its office workers will be replaced with IBM's Watson AI. 6 January A large portion of the Larsen C ice shelf is reported to be on the verge of breaking away from Antarctica. It is expected to become one of the top 10 biggest icebergs ever recorded, leaving the whole shelf vulnerable to future collapse, which would raise global sea levels by 10 cm. Researchers at MIT design one of the strongest lightweight materials known, by compressing and fusing flakes of graphene. The new material is highly porous; computer simulations predict it is possible to make materials with a density of just 5 percent of steel, but 10 times stronger. NASA scientists release an image of the Earth and Moon as viewed 127 million miles away from the planet Mars by the Mars Reconnaissance Orbiter. 9 January – Researchers at King's College London report a way of using an Alzheimer's drug to stimulate the renewal of living stem cells in tooth pulp. 10 January – Researchers discover that glia, not neurons, are most affected by brain aging. 11 January A new species of gibbon, named Hoolock tianxing, is identified in southwest China. Carnegie Mellon University announces "Libratus", an artificial intelligence program designed to beat humans at poker. 12 January – Scientists at the Scripps Research Institute report the discovery of TZAP, a protein that binds the ends of chromosomes and determines how long telomeres can be. 14 January Researchers at the University of Sydney use big data to predict how a quantum system will change and to prevent its breakdown from occurring. SpaceX resumes flights, following a launch pad explosion in September 2016. A reusable Falcon 9 rocket successfully delivers 10 satellites into orbit for a client, Iridium, before landing on a drone ship in the ocean. 16 January Astronomers working on the Japanese Akatsuki space probe mission report detecting a possible gravity wave that occurred on the planet Venus in December 2015. Researchers publish evidence that humans first entered North America around 24,000 BP (Before Present), during the height of the last ice age. This is 10,000 years earlier than previously thought. 17 January – The Chinese government announces plans for the first prototype exascale supercomputer by the end of the year. 18 January Researchers at Harvard develop a customisable "soft robot" that fits around a heart and helps it beat, potentially offering a new treatment option for patients with heart failure.
Independent analyses by NASA and the National Oceanic and Atmospheric Administration (NOAA) show that 2016 was the hottest year on record, at 0.99 °C (1.78 °F) above the mid-20th century global mean average. This follows record warmth in the two preceding years 2015 and 2014. 19 January A study published in Nature warns that some of the most important crops in the U.S. are at risk of "abrupt and substantial yield losses" from rising temperatures later this century, with harvests potentially declining by 20% for wheat, 40% for soybean and almost 50% for maize. Researchers at Northwestern University develop an AI system that performs at human levels on a standard visual intelligence test. 23 January Researchers demonstrate a prototype 3D printer that can print fully functional human skin. Scientists at the Scripps Research Institute create the first stable semisynthetic organism. This can hold two synthetic bases, called X and Y, in its genetic code indefinitely. The team says it could lead to entirely new life forms using synthetic DNA, with many potential uses in medicine. 26 January Researchers at the Salk Institute create the first human-pig hybrid embryo, containing genetic information from both species. Scientists at Harvard report creating a small amount of metallic hydrogen for the first time, a century after it was theorised. The claim is disputed. 27 January – A report from the EU's Joint Research Centre concludes that if global temperatures rise by 4 °C, the flood risk in countries representing more than 70% of the global population and of the global GDP will increase by more than 500%. 30 January – News reports that a new safe battery has been invented. It is based on solid lithium, and is claimed to have twice the storage capacity of lithium-ion batteries. It is featured on a newly released PBS NOVA TV program entitled Search for the Super Battery. February 1 February Researchers led by the University of Sussex publish the first practical blueprint for how to build a quantum computer. Researchers develop a new blue-phase liquid crystal that could triple the sharpness of TVs, computer screens, and other displays while also reducing the power needed to run the device. 6 February – The first stable helium compound is synthesized, Na2He. Helium is the most unreactive element. 7 February A mysterious "white dwarf pulsar" is announced, the first known star of its kind, located 380 light years from Earth. Asteroid 2017 BQ6 passed within 6.6 lunar distances of Earth at 6:36 UT. 8 February The genome of the quinoa food crop is decoded by researchers at King Abdullah University of Science and Technology. NASA publishes a report outlining the mission goals of an unmanned Europa surface lander and which instruments the probe may need. 9 February – Researchers at Japan's National Institute of Advanced Industrial Science and Technology demonstrate a robotic drone bee able to pollinate flowers. 10 February A study in the journal Anthropocene Review concludes that human activity is changing the climate 170 times faster than natural processes. A study by the University of Buffalo, using four decades of evidence, finds no link between immigration and higher rates of crime. 14 February – A committee from the US National Academy of Sciences and the National Academy of Medicine gives cautious backing to gene editing of human embryos. 15 February – A study published in Nature finds that oxygen levels in the oceans have declined by 2% globally in the last 50 years, due to warming and stratification. 
16 February NASA's Dawn mission finds evidence of organic material on Ceres, the first clear detection of organic molecules from orbit on a main belt body. (related image) Researchers from the University of Texas at Austin develop ultra-flexible, nanoelectronic thread (NET) brain probes, designed to achieve more reliable long-term neural recording than existing probes and without causing scar formation when implanted. 21 February – Scientists describe a technique to grow large quantities of inner ear progenitor cells that convert into hair cells, which could potentially treat hearing loss. 22 February – Astronomers announce the discovery of seven Earth-sized exoplanets, which may all be in the habitable zone, orbiting TRAPPIST-1, an ultra-cool dwarf star, slightly larger than the planet Jupiter, located about 40 light-years from Earth. March 1 March – Researchers report evidence of possibly the oldest forms of life on Earth. Putative fossilized microorganisms were discovered in hydrothermal vent precipitates in the Nuvvuagittuq belt of Quebec, Canada, that may have lived as early as 4.280 billion years ago, not long after the oceans formed 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. 2 March – The University of Alberta announces details of DeepStack, a new artificial intelligence program able to beat professional human players at poker for the first time. 6 March – IBM announces "IBM Q", an initiative to build commercially available universal quantum computing systems. 7 March The Sentinel-2B satellite is launched as part of the European Space Agency's Copernicus programme. NASA's Cassini mission reveals new images of Pan, a small moon of Saturn, which is now shown to have a bizarre 'flying saucer' shape. 8 March – Scientists at the University of Texas report a new phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space. 9 March Researchers at the Institute for Basic Science publish details of a single atom memory storage system. The CDH2 gene is found to be implicated in sudden death among young people and athletes. A study by the Harvard-Smithsonian Center for Astrophysics suggests that fast radio bursts in distant galaxies could be evidence of advanced alien technology. 10 March Scientists report that extraterrestrial dust particles have been identified to be all over planet Earth. According to one of the researchers, “Once I knew what to look for, I found them everywhere.” A study published in Science Advances concludes that the world's oceans are warming 13% faster than previously thought, and accelerating. 16 March – Scientists report that a potential drug candidate, trodusquemine, can restore some heart muscle function after a heart attack. As of 2017, no drug exists that is able to do this. 17 March A new drug, evolocumab, is shown to prevent heart attacks and strokes by dramatically cutting bad cholesterol. A report by the International Energy Agency (IEA) finds that CO2 emissions have remained flat for the third year in a row, despite continued global economic growth. 22 March Scientists report a new way of classifying the dinosaur family tree, based on newer and more evidence than available earlier. According to the new classification, the original dinosaurs, arising 200 million years ago, were small, two-footed omnivorous animals with large grasping hands. Descendants (for the non-avian dinosaurs) lasted until 66 million years ago. 
NASA reports that sea ice extent has reached record lows at both the Arctic and Antarctic. 23 March – Dutch scientists report a drug that can reverse aspects of ageing in old mice – restoring their stamina, coat of fur and even some organ function – by flushing out "senescent" cells in the body that have stopped dividing. Human trials are planned. 24 March – Scientists at the University of New South Wales publish details of experiments on mice that suggest a treatment is possible for DNA damage from aging and radiation, based on the metabolite NAD+. 27 March – Scientists in Australia announce the discovery of the world's largest known dinosaur footprint, substantially longer than the previous record-holder. 30 March – SpaceX conducts the world's first reflight of an orbital-class rocket booster. April 3 April – Researchers at the University of Manchester demonstrate a graphene-based sieve able to filter seawater, which could improve desalination technologies. 10 April Australia's Great Barrier Reef is reported to be experiencing a second consecutive mass coral bleaching event, affecting two-thirds of its area. Researchers at Washington State University demonstrate a fluid with negative mass. 11 April – The telescopes of the Event Horizon Telescope finish data-taking in their attempt to image the region close to a black hole. Data analysis is expected to take several months. 12 April – University of Waterloo researchers capture the first composite image of a dark matter bridge connecting galaxies. 13 April NASA scientists announce that molecular hydrogen has been detected in plumes erupting from Enceladus, a moon of the planet Saturn, suggesting possible hydrothermal activity and the possible consequent existence of primitive life forms. The University of California, Berkeley, creates a device that pulls water from dry air, powered only by the Sun. Even under conditions of relatively low (20–30 percent) humidity, it is able to produce 2.8 liters of water over a 12-hour period. 19 April – Astronomers report the discovery of LHS 1140b, a rocky "super-Earth" in the habitable zone of a red dwarf star, LHS 1140, which astronomers say is among the best ever candidates in the search for extraterrestrial life. 20 April – Researchers from the Medical Research Council (United Kingdom) led by Giovanna Mallucci identify two drugs, trazodone and dibenzoylmethane (DBM), that could potentially block cell death in all neurodegenerative brain diseases. 22 April – The March for Science takes place, timed to coincide with Earth Day. 24 April – Wax moth larvae are reported to be able to biodegrade polyethylene, one of the toughest, most resilient, and most used plastics. The creatures may be a solution to the growing problem of plastic waste. 25 April – Researchers in the U.S. demonstrate an artificial womb-like device on lambs, which could one day be used for saving premature human babies. 26 April – Scientists report evidence suggesting that ancient humans were present at the Cerutti Mastodon site on the North American continent 130,000 years ago, far earlier than the roughly 15,000 years ago previously inferred from genetic studies. May 1 May – The University of Utah reveals a new robotic drill system for greatly speeding up surgical procedures. One type of complex cranial surgery could be done in a fiftieth of the normal time, decreasing from two hours to just two and a half minutes. 4 May The European X-ray Free Electron Laser (XFEL) produces its first beams of X-rays.
The first synthetic retina using soft biological tissues is created by a student at the University of Oxford. 9 May Scientists report new remains (two adults and a child) of Homo naledi, an extinct species of hominin, in a second chamber, named "Lesedi", of the "Rising Star Cave" system, near the earlier chamber, named "Dinaledi". In addition, remains of Homo naledi have been reported to be dated between 236,000 and 335,000 years ago. Scientists publish evidence that the earliest known life on land may have been found in 3.48-billion-year-old geyserite and other related mineral deposits (often found around hot springs and geysers) uncovered in the Pilbara Craton of Western Australia. 10 May Researchers at the University of Minnesota demonstrate a 3D-printed 'bionic skin' that could give robots a sense of touch, or lead to electronics printed on real human skin. A study of nearly 6,000 adults finds that high levels of physical activity equate to a nine-year biological aging advantage: those who engaged in a minimum of 30 to 40 minutes of running, five days a week, were found to have longer telomeres. 15 May – Researchers report that glints of light from Earth, seen as twinkling by an orbiting satellite a million miles away, are reflected light from ice crystals in the atmosphere. The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets. 16 May SESAME, a synchrotron light source in Jordan built by a collaboration including Israel, the Palestinian National Authority and Iran, is inaugurated. ARM and the Center for Sensorimotor Neural Engineering (CSNE) announce plans to develop a "brain-implantable" system-on-a-chip (SoC) for bi-directional brain-computer interfaces (BBCI). The 10-year project is aimed at treating neurodegenerative disorders. 17 May – Human blood stem cells are grown in the laboratory for the first time by researchers at Boston Children's Hospital. 18 May – Researchers publish evidence of a rapid greening of the Antarctic region over the last 50 years. Mosses that once grew less than 1 mm a year are now found to be growing more than 3 mm a year on average. An Australian-Chinese research team creates the world's thinnest hologram, fabricated using a simple and fast direct laser writing system, with potential for use in a range of electronic products. 20 May – Astronomers report that Tabby's Star, about 1,300 light-years from Earth, has again begun dimming unusually; several explanations have been considered, including the possibility that intelligent extraterrestrial life may have been constructing a Dyson swarm. More dimming events happened in the following months; as of 5 September 2017, a further dimming event had begun, the largest of four that year, dimming the star's brightness by as much as 3%. 23 May Researchers at Harvard University report that eating up to six bars of chocolate a week could decrease the risk of a potentially fatal heart condition by approximately one quarter. Scientists propose a new type of astronomical object called a "synestia" – a huge, spinning, donut-shaped mass of hot, vaporised rock, formed as planet-sized objects smash into each other. 24 May The launch date of NASA's Psyche probe is brought forward, to target a more efficient trajectory, launching in 2022 and arriving in 2026 with a Mars gravity assist in 2023.
Researchers in Switzerland create artificial viruses that can be used to target cancer. These designer viruses alert the immune system and cause it to send killer cells to help fight the tumor. The results, published in Nature Communications, provide a basis for innovative cancer treatments. 25 May – An article in Science magazine claims the US Nuclear Regulatory Commission relied on faulty analysis to justify its refusal to adopt a critical measure for protecting Americans from nuclear-waste fires at dozens of reactor sites around the country. Radioactivity from such a fire could force approximately 8 million people to relocate and result in $2 trillion in damages. 26 May – Construction begins on the European Extremely Large Telescope. 27 May – At the Future of Go Summit in China, Google's DeepMind AlphaGo AI program beats the world's number one Go player, Ke Jie, in the third of three matches. 30 May Researchers at the Scripps Research Institute announce a way to structurally modify vancomycin to make the antibiotic more powerful. A survey of 352 experts in artificial intelligence finds a consensus estimate of a 50% chance of AI outperforming humans in all tasks within 45 years, and of AI automating all human jobs within 120 years. 31 May – Muon g-2, a precision experiment to measure the g-factor of muons, starts taking data. June 1 June SpaceX founder and CEO Elon Musk publishes Making Humans a Multi-Planetary Species, his plans for the future colonisation of Mars. Astronomers report the detection of a third gravitational wave, named GW170104, further supporting the theory of general relativity presented in 1916 by Albert Einstein. NASA reports that the Curiosity rover provided evidence of an ancient lake in Gale crater on Mars that could have been favorable for microbial life; the ancient lake was stratified, with shallows rich in oxidants and depths poor in oxidants, and provided many different types of microbe-friendly environments at the same time. NASA further reports that the Curiosity rover will continue to explore higher and younger layers of Mount Sharp in order to determine how the lake environment in ancient times on Mars became the drier environment of more modern times. 5 June – Astronomers at The Ohio State University and Vanderbilt University detect a planet so hot that its heat rivals most stars. With a day-side temperature of 4,600 kelvin (more than 7,800 degrees Fahrenheit), KELT-9b is hotter than most stars, and only 1,200 kelvin (about 2,000 degrees Fahrenheit) cooler than our own Sun. 7 June – Scientists report evidence, based on fossil remains found at Jebel Irhoud in Morocco, in northwestern Africa, that Homo sapiens may have originated about 300,000 years ago, over 100,000 years earlier than previously thought. 9 June – Researchers at the University of Zurich report the creation of the largest virtual universe ever simulated, consisting of 25 billion galaxies generated from 2 trillion digital particles. 12 June Researchers at Vrije Universiteit Amsterdam identify seven risk genes for insomnia. Two new moons – S/2016 J1 and S/2017 J1 – are reported to be orbiting Jupiter, bringing the gas giant's total number of known natural satellites to 69. 15 June Chinese scientists report the successful transmission of entangled photons from space to Earth, using the satellite Micius.
A study by the universities of Coventry and Radboud finds that meditation, yoga and Tai Chi can 'reverse' the molecular reactions in DNA which cause ill-health and depression. 18 June – The European Society of Cardiology reports a vaccine that lowers cholesterol in mice, which may offer hope of immunising against cardiovascular disease. 19 June – Astronomers report evidence of a possible tenth Mars-sized planet residing at the edge of the Solar System. 20 June NASA's Kepler Space Telescope team publish 219 new exoplanet candidates, 10 of which are near-Earth size and orbiting in their star's habitable zone. The European Space Agency confirms the Laser Interferometer Space Antenna (LISA) as the third large-class mission in its science programme, with launch expected in 2034. 22 June – A study of snail neurons, published in Current Biology, suggests memories that trigger anxiety and PTSD could be 'erased' without affecting normal memory of past events. 26 June Research by Cornell University suggests that rising sea levels will displace 1.4 billion people by 2060 and 2 billion by 2100. Remote Sensing Systems (RSS), a satellite record of lower tropospheric temperature, undergoes a major revision, showing nearly 30% faster warming since 1979. 29 June – A study published in the journal Science concludes that unmitigated climate change will exacerbate inequality in the USA, with southern states losing up to 20 percent of their income by century's end. 30 June – The Japan Aerospace Exploration Agency (JAXA) reveals plans to send an astronaut to the Moon by 2030. July 1 July – Researchers from the University of Alabama at Birmingham warn that brainwave-sensing headsets are vulnerable to hacking and could reveal a user's passwords if their brainwaves are being monitored. 4 July – Scientists report evidence that Homo sapiens may have migrated out of Africa 270,000 years ago, much earlier than the 70,000 years ago previously thought. 5 July – A study in the journal Science Advances shows that climate sensitivity is greater than previously thought, and that lower estimates of future temperatures do not take into account long-term patterns of warming. 6 July Physicists at the CERN Large Hadron Collider report the detection of the Ξcc++ baryon (named with the Greek letter Xi), a new hadron: a composite particle containing two charm quarks and one up quark. Researchers report that the surface of the planet Mars may be more toxic to microorganisms, especially a common terrestrial type, Bacillus subtilis, than thought earlier. This is based on studies with perchlorates, common on Mars, in a simulated Martian ultraviolet atmosphere. 7 July – Researchers at Queensland University of Technology announce the development of a genetically modified banana with higher levels of vitamin A, which could improve the nutritional content of bananas in Uganda. 10 July NASA's Juno spacecraft obtains close-range images of Jupiter's Great Red Spot. Scientists from Stanford University publish evidence that a sixth mass extinction of life on Earth is already underway. 11 July – Researchers at George Washington University reveal a new prototype solar cell with 44.5 percent efficiency. 12 July A huge iceberg, one of the largest ever recorded, detaches from the Larsen C ice shelf in Antarctica. The discovery of the smallest star able to sustain fusion, EBLM J0555-57Ab, is announced; its diameter is just slightly larger than Saturn's. Scientists at Harvard use the CRISPR gene-editing system to store a GIF animation in the DNA of bacteria.
Research published in Royal Society Open Science reveals that six of the world's large carnivores – the African wild dog, cheetah, Ethiopian wolf, lion, red wolf and tiger – have lost over 90% of their historic range. 14 July – Astrophysicists report that tardigrade micro-animals may be one of the most resilient life forms on Earth since they may be able to withstand global mass extinctions due to astrophysical events, such as supernovae, gamma-ray bursts, large asteroid impacts, and passing-by stars. 17 July Astronomers confirm the detection of strange radio signals from Ross 128, a nearby red dwarf star. Researchers at the University of Tokyo demonstrate a breathable nanoscale mesh with an electronic sensor that can be worn on the skin for a week without discomfort, and could potentially monitor a person's health continuously over a long period. Researchers in California report how carbon sequestration in the ocean can be made 500 times faster, by simply adding a common enzyme to the process. 18 July – A computer simulation by the University of Manchester suggests that Tyrannosaurus rex moved slower than was thought previously, with its size and weight limiting the dinosaur to a maximum of 20 km/h (12 mph). 19 July – Archaeologists publish evidence that Aboriginal people have been in Australia for at least 65,000 years, suggesting the arrival of humans on the continent was up to 18,000 years earlier than previously thought. 21 July – Asteroid 2017 OO1 passes close to Earth. 25 July – Researchers at Ulsan National Institute of Science and Technology (UNIST) announce a new record efficiency of 22.1% for perovskite solar cells. 26 July The Breakthrough Starshot initiative announces that it has developed and launched the world's smallest spacecraft, precursors of "StarChip", known as "Sprites", measuring just 3.5 cm and weighing only four grams, but containing solar panels, computers, sensors, and radios. Researchers discover that stem cells in the brain's hypothalamus govern how fast aging occurs in the body. The first gene editing of human embryos in the USA is reported to have taken place, using CRISPR. 27 July Astronomers announce that half the matter of the Milky Way galaxy may have come from other distant galaxies. Astronomers report the first measurement of a gamma ray burst (namely, GRB 160625B) as it happened. 28 July – An organic compound, acrylonitrile, or vinyl cyanide, (C2H3CN), possibly essential for life by being related to cell membrane and vesicle structure formation, is reported to have been found on Titan, moon of Saturn. August 1 August Scientists present a detailed description and 3D model image of possibly the first flower that lived about 140 million years ago. Virgo joins LIGO in the measurement of gravitational waves, improving the sensitivity. 2 August For the first time, scientists use CRISPR in human embryos to remove faulty DNA responsible for a hereditary heart condition. Scientists at Edinburgh Napier University report a treatment based on antimicrobial peptides that could potentially lead to a cure for the common cold. Astronomers report that WASP-121b is the first exoplanet found to contain water (in the form of hot water molecules) in an extrasolar planetary stratosphere (i.e., an atmospheric layer in which temperatures increase as the altitude increases). WASP-121b is a "hot Jupiter" in the constellation Puppis, and is about 880 light-years (light travel distance) from Earth. 4 August – In a letter to Darwin Life, Inc. 
and New Hope Fertility Center, the FDA warns that the "three parent baby" technique should not be marketed in the U.S. 5 August – NASA celebrates the fifth anniversary of the Curiosity rover mission landing, and related exploratory accomplishments, on the planet Mars. (Videos: Curiosity First Five Years (02:07); Curiosity POV: Five Years Driving (05:49); Curiosity Discoveries About Gale Crater (02:54)) 8 August – Patagotitan mayorum, one of the largest ever dinosaurs, is officially named by researchers. 10 August – Researchers at Brown University report the transmission of data through a terahertz multiplexer at 50 gigabits per second, which could lead to a new generation of ultra-fast Wi-Fi. 11 August – A deep learning algorithm is reported to be capable of visually identifying thousands of plant species. 12 August – Scientists discover 91 volcanoes located two kilometres below the West Antarctic Ice Sheet, making it the largest volcanic region on Earth. 14 August – A study by Ben-Gurion University suggests that the use of 'smiley' emoticons in workplace emails may reduce the perception of competence, and could even undermine information sharing. 21 August Researchers at MIT's Plasma Science and Fusion Center (PSFC) working with colleagues in Belgium and the UK find a new way to generate very high-energetic ions to study nuclear fusion. A team of scientists from all over the globe finds that there may indeed be diamond precipitation deep inside icy giant planets like Neptune and Uranus. 22 August Scientists at the American Chemical Society meeting in Washington demonstrate "cyborg bacteria" able to outperform plants at photosynthesis. Engineers in the U.S. demonstrate how to make ultra-compact antennas for wireless communication 100 times smaller than their current size. 23 August A peer-reviewed study by Harvard University concludes that petroleum company Exxon misled the public about the dangers of climate change for nearly 40 years. Astronomers using ESO's Very Large Telescope produce the most detailed ever image of a star, Antares, and create the first map of surface motion on a star other than our Sun. 24 August – In a study published by Nature, researchers at the University of Manchester show that magnetic hysteresis is possible in individual molecules at −213 °C. This proves that storing data with single-molecule magnets is more feasible than previously thought, and could theoretically give 100 times higher density than current technologies. 26 August – Astronomers detect 15 repeating Fast Radio Bursts coming from FRB 121102 located in a dwarf galaxy about 3 billion light-years away from Earth. The researchers note that FRB 121102 is presently in a "heightened activity state, and follow-on observations are encouraged, particularly at higher radio frequencies". 28 August – Scientists break the record for coldest temperature of molecules, at 50 millionths of a degree above absolute zero. 31 August – Astronomers at the Hubble Space Telescope report the first hints of possible water content within the TRAPPIST-1 multiplanetary system, which includes seven Earth-sized exoplanets, about 40 light-years away from Earth. September 1 September – The European X-ray Free Electron Laser (XFEL) is officially opened in the German city of Hamburg. 4 September – Astronomers report the discovery of an intermediate-mass black hole with 100,000 solar masses hiding in a gas cloud near the heart of the Milky Way, ranking it as the second largest black hole ever seen in the galaxy. 
5 September NASA celebrates 40 years of the Voyager missions, which included the launch of Voyager 1 on 5 September 1977 and the earlier launch of Voyager 2 on 20 August 1977; the probes are presently traveling beyond the outer Solar System toward interstellar space. Scientists report that the Curiosity rover detected boron, an essential ingredient for life on Earth, on the planet Mars. Such a finding, along with previous discoveries that water may have been present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars. 6 September – A research team led by Andrea Morello at the University of New South Wales announces a new quantum computing design, the "flip-flop qubit", which makes it much easier to integrate quantum computing with conventional electronic circuits than existing approaches. 7 September – The International Astronomical Union officially approves the naming of 14 features on the surface of Pluto. These are the first geological features on the dwarf planet to be named following the close flyby by the New Horizons spacecraft in July 2015. 11 September – The second phase of experimentation at the upgraded Wendelstein 7-X fusion reactor begins, and first plasmas are produced. 13 September A study published in Nature concludes that Asia's mountain glaciers will lose at least a third of their mass through global warming by 2100. A quantum computer at IBM is used to calculate energy levels in beryllium hydride. The calculation method is an important step towards the simulation of larger molecules beyond the reach of classical supercomputers. 15 September – The Cassini-Huygens spacecraft ends its 20-year mission to explore the planet Saturn, its rings and moons. The spacecraft is directed into Saturn's atmosphere to disintegrate. 25 September A 35-year-old man who had been in a vegetative state for 15 years after a car accident is reported to have shown signs of consciousness after neurosurgeons implanted a vagus nerve stimulator into his chest. The Australian government announces that it will establish a national space agency. 27 September The LIGO and Virgo collaborations announce the detection of a fourth binary black hole merger, GW170814. For the first time, three detectors recorded the signal, leading to a more precise localization of the source in the sky. Researchers from Oxford, Münster and Exeter Universities create photonic computer chips, which use light rather than electricity, to imitate the way a brain's synapses operate. 28 September – The last recovered image taken by the Rosetta spacecraft, which closely studied the comet 67P/Churyumov–Gerasimenko (67P) and later impacted the surface of the comet, is reported. 30 September – NASA reports that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in mid-September 2017.
3 October The Nobel Prize in Physics is awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. 4 October The Nobel Prize in Chemistry is awarded to Jacques Dubochet, Joachim Frank and Richard Henderson for "developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution". NASA announces that the likely explanation for the unusual dimming events related to KIC 8462852 (or Tabby's Star) is that an "uneven ring of dust" orbits the star. 5 October – Astronomers identify C/2017 K2, the most distant comet ever observed on its way into our Solar System. 9 October A study by the Carnegie Institution for Science finds that wind farms in the North Atlantic could, in theory, provide sufficient energy to meet all of humanity's current needs during wintertime. Scientists at Rutgers University find an efficient way to enhance the nutritional value of corn, by inserting a bacterial gene from E. coli that stimulates production of a key nutrient called methionine, an amino acid usually found in meat. 10 October – A study by Imperial College London and the World Health Organisation finds there has been a tenfold increase in childhood and adolescent obesity since 1975, with the number of obese children likely to exceed the number underweight by 2022. 12 October – The dwarf planet Haumea is confirmed to have a ring system, the first time such a feature has been discovered around a trans-Neptunian object. 16 October – Astronomers officially announce the detection of a gravitational wave, named GW170817, associated with the merger of two neutron stars. GW170817 was also associated with a gamma-ray burst, GRB 170817A, detected 1.7 seconds later, and a visible-light transient, AT 2017gfo, observed 11 hours afterwards. 17 October – Qualcomm announces the first 5G mobile connection, which has a connection speed of 1 Gbit/s. 18 October Alphabet announces an improved version of the AlphaGo artificial intelligence, developed by its subsidiary Google DeepMind. A German study finds a 75% reduction in flying insect biomass over the past 25 years, suggesting the possibility of large-scale ecological collapse. Scientists announce the discovery of fossil teeth in Germany resembling those of Australopithecus afarensis, suggesting hominins of this kind may have existed 9.7 million years ago, far earlier than the 3.9 million years ago previously thought, and outside Africa, contrary to earlier belief. 19 October NASA announces that the Dawn spacecraft mission around the dwarf planet Ceres will be extended until the hydrazine fuel in the spacecraft runs out, possibly in the second half of 2018; afterwards, the spacecraft is expected to remain in a stable orbit around Ceres indefinitely. Measurement of the magnetic moment of antiprotons provides further evidence for CPT symmetry, the hypothesis that matter and antimatter behave identically when time and space are reversed at the same time. 20 October – IBM shifts the goal for quantum supremacy by demonstrating that classical computers can simulate larger quantum systems than previously thought, using a supercomputer to simulate up to 56 qubits. 25 October An improved version of the genetic engineering technique CRISPR is published in the journals Science and Nature.
In a report, the United States Bureau of Labor Statistics estimates the loss of tens of thousands of US manufacturing jobs, due to the effects of artificial intelligence by 2026, but at the same time estimates growth in fields like software engineering. 26 October European researchers discover a flaw in the way ocean temperatures have been estimated, suggesting they were colder in the past than previously thought, and that the current period of warming is unparalleled over the last 100 million years. A study by the University of Melbourne finds that sea levels could rise 1.3m globally unless coal power ends by 2050. NASA reports an object, named A/2017 U1, that is believed to be the first known interstellar asteroid or comet to pass through our Solar System. 30 October – The World Meteorological Organisation (WMO) reports that concentrations of CO2 in the Earth's atmosphere reached a record high of 403.3 parts per million in 2016. 31 October Researchers at the United States Department of Energy set a new world efficiency record for quantum dot solar cells, at 13.4 percent. Astronomers working with the Next-Generation Transit Survey report the discovery of NGTS-1b, a large confirmed hot Jupiter-sized extrasolar planet orbiting NGTS-1, a small red dwarf star about half the mass and radius of the Sun, every 2.65 days. Daniel Bayliss, of the University of Warwick, and lead author of the study describing the discovery of NGTS-1b, stated, "The discovery of NGTS-1b was a complete surprise to us—such massive planets were not thought to exist around such small stars – importantly, our challenge now is to find out how common these types of planets are in the Galaxy, and with the new Next-Generation Transit Survey facility we are well-placed to do just that." November 1 November – NASA reports that the first evidence of an exoplanet was noted as early as 1917. The evidence was found after reviewing archival materials discovered in storerooms at the Carnegie Observatories in Pasadena, California. 2 November Scientists report that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips were associated with greater brain changes. Archaeologists using muography report the discovery of a large "void" inside the Great Pyramid of Giza, Egypt. The Tapanuli orangutan (Pongo tapanuliensis), a new species of orangutan, is described in the journal Current Biology. Researchers at the University of Rochester Medical Center identify four genes – KRAS, CDKN2A, SMAD4, and TP53 – responsible for how long patients survive with pancreatic cancer. Researchers from the University of Aberdeen report that a single dose of the drug Trodusquemine can "melt away" fat inside arteries. 6 November – A study by the Earth Institute at Columbia University finds that swapping where crops are grown around the world could potentially feed an extra 825 million people. 7 November Fossils of tiny shrew-like creatures are discovered in southern England, dating back 145 million years, making them the oldest-known ancestors of most living mammals. UK scientists report that resveratrol analogues, when applied to senescent cells in the laboratory, made the cells look and behave younger, with longer telomeres and the ability to divide again. 8 November Astronomers report the first known case of a star, IPTF14hls, that exploded multiple times, over a period of at least 50 years. 
Through the use of a gene therapy technique, doctors in Germany are able to treat a boy with junctional epidermolysis bullosa, a disease which leaves skin fragile and easily susceptible to blister formation. 9 November – Scientists from the Portuguese Institute of the Sea and the Atmosphere capture a rare frilled shark, a "living fossil" species whose lineage dates back nearly 80 million years, in waters off the coast of the town of Portimão. 10 November IBM reports building a quantum computer with 50 qubits. Researchers at Texas A&M University and the Los Alamos National Laboratory discover a new type of material, which is possibly more resistant to the effects of helium in a nuclear fusion reaction than current materials. 13 November Global carbon emissions are reported to be rising again after a three-year plateau. Experiments on mice show that variants in a gene called ankyrin-B, carried by millions of Americans, could cause cells to store fat, potentially leading to obesity. The FDA approves "Abilify MyCite", the first drug in the U.S. with a digital ingestion tracking system that records when the medication was taken, via a sensor embedded in the pill. California Institute of Technology researchers manage to stabilize a ring of plasma in open air for the first time. 15 November – A study led by Newcastle University finds that sea life in some of the deepest parts of the Pacific Ocean – as far down as 11 km (7 miles) – is contaminated with plastic pollution. 20 November – A study by the University of Edinburgh suggests that life on Earth may have originated from biological particles carried by streams of space dust. 22 November – In a breakthrough for antibiotic resistance, researchers at the Université de Montréal in Canada report a way of designing better molecules that make it harder for plasmids to move between bacteria. 23 November – A study by the University of Leeds finds that shrinking glacier cover across Iceland could lead to increased volcanism in the region, by reducing pressure on the Earth's surface. 28 November A study by Northwell Health identifies dozens of new genetic variations associated with a person's general cognitive ability, while also noting a genetic overlap with longevity. Facebook begins to use artificial intelligence to help identify users potentially at risk of suicide and to help connect them with appropriate mental health and support resources. The Voyager 1 spacecraft, the most distant man-made object, fires its trajectory thrusters for the first time since 1980, extending its expected lifetime by two or three more years. 29 November – A study published in Nature finds that inhibiting RNA polymerase III (Pol III), a common enzyme found in all mammals, including humans, can extend the lifespan of flies and worms. 30 November – Researchers from Imperial College London announce a breakthrough in optical computing, with a 10,000-fold reduction in the distance over which light can interact.
Scientists establish a new method to estimate the magnitude of large earthquakes in minutes instead of hours, based on measurements of the gravitational field in the region.
4 December – The MICROSCOPE satellite collaboration publishes its first results: the equivalence principle was measured to hold true within a precision of 10⁻¹⁵, improving on prior measurements by an order of magnitude.
5 December – Google's new AlphaZero AI beats a champion chess program after teaching itself the game in four hours.
6 December – Astronomers report detecting the most distant known quasar, ULAS J1342+0928, which contains the most distant supermassive black hole, at a reported redshift of z = 7.54, surpassing the redshift of 7 for the previously known most distant quasar, ULAS J1120+0641. Construction of the ITER nuclear fusion project reaches the halfway point. Scientists at Harvard University's Wyss Institute for Biologically Inspired Engineering announce a 100-fold increase in the complexity of "DNA bricks" that can self-assemble into 3D nanostructures. A paper by the Carnegie Institution for Science concludes that the climate models with the most severe impacts for later this century are likely to be the most accurate, suggesting that the IPCC reports may be underestimating future trends.
7 December – Physicists at the University of Illinois at Urbana–Champaign report the discovery of a new form of matter called excitonium. Massachusetts Institute of Technology researchers discover a method to make bacteria more vulnerable to a type of antibiotics known as quinolones.
8 December – Scientists report that Homo sapiens may have migrated out of Africa as early as 120,000 years ago, particularly into Asia, well before the traditional exiting date of 60,000 years ago.
9 December – Astronomers report a brightening of X-ray emissions from GW170817 (gravitational wave)/GRB 170817A (gamma-ray burst)/SSS17a (optical astronomical transient).
11 December – University College London reports that a genetic error responsible for Huntington's disease has been corrected in patients for the first time, using an experimental drug called IONIS-HTTRx. Vanderbilt University researchers develop a new hyperlens material, which makes it possible to resolve details as small as a virus on the surface of living cells. University of Waterloo researchers devise a new battery technology that could let car batteries hold dramatically more energy, extending the travel range per charge of electric vehicles.
13 December – Engineers at Columbia University manage to create artificial graphene in a nano-fabricated semiconductor structure. The firm Carbon Engineering demonstrates, for the first time, the synthesis of gasoline and diesel fuels using carbon dioxide captured from the air and hydrogen derived from water, with its "Air to Fuels" technology.
14 December – British doctors use a new form of gene therapy to treat haemophilia A, a genetic defect which leads to constant bleeding. NASA astronomers report the detection of Kepler-90i, a super-Earth exoplanet and the eighth in a multi-exoplanetary system orbiting the star Kepler-90, which now hosts the most exoplanets found to date orbiting a single extrasolar star. The exoplanet was found in archival data gathered by the Kepler space telescope with the aid of deep learning, a class of machine learning algorithms. Archaeological excavations at Lechaio in Greece reveal new evidence of large-scale Ancient Roman engineering.
15 December – Researchers at the University of New South Wales publish a complete design for a quantum computer chip that can be manufactured using mostly standard industry processes and components. Researchers at the United States Department of Energy's Princeton Plasma Physics Laboratory and Princeton University develop software that uses new artificial intelligence techniques to predict possible disruptions in nuclear fusion reactions more efficiently. In Brazil, the first baby is born from a transplanted cadaveric womb.
16 December – The Pentagon confirms the existence of the Advanced Aviation Threat Identification Program (AATIP), a secret investigatory effort funded from 2007 to 2012 by the United States Government to research and study unidentified flying objects.
17 December – The peer-reviewed scientific journal Nano Letters publishes details of a memory storage device only one atomic layer thick.
18 December – Scientists report that 3.45-billion-year-old Australian rocks once contained microorganisms, the earliest direct evidence of life on Earth.
19 December – The FDA approves Luxturna, the first gene therapy for an inherited condition in the U.S., for patients with a form of retinal dystrophy.
21 December – The company General Fusion reports achieving first plasma on its newest plasma injector, named PI3, the world's largest. Astronomers report that RZ Piscium, a star that brightens and dims in a highly erratic manner, is associated with a large amount of infrared radiation, evidence suggesting that substantial amounts of gas, dust and debris are orbiting the star, possibly as a result of the disruption, or destruction, of local planets by the star.
Awards
Queen Elizabeth Prize for Engineering – Eric Fossum, George E. Smith, Nobukazu Teranishi and Michael Francis Tompsett
Deaths
4 January – Heinz Billing, German physicist (b. 1914)
10 January – Oliver Smithies, British-American biochemist and Nobel Prize winner (b. 1925)
16 January – Eugene Cernan, American astronaut, Apollo 17 (b. 1934)
7 February – Hans Rosling, Swedish statistician (b. 1948)
8 February – Peter Mansfield, British physicist and Nobel Prize winner (b. 1933)
20 February – Mildred Dresselhaus, American physicist, Presidential Medal of Freedom and National Medal of Science laureate, "queen of carbon science" (b. 1930)
21 February – Kenneth Arrow, American economist and Nobel Prize winner (b. 1921)
26 February – Ludvig Faddeev, Russian physicist and mathematician (b. 1934)
7 March – Hans Georg Dehmelt, German-American physicist and Nobel Prize winner (b. 1922)
8 March – George Andrew Olah, Hungarian-American chemist and Nobel Prize winner (b. 1927)
29 March – Alexei Abrikosov, Russian-American physicist and Nobel Prize winner (b. 1928)
14 July – Maryam Mirzakhani, Iranian mathematician and the first female Fields Medalist (b. 1977)
5 September – Nicolaas Bloembergen, Dutch-American physicist and Nobel Prize winner (b. 1920)
30 September – Vladimir Voevodsky, Russian mathematician and Fields Medalist (b. 1966)
18 November – Fotis Kafatos, Greek biologist and founding president of the European Research Council (b. 1940)
See also 2017 in spaceflight, List of emerging technologies, List of years in science
References
External links
(32496) 2000 WX182
(32496) 2000 WX182, provisional designation 2000 WX182, is a Jupiter trojan from the Trojan camp, approximately 50 kilometers in diameter. It was discovered on 18 November 2000, by astronomers with the LINEAR program at the Lincoln Laboratory's Experimental Test Site near Socorro, New Mexico, in the United States. The dark Jovian asteroid is among the 100 largest Jupiter trojans and has a rotation period of 23.3 hours. It has not been named since its numbering in November 2001.
Orbit and classification
2000 WX182 is a Jupiter trojan in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the gas giant's L5 Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.9–5.7 AU once every 12 years and 1 month (4,419 days; semi-major axis of 5.27 AU). Its orbit has an eccentricity of 0.08 and an inclination of 30° with respect to the ecliptic. The body's observation arc begins with a precovery published by the Digitized Sky Survey and taken at Palomar Observatory in January 1955, more than 45 years prior to its official discovery observation at Socorro.
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 30 November 2001. It has not yet been named.
Physical characteristics
2000 WX182 is an assumed carbonaceous C-type asteroid. Most Jupiter trojans are D-types, with the remainder being mostly C- and P-type asteroids. It has a typical V–I color index of 0.95 and a B–R color index of 1.23.
Rotation period
In November 2013, a rotational lightcurve of 2000 WX182 was obtained from eleven nights of photometric observations by Robert Stephens at the Center for Solar System Studies in Landers, California. Lightcurve analysis gave a rotation period of 23.3 hours with a brightness amplitude of 0.19 magnitude.
Diameter and albedo
According to the surveys carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer and the Japanese Akari satellite, 2000 WX182 measures 48.02 and 51.63 kilometers in diameter, and its surface has an albedo of 0.070 and 0.080, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 50.77 kilometers based on an absolute magnitude of 10.2.
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form (info)
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (30001)-(35000) – Minor Planet Center
Asteroid (32496) 2000 WX182 at the Small Bodies Data Ferret
Comparison of C Sharp and Java
This article compares two programming languages: C# and Java. While the focus of this article is mainly the languages and their features, such a comparison will necessarily also consider some features of platforms and libraries. For a more detailed comparison of the platforms, see Comparison of the Java and .NET platforms. C# and Java are similar languages that are typed statically, strongly, and manifestly. Both are object-oriented, designed with semi-interpretation or runtime just-in-time compilation, and both are curly brace languages, like C and C++.
Types
Unified type system
Both languages are statically typed with class-based object orientation. In Java, the primitive types are special in that they are not object-oriented and could not have been defined using the language itself. They also do not share a common ancestor with reference types. The Java reference types all derive from a common root type. C# has a unified type system in which all types (besides unsafe pointers) ultimately derive from a common root type. Consequently, all types implement the methods of this root type, and extension methods defined for the object type apply to all types, even primitive int literals and delegates. This allows C#, unlike Java, to support objects with encapsulation that are not reference types.
In Java, compound types are synonymous with reference types; methods cannot be defined for a type unless it is also a class reference type. In C#, the concepts of encapsulation and methods have been decoupled from the reference requirement, so that a type can support methods and encapsulation without being a reference type. Only reference types support virtual methods and specialization, however.
Both languages support many built-in types that are copied and passed by value rather than by reference. Java calls these types primitive types, while they are called simple types in C#. The primitive/simple types typically have native support from the underlying processor architecture. The C# simple types implement several interfaces and consequently offer many methods directly on instances of the types, even on the literals. The C# type names are also merely aliases for Common Language Runtime (CLR) types. The C# System.Int64 type is exactly the same type as the long type; the only difference is that the former is the canonical .NET name, while the latter is a C# alias for it. Java does not offer methods directly on primitive types. Instead, methods that operate on primitive values are offered through companion primitive wrapper classes. A fixed set of such wrapper classes exists, each of which wraps one of the fixed set of primitive types. As an example, the Java Long type is a reference type that wraps the primitive long type. They are not the same type, however.
Data types
Numeric types
Signed integers
Both Java and C# support signed integers with bit widths of 8, 16, 32 and 64 bits. They use the same names/aliases for the types, except for the 8-bit integer, which is called a byte in Java and an sbyte (signed byte) in C#.
Unsigned integers
C# supports unsigned integer types in addition to the signed ones. The unsigned types are byte, ushort, uint and ulong for 8, 16, 32 and 64 bit widths, respectively. Unsigned arithmetic on these types is supported as well. For example, adding two unsigned integers (uints) still yields a uint as a result, not a long or signed integer. Java does not feature unsigned integer types. In particular, Java lacks a primitive type for an unsigned byte.
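As a minimal illustrative sketch of C#'s unsigned arithmetic (not from the original article; assumes any recent C# compiler):

using System;

class UnsignedDemo
{
    static void Main()
    {
        byte b = 0xFF;                          // 255: C#'s byte is unsigned (Java's byte is signed)
        uint x = 4000000000u;                   // too large for a signed 32-bit int, fine for uint
        uint sum = unchecked(x + 294967296u);   // unsigned arithmetic wraps modulo 2^32, yielding 0
        Console.WriteLine(b + " " + sum);
        // checked(x + x) would throw OverflowException instead of wrapping.
    }
}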
Java's byte type, by contrast, is sign-extended, which is a common source of bugs and confusion. Unsigned integers were left out of Java deliberately because James Gosling believed that programmers would not understand how unsigned arithmetic works: "In programming language design, one of the standard problems is that the language grows so complex that nobody can understand it. One of the little experiments I tried was asking people about the rules for unsigned arithmetic in C. It turns out nobody understands how unsigned arithmetic in C works. There are a few obvious things that people understand, but many people don't understand it."
High-precision decimal numbers
C# has a type and literal notation for high-precision (28 decimal digits) decimal arithmetic that is appropriate for financial and monetary calculations. Unlike the float and double data types, decimal fractional numbers such as 0.1 can be represented exactly in the decimal representation. In the float and double representations, such numbers often have non-terminating binary expansions, making those representations more prone to round-off errors. While Java lacks such a built-in type, the Java library does feature an arbitrary-precision decimal type. This is not considered a language type and it does not support the usual arithmetic operators; rather, it is a reference type that must be manipulated using the type's methods. See more about arbitrary-size/precision numbers below.
Advanced numeric types
Both languages offer library-defined arbitrary-precision arithmetic types for arbitrary-size integers. Only Java has a data type for arbitrary-precision decimal calculations, while only C# has a type for working with complex numbers. In both languages, the number of operations that can be performed on the advanced numeric types is limited compared to the built-in IEEE 754 floating point types. For instance, none of the arbitrary-size types support square roots or logarithms. C# allows library-defined types to be integrated with existing types and operators by using custom implicit/explicit conversions and operator overloading (see the example in the section Integration of library-defined types).
Characters
Both languages feature a native char (character) datatype as a simple type. Although the char type can be used with bit-wise operators, this is performed by promoting the char value to an integer value before the operation. Thus, the result of a bitwise operation is a numeric type, not a character, in both languages.
Built-in compound data types
Both languages treat strings as (immutable) objects of reference type. In both languages, the type contains several methods to manipulate strings, parse, format, etc. In both languages, regular expressions are considered an external feature and are implemented in separate classes.
Both languages' libraries define classes for working with dates and calendars in different cultures. The Java java.util.Date is a mutable reference type, whereas the C# System.DateTime is a struct value type. C# additionally defines a TimeSpan type for working with time periods. Both languages support date and time arithmetic according to different cultures.
User-defined value type (struct)
C# allows the programmer to create user-defined value types, using the struct keyword. Unlike classes and like the standard primitives, such value types are passed and assigned by value rather than by reference.
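A minimal sketch of value-type semantics, using a hypothetical Point struct (names invented for illustration):

using System;

struct Point
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }
}

class StructDemo
{
    static void Main()
    {
        Point a = new Point(1, 2);
        Point b = a;            // copies the whole value; no reference is shared
        b.X = 99;
        Console.WriteLine(a.X); // prints 1: a and b are independent copies
    }
}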
Value types can also be part of an object (either as a field or boxed), or stored in an array, without the memory indirection that normally exists for class types. Because value types have no notion of a null value and can be used in arrays without initialization, they always come with an implicit default constructor that essentially fills the struct memory space with zeroes. The programmer can only define additional constructors with one or more arguments. Value types do not have virtual method tables, and because of that (and the fixed memory footprint), they are implicitly sealed. However, value types can (and frequently do) implement interfaces. For example, the built-in integer types implement several interfaces. Apart from the built-in primitive types, Java does not include the concept of value types.
Enumerations
Both languages define enumerations, but they are implemented in fundamentally different ways. As such, enumerations are one area where tools designed to automatically translate code between the two languages (such as Java-to-C# converters) fail. C# has implemented enumerations in a manner similar to C, that is, as wrappers around the bit-flags implemented in primitive integral types (int, byte, short, etc.). This has performance benefits and improves interaction with C/C++ compiled code, but provides fewer features and can lead to bugs if low-level value types are directly cast to an enumeration type, as is allowed in the C# language. Therefore, it is seen as syntactic sugar. In contrast, Java implements enumerations as a full-featured collection of instances, requiring more memory and not aiding interaction with C/C++ code, but providing additional features in reflection and intrinsic behavior.
In both C# and Java, programmers can use enumerations in a switch statement without conversion to a string or primitive integer type. However, C# disallows implicit fall-through unless the case statement does not contain any code, as it is a common cause of hard-to-find bugs. Fall-through must be explicitly declared using a goto statement.
Delegates, method references
C# implements object-oriented method pointers in the form of delegates. A delegate is a special type that can capture a reference to a method. This reference can then be stored in a delegate-type variable or passed to a method through a delegate parameter for later invocation. C# delegates support covariance and contravariance, and can hold a reference to any signature-compatible static method, instance method, anonymous method or lambda expression. Delegates should not be confused with closures and inline functions. The concepts are related because a reference to a closure/inline function must be captured in a delegate reference to be useful at all. But a delegate does not always reference an inline function; it can also reference existing static or instance methods. Delegates form the basis of C# events, but should not be confused with those either.
Delegates were deliberately left out of Java because they were considered unnecessary and detrimental to the language, and because of potential performance issues. Instead, alternative mechanisms are used. The wrapper pattern, which resembles the delegates of C# in that it allows the client to access one or more client-defined methods through a known interface, is one such mechanism. Another is the use of adapter objects using inner classes, which the designers of Java argued are a better solution than bound method references. See also the example C# delegates and equivalent Java constructs.
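A minimal sketch of a C# delegate (the Transform name is invented for illustration):

using System;

delegate int Transform(int x);       // a delegate type: an object-oriented method pointer

class DelegateDemo
{
    static int Double(int x) => x * 2;

    static void Main()
    {
        Transform t = Double;        // capture a reference to a static method
        Console.WriteLine(t(21));    // invokes Double: prints 42
        t = x => x + 1;              // the same variable can hold a lambda expression
        Console.WriteLine(t(41));    // prints 42
    }
}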
Lifted (nullable) types
C# allows value/primitive/simple types to be "lifted" to allow the special null value in addition to the type's native values. A type is lifted by adding a ? suffix to the type name; this is equivalent to using the Nullable<T> generic type, where T is the type to be lifted. Conversions are implicitly defined to convert between values of the base and the lifted type. The lifted type can be compared against null, or it can be tested for HasValue. Also, lifted operators are implicitly and automatically defined based on their non-lifted base, where – with the exception of some boolean operators – a null argument will propagate to the result.
Java does not support type lifting as a concept, but all of the built-in primitive types have corresponding wrapper types, which do support the null value by virtue of being reference types (classes). According to the Java spec, any attempt to dereference the null reference must result in an exception being thrown at run time, specifically a NullPointerException. (It would not make sense to dereference it otherwise, because, by definition, it points to no object in memory.) This also applies when attempting to unbox a variable of a wrapper type that evaluates to null: the program will throw an exception, because there is no object to be unboxed – and thus no boxed value to take part in the subsequent computation. The behaviors thus differ: in C#, a lifted operator such as the lifted * propagates the null value of an operand, whereas in Java, unboxing the null reference throws an exception. Not all C# lifted operators propagate null unconditionally when one of the operands is null; specifically, the boolean operators have been lifted to support ternary logic, matching the three-valued logic of SQL. The Java boolean operators do not support ternary logic, nor is it implemented in the base class library.
Late-bound (dynamic) type
C# features a late-bound dynamic type that supports no-reflection dynamic invocation, interoperability with dynamic languages, and ad-hoc binding to (for example) document object models. The dynamic type resolves member access dynamically at runtime, as opposed to statically/virtually at compile time. The member lookup mechanism is extensible, with traditional reflection as a fall-back mechanism. There are several use cases for the dynamic type in C#:
Less verbose use of reflection: by casting an instance to the dynamic type, members such as properties, methods, and events can be directly invoked on the instance without using the reflection API directly.
Interoperability with dynamic languages: the dynamic type comes with hub-and-spoke support for implementing dynamically typed objects and common runtime infrastructure for efficient member lookup.
Creating dynamic abstractions on the fly: for instance, a dynamic object could provide simpler access to document object models such as XML or XHTML documents.
Java does not support a late-bound type. The use cases for the C# dynamic type have different corresponding constructs in Java: for dynamic late-bound by-name invocation of preexisting types, reflection should be used; for interoperability with dynamic languages, some form of interoperability API specific to that language must be used. The Java virtual machine platform does have multiple dynamic languages implemented on it, but there is no common standard for how to pass objects between languages. Usually this involves some form of reflection or reflection-like API, as in the way JavaFX script objects are used from Java. For creating and interacting with objects entirely at runtime, e.g., interaction with a document object model abstraction, a specific abstraction API must be used. See also the example in the section Interoperability with dynamic languages.
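A minimal sketch of the dynamic type (not from the original article; a failed lookup compiles, but throws Microsoft.CSharp.RuntimeBinder.RuntimeBinderException at run time):

using System;

class DynamicDemo
{
    static void Main()
    {
        dynamic d = "hello";
        Console.WriteLine(d.Length);   // member lookup deferred to run time: prints 5
        d = new[] { 1, 2, 3 };
        Console.WriteLine(d.Length);   // same call site, now bound to the array's Length: 3
        // Console.WriteLine(d.NoSuchMember);  // would compile, but fail at run time
    }
}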
Pointers
Java precludes pointers and pointer arithmetic within the Java runtime environment. The Java language designers reasoned that pointers are one of the main features that enable programmers to put bugs in their code, and chose not to support them. Java does not allow for directly passing and receiving objects/structures to/from the underlying operating system, and thus does not need to model objects/structures with such a specific memory layout, a layout that would frequently involve pointers. Java's communication with the underlying operating system is instead based on the Java Native Interface (JNI), where communication with, and adaptation to, an underlying operating system is handled through an external glue layer.
While C# does allow use of pointers and corresponding pointer arithmetic, the C# language designers had the same concerns that pointers could potentially be used to bypass the strict rules for object access. Thus, C# by default also precludes pointers. However, because pointers are needed when calling many native functions, pointers are allowed in an explicit unsafe mode. Code blocks or methods that use pointers must be marked with the unsafe keyword, and the compiler requires the /unsafe switch to allow compiling such code. Assemblies that are compiled using the /unsafe switch are marked as such and may only execute if explicitly trusted. This allows using pointers and pointer arithmetic to directly pass and receive objects to/from the operating system or other native APIs using the native memory layout for those objects, while also isolating such potentially unsafe code in specifically trusted assemblies.
Reference types
In both languages, references are a central concept. All instances of classes are by reference. While not directly evident in the language syntax per se, both languages support the concept of weak references. An instance that is only referenced by weak references is eligible for garbage collection, just as if there were no references at all. In both languages this feature is exposed through the associated libraries, even though it is really a core runtime feature. Along with weak references, Java also has soft references. They are much like weak references, but the JVM will not deallocate softly-referenced objects until the memory is needed.
Arrays and collections
Arrays and collections are concepts featured by both languages. The syntax used to declare and access arrays is identical, except that C# has added syntax for declaring and manipulating multidimensional arrays. Multidimensional arrays can in some cases increase performance because of increased locality (there is one pointer dereference instead of one for every dimension of the array, as is the case for jagged arrays). However, since all array element access in a multidimensional array requires multiplication/shift between the two or more dimensions, this is an advantage only in very random-access scenarios. Another difference is that an entire multidimensional array can be allocated with a single application of operator new, while jagged arrays require loops and allocations for every dimension. However, Java provides a syntactic construct for allocating a jagged array with regular lengths; the loops and multiple allocations are then performed by the virtual machine and need not be explicit at the source level.
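A minimal sketch contrasting C#'s rectangular (multidimensional) and jagged arrays:

using System;

class ArrayDemo
{
    static void Main()
    {
        int[,] rect = new int[3, 4];    // one contiguous block, allocated in one step
        rect[2, 3] = 42;

        int[][] jagged = new int[3][];  // an array of arrays: one allocation per row
        for (int i = 0; i < 3; i++)
            jagged[i] = new int[4];
        jagged[2][3] = 42;

        Console.WriteLine(rect[2, 3] + " " + jagged[2][3]);  // 42 42
    }
}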
Both languages feature an extensive set of collection types that includes various ordered and unordered types of lists, maps/dictionaries, sets, etc. Java also supports the C/C++ array declaration syntax, in which the brackets may be placed after the variable name, e.g., int numbers[] as an alternative to int[] numbers.
Expressions and operators
Boxing and unboxing
Both languages allow automatic boxing and unboxing, i.e., they allow for implicit casting between any primitive type and the corresponding reference type. In C#, the primitive types are subtypes of the Object type. In Java this is not true; any given primitive type and the corresponding wrapper type have no specific relationship with each other, except for autoboxing and unboxing, which act as syntactic sugar for interchanging between them. This was done intentionally, to maintain backward compatibility with prior versions of Java, in which no automatic casting was allowed and the programmer worked with two separate sets of types: the primitive types, and the wrapper (reference) type hierarchy.
This difference has the following consequences. First of all, in C#, primitive types can define methods, such as an override of Object's ToString() method. In Java, this task is accomplished by the primitive wrapper classes. Secondly, in Java an extra cast is needed whenever one tries to directly dereference a primitive value, as it will not be boxed automatically. The expression ((Integer)42).toString() will convert an integer literal to a string in Java, while 42.ToString() performs the same operation in C#. This is because the latter is an instance call on the primitive value 42, while the former is an instance call on an object of type java.lang.Integer. Finally, another difference is that Java makes heavy use of boxed types in generics (see below).
Statements
Syntax
Both languages are considered "curly brace" languages in the C/C++ family. Overall, the syntaxes of the languages are very similar. The syntax at the statement and expression level is almost identical, with obvious inspiration from the C/C++ tradition. At the type definition level (classes and interfaces), some minor differences exist. Java is explicit about extending classes and implementing interfaces, while C# infers this from the kind of types a new class/interface derives from. C# supports more features than Java, which to some extent is also evident in the syntax, which specifies more keywords and more grammar rules than Java's.
Keywords and backward compatibility
As the languages evolved, the language designers for both languages have faced situations where they wanted to extend the languages with new keywords or syntax. New keywords in particular may break existing code at the source level, i.e., older code may no longer compile if presented to a compiler for a later version of the language. Language designers are keen to avoid such regressions. The designers of the two languages have been following different paths when addressing this problem.
Java language designers have avoided new keywords as much as possible, preferring instead to introduce new syntactic constructs that were not legal before, or to reuse existing keywords in new contexts. This way they did not jeopardize backward compatibility.
An example of the former can be found in how the for loop was extended to accept iterable types. An example of the latter can be found in how the extends and (especially) the super keywords were reused for specifying type bounds when generics were introduced in Java 1.5. At one time (Java 1.4), a new keyword, assert, was introduced that had not been reserved as a keyword before. This had the potential to render previously valid code invalid, if, for instance, the code used assert as an identifier. The designers chose to address this problem with a four-step solution: 1) introducing a compiler switch that indicates whether Java 1.4 or later should be used, 2) only marking assert as a keyword when compiling as Java 1.4 and later, 3) defaulting to 1.3 to avoid rendering previous (non-1.4-aware) code invalid, and 4) issuing warnings if the keyword is used in Java 1.3 mode, in order to allow developers to change the code.
C# language designers have introduced several new keywords since the first version. However, instead of defining these keywords as global keywords, they define them as context-sensitive keywords. This means that even when they introduced (among others) the partial and yield keywords in C# 2.0, the use of those words as identifiers remained valid, as there is no possible clash between the use as a keyword and the use as an identifier, given the context. Thus, the present C# syntax is fully backward compatible with source code written for any previous version, without requiring the language version to be specified.
Object-oriented programming
Both C# and Java are designed from the ground up as object-oriented languages using dynamic dispatch, with syntax similar to C++ (C++ in turn derives from C). Neither language is a superset of C or C++, however.
Partial class
C# allows a class definition to be split across several source files using a feature called partial classes. Each part must be marked with the keyword partial. All the parts must be presented to the compiler as part of a single compilation. Parts can reference members from other parts. Parts can implement interfaces, and one part can define a base class. The feature is useful in code generation scenarios (such as user interface (UI) design), where a code generator can supply one part and the developer another part to be compiled together. The developer can thus edit their part without the risk of a code generator overwriting that code at some later time. Unlike the class extension mechanism, a partial class allows circular dependencies among its parts, as they are guaranteed to be resolved at compile time. Java has no corresponding concept.
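A minimal sketch of a partial class; the two parts would normally live in separate files (the Widget name is invented for illustration):

using System;

partial class Widget    // part 1: could come from a code generator
{
    public void Hello() => Console.WriteLine("Hello from part 1");
}

partial class Widget    // part 2: hand-written code
{
    public void World() => Console.WriteLine("World from part 2");
}

class PartialDemo
{
    static void Main()
    {
        var w = new Widget();
        w.Hello();
        w.World();      // both parts compile into a single Widget class
    }
}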
Inner and local classes
Both languages allow inner classes, where a class is defined lexically inside another class. However, in each language these inner classes have rather different semantics. In Java, unless the inner class is declared static, a reference to an instance of an inner class carries a reference to the outer class with it. As a result, code in the inner class has access to both the static and non-static members of the outer class. To create an instance of a non-static inner class, the instance of the enclosing outer class must be named. This is done via a qualified new operator introduced in JDK 1.3: outerClassInstance.new InnerClass(). This can be done in any class that has a reference to an instance of the outer class.
In C#, an inner class is conceptually the same as a normal class. In a sense, the outer class only acts as a namespace. Thus, code in the inner class cannot access non-static members of the outer class unless it does so through an explicit reference to an instance of the outer class. Programmers can declare the inner class private to allow only the outer class to have any access to it.
Java provides another feature called local classes or anonymous classes, which can be defined within a method body. These are generally used to implement an interface with only one or two methods, which are typically event handlers. However, they can also be used to override virtual methods of a superclass. The methods in those local classes have access to the outer method's local variables declared final. C# satisfies the use cases for these by providing anonymous delegates; see the section on event handling for more about this.
C# also provides a feature called anonymous types/classes, but it is rather different from Java's concept with the same name. It allows the programmer to instantiate a class by providing only a set of names for the properties the class should have, and an expression to initialize each. The types of the properties are inferred from the types of those expressions. These implicitly declared classes are derived directly from object.
Event
C# multicast delegates are used with events. Events provide support for event-driven programming and are an implementation of the observer pattern. To support this, there is a specific syntax to define events in classes, and operators to register, unregister or combine event handlers. In Java, comparable functionality is conventionally implemented with listener interfaces following the observer pattern.
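A minimal sketch of C# events (the Button name is invented for illustration):

using System;

class Button
{
    public event EventHandler Clicked;               // backed by a multicast delegate
    public void SimulateClick() => Clicked?.Invoke(this, EventArgs.Empty);
}

class EventDemo
{
    static void Main()
    {
        var button = new Button();
        button.Clicked += (sender, e) => Console.WriteLine("handler 1");
        button.Clicked += (sender, e) => Console.WriteLine("handler 2");
        button.SimulateClick();                      // both registered handlers run
    }
}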
Operator overloading and conversions
Operator overloading and user-defined casts are separate features that both aim to allow new types to become first-class citizens in the type system. By using these features in C#, types such as Complex and decimal have been integrated so that the usual operators, like addition and multiplication, work with the new types. Unlike C++, C# does restrict the use of operator overloading, prohibiting it for the operators new, ( ), ||, &&, =, and any variations of compound statements like +=. But compound operators will call overloaded simple operators, like -= calling - and =. Java does not include operator overloading, nor custom conversions, in order to prevent abuse of the feature and to keep the language simple.
Indexer
C# also includes indexers, which can be considered a special case of operator overloading (like the C++ operator[]), or parameterized get/set properties. An indexer is a property named this[] that uses one or more parameters (indexes); the indices can be objects of any type:

myList[4] = 5;
string name = xmlNode.Attributes["name"];
orders = customerMap[theCustomer];

Java does not include indexers. The common Java pattern involves writing explicit getters and setters where a C# programmer would use an indexer.
Fields and initialization
Object initialization
In both C# and Java, an object's fields can be initialized either by variable initializers (expressions that can be assigned to variables where they are defined) or by constructors (special subroutines that are executed when an object is being created). In addition, Java contains instance initializers, which are anonymous blocks of code with no arguments that are run after the explicit (or implicit) call to a superclass's constructor but before the constructor body is executed.
C# initializes object fields in the following order when creating an object:
1. derived static fields
2. derived static constructor
3. derived instance fields
4. base static fields
5. base static constructor
6. base instance fields
7. base instance constructor
8. derived instance constructor
Some of the above steps may not be applicable (e.g., if an object has no static fields). Derived fields are those that are defined in the object's direct class, while base fields are those defined in one of the object's superclasses. Note that an object's representation in memory contains all fields defined in its class or any of its superclasses, even if some fields in superclasses are defined as private. It is guaranteed that any field initializers take effect before any constructors are called, since both the instance constructor of the object's class and its superclasses are called after field initializers are called. There is, however, a potential trap in object initialization when a virtual method is called from a base constructor: the overridden method in a subclass may reference a field that is defined in the subclass, but that field may not have been initialized if it is assigned in the subclass's constructor, which is called after the constructor of its base class.
In Java, the order of initialization is as follows:
1. invocation of another constructor (either of the object's class or of the object's superclass)
2. instance variable initializers and instance initializers (in the order they appear in the source code)
3. the constructor body
Like in C#, a new object is created by calling a specific constructor. Within a constructor, the first statement may be an invocation of another constructor. If this is omitted, the call to the argumentless constructor of the superclass is added implicitly by the compiler. Otherwise, either another overloaded constructor of the object's class can be called explicitly, or a superclass constructor can be called. In the former case, the called constructor will again call another constructor (either of the object's class or of its superclass) and the chain sooner or later ends up at the call to one of the constructors of the superclass.
After another constructor is called (causing direct invocation of the superclass constructor, and so forth, down to the Object class), instance variables defined in the object's class are initialized. Even if there are no variable initializers explicitly defined for some variables, these variables are initialized to default values. Note that instance variables defined in superclasses are already initialized by this point, because they were initialized by a superclass constructor when it was called (either by the constructor's code, or by variable initializers performed before the constructor's code, or implicitly to default values). In Java, variable initializers are executed according to their textual order in the source file. Finally, the constructor body is executed. This ensures the proper order of initialization, i.e., the fields of a base class finish initialization before the initialization of the fields of an object's class begins.
There are two main potential traps in Java's object initialization. First, variable initializers are expressions that can contain method calls. Since methods can reference any variable defined in the class, the method called in a variable initializer can reference a variable that is defined below the variable being initialized. Since initialization order corresponds to the textual order of variable definitions, such a variable would not be initialized to the value prescribed by its initializer and would contain the default value. Another potential trap is when a method that is overridden in the derived class is called in the base class constructor, which can lead to behavior the programmer would not expect when an object of the derived class is created. According to the initialization order, the body of the base class constructor is executed before variable initializers are evaluated and before the body of the derived class constructor is executed. The overridden method called from the base class constructor can, however, reference variables defined in the derived class, but these are not yet initialized to the values specified by their initializers or set in the derived class constructor. The latter issue applies to C# as well, but in a less critical form, since in C# methods are not overridable by default.
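A minimal sketch of this trap as it manifests in C# (names invented for illustration); per the ordering above, the derived field initializer has already run when the base constructor executes, but the assignment in the derived constructor body has not:

using System;

class Base
{
    public Base() { Describe(); }                    // virtual call from a constructor
    public virtual void Describe() { }
}

class Derived : Base
{
    string fromInitializer = "initializer";          // runs before the Base constructor
    string fromCtor;                                  // assigned only in the constructor body

    public Derived() { fromCtor = "ctor"; }

    public override void Describe() =>
        Console.WriteLine(fromInitializer + " / " + (fromCtor ?? "null"));
}

class InitOrderDemo
{
    static void Main() => new Derived();             // prints "initializer / null"
}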
Resource disposal
Both languages mainly use garbage collection as a means of reclaiming memory resources, rather than explicit deallocation of memory. In both cases, if an object holds resources of kinds other than memory, such as file handles or graphical resources, then it must be notified explicitly when the application no longer uses it. Both C# and Java offer interfaces for such deterministic disposal, and both C# and Java (since Java 7) feature automatic resource management statements that will automatically invoke the disposal/close methods on those interfaces.
Methods
Extension methods and default methods
Using a special this designator on the first parameter of a method, C# allows the method to act as if it were a member method of the type of the first parameter. This extension of the foreign class is purely syntactical. The extension method must be declared static and defined within a purely static class. The method must obey any member access restrictions like any other method external to the class; thus, static methods cannot break object encapsulation. The "extension" is only active within scopes where the namespace of the static host class has been imported.
Since Java 8, Java has a similar feature called default methods, which are methods with a body declared on interfaces. As opposed to C# extension methods, Java default methods are instance methods on the interface that declares them. Definition of default methods in classes that implement the interface is optional: if the class does not define the method, the default definition is used instead.
Both the C# extension methods and the Java default methods allow a class to override the default implementation of the extension/default method. In both languages this override is achieved by defining a method on the class that should use an alternate implementation. C#'s scope rules define that if a matching method is found on a class, it takes precedence over a matching extension method. In Java, any class declared to implement an interface with a default method is assumed to have the default method's implementation, unless the class implements the method itself.
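A minimal sketch of a C# extension method (the Shout name is invented for illustration):

using System;

static class StringExtensions
{
    // the this modifier makes Shout an extension method on string
    public static string Shout(this string s) => s.ToUpper() + "!";
}

class ExtensionDemo
{
    static void Main()
    {
        // called as if it were an instance method, but resolved statically
        Console.WriteLine("hello".Shout());   // prints HELLO!
    }
}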
Partial methods
Related to partial classes, C# allows partial methods to be specified within partial classes. A partial method is an intentional declaration of a method, with several restrictions on the signature. These restrictions ensure that if a definition is not provided by any class part, then the method and every call to it can be safely erased. This feature allows code to provide a large number of interception points (like the template method GoF design pattern) without paying any runtime overhead if these extension points are not being used by another class part at compile time. Java has no corresponding concept.
Virtual methods
Methods in C# are non-virtual by default and must be declared virtual explicitly, if desired. In Java, all non-static, non-private methods are virtual. Virtuality guarantees that the most recent override for the method will always be called, but it incurs a certain runtime cost on invocation, as these invocations cannot normally be inlined and require an indirect call via the virtual method table. However, some JVM implementations, including the Oracle reference implementation, implement inlining of the most commonly called virtual methods.
Java methods are virtual by default (although they can be sealed by using the final modifier to disallow overriding), and there is no way to let derived classes define a new, unrelated method with the same name. This means that by default in Java, and only when explicitly enabled in C#, new methods may be defined in a derived class with the same name and signature as those in its base class. When the method is called on a superclass reference of such an object, the "deepest" overridden implementation of the base class's method will be called, according to the specific subclass of the object being referenced.
In some cases, when a subclass introduces a method with the same name and signature as a method already present in the base class, problems can occur. In Java, this will mean that the method in the derived class will implicitly override the method in the base class, even though that may not be the intent of the designers of either class. To mitigate this, C# requires that if a method is intended to override an inherited method, the override keyword must be specified. Otherwise, the method will "hide" the inherited method. If the keyword is absent, a compiler warning to this effect is issued, which can be silenced by specifying the new keyword. This avoids the problem that can arise from a base class being extended with a non-private method (i.e., an inherited part of the namespace) whose signature is already in use by a derived class. Java has a similar compiler check in the form of the @Override method annotation, but it is not compulsory, and in its absence most compilers will not comment (but the method will be overridden).
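A minimal sketch of overriding versus hiding in C# (names invented for illustration):

using System;

class Animal
{
    public virtual string Speak() => "...";
}

class Dog : Animal
{
    public override string Speak() => "Woof";  // participates in virtual dispatch
}

class Cat : Animal
{
    public new string Speak() => "Meow";       // hides, rather than overrides, Animal.Speak
}

class VirtualDemo
{
    static void Main()
    {
        Animal d = new Dog(), c = new Cat();
        Console.WriteLine(d.Speak());  // Woof: the deepest override wins
        Console.WriteLine(c.Speak());  // ...: the hiding method is ignored via a base reference
    }
}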
Constant/immutable parameters
In Java, it is possible to prevent reassignment of a local variable or method parameter by using the final keyword. Applying this keyword to a primitive-type variable causes the variable to become immutable. However, applying final to a reference-type variable only prevents another object from being assigned to it; it does not prevent the data contained by the object from being mutated. As of C# 7, it is possible to prevent reassignment of a method parameter by using the in keyword; however, this keyword cannot be used on local variables. As with Java, applying in to a parameter only prevents the parameter from being reassigned to a different value; it is still possible to mutate the data contained by the object.
Neither language supports the const-correctness feature that exists in C/C++, which makes a method constant. Java reserves the term "constant" for fields declared static and final; as a convention, these variable names are all-capital, with words separated by underscores, but the Java language does not insist on this. A parameter that is merely final is not considered a constant, although it may behave as one in the case of a primitive data type or an immutable class, like a String.
Generator methods
Any C# method declared as returning IEnumerable, IEnumerator or the generic versions of these interfaces can be implemented using yield syntax. This is a form of limited, compiler-generated continuations, and it can drastically reduce the code needed to traverse or generate sequences, although that code is merely generated by the compiler instead. The feature can also be used to implement infinite sequences, e.g., the sequence of Fibonacci numbers (a sketch follows below). Java does not have an equivalent feature. Instead, generators are typically defined by providing a specialized implementation of a well-known collection or iterable interface, which will compute each element on demand. For such a generator to be used in a for-each statement, it must implement the interface java.lang.Iterable.
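A minimal sketch of an infinite Fibonacci sequence using yield (assumes C# 7 or later for the tuple assignment):

using System;
using System.Collections.Generic;
using System.Linq;

class FibDemo
{
    // yield return produces a lazy IEnumerable backed by a compiler-generated state machine
    static IEnumerable<long> Fibonacci()
    {
        long a = 0, b = 1;
        while (true)
        {
            yield return a;
            (a, b) = (b, a + b);
        }
    }

    static void Main() =>
        Console.WriteLine(string.Join(", ", Fibonacci().Take(8)));  // 0, 1, 1, 2, 3, 5, 8, 13
}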
Explicit interface implementation
C# also has explicit interface implementation, which allows a class to specifically implement methods of an interface separately from its own class methods, or to provide different implementations for two methods with the same name and signature inherited from two base interfaces. In either language, if a method (or, in C#, a property) is specified with the same name and signature in multiple interfaces, the members will clash when a class is designed that implements those interfaces. An implementation will by default implement a common method for all of the interfaces. If separate implementations are needed (because the methods serve separate purposes, or because return values differ between the interfaces), C#'s explicit interface implementation will solve the problem, though allowing different results for the same method, depending on the current cast of the object. In Java, there is no way to solve this problem other than refactoring one or more of the interfaces to avoid name clashes.
Reference (in/out) parameters
In Java, the arguments of primitive types (e.g., int, double) are passed to a method by value, while objects are passed by reference to the object (the reference itself being passed by value). This means that a method operates on copies of the primitives passed to it instead of on the actual variables, whereas the actual objects can in some cases be changed: an immutable object such as a String cannot be changed through a method call, but the contents of a mutable object can be. In C#, it is possible to enforce a reference with the ref keyword, similar to C++ and, in a sense, to C. This feature of C# is particularly useful when one wants to create a method that returns more than one object. In Java, trying to return multiple values from a method is unsupported unless a wrapper class is used.
Checked exceptions
Java supports checked exceptions (along with unchecked exceptions). C# only supports unchecked exceptions. Checked exceptions force the programmer to either declare the exception thrown in a method or to catch the thrown exception using a try-catch clause. Checked exceptions can encourage good programming practice, ensuring that all errors are dealt with. However, Anders Hejlsberg, chief C# language architect, argues that they were to some extent an experiment in Java and that they have not been shown to be worthwhile except in small example programs.
One criticism is that checked exceptions encourage programmers to use an empty catch block (catch (Exception e) {}), which silently swallows exceptions, rather than letting the exceptions propagate to a higher-level exception-handling routine. In some cases, however, exception chaining can be applied instead, by re-throwing the exception in a wrapper exception. For example, if an object is changed to access a database instead of a file, the file-related exception could be caught and re-thrown as a database-related one, since the caller may not need to know the inner workings of the object. However, not all programmers agree with this stance. James Gosling and others maintain that checked exceptions are useful, and that misusing them has caused the problems. Silently catching exceptions is possible, yes, but it must be stated explicitly what to do with the exception, versus unchecked exceptions that allow doing nothing by default: an unchecked exception can be ignored without writing any code, whereas a checked exception must be ignored explicitly in code.
Try-catch-finally
There are also differences between the two languages in treating the try-finally statement. The finally block is always executed, even if the try block contains control-passing statements like throw or return. In Java, this may result in unexpected behavior if the try block is left by a return statement with some value, and then the finally block that is executed afterward is also left by a return statement with a different value. C# resolves this problem by prohibiting any control-passing statements like return or break in the finally block.
A common reason for using try-finally blocks is to guard resource-managing code, thus guaranteeing the release of precious resources in the finally block. C# features the using statement as a syntactic shorthand for this common scenario, in which the Dispose() method of the object of the using is always called.
A rather subtle difference is the moment a stack trace is created when an exception is being thrown. In Java, the stack trace is created at the moment the exception is created:

class Foo {
    Exception up = new Exception();
    int foo() throws Exception {
        throw up;
    }
}

The exception in the code above will always contain the constructor's stack trace, no matter how often foo is called. In C#, on the other hand, the stack trace is created the moment "throw" is executed:

class Foo
{
    Exception e = new Exception();
    int foo()
    {
        try { throw e; }
        catch (Exception e) { throw; }
    }
}

In the code above, the exception will contain the stack trace of the first throw line. When catching an exception, there are two options in case the exception should be rethrown: throw will just rethrow the original exception with the original stack, while throw e would create a new stack trace.
Finally blocks
Java allows flow of control to leave the finally block of a try statement, regardless of the way it was entered. This can cause another control flow statement (such as return) to be terminated mid-execution. For example:

int foo() {
    try {
        return 0;
    } finally {
        return 1;   // discards the pending return 0
    }
}

In the above code, the return statement within the try block causes control to leave it, and thus the finally block is executed before the actual return happens. However, the finally block itself also performs a return. Thus, the original return that caused it to be entered is not executed, and the above method returns 1 rather than 0. Informally speaking, it tries to return 0 but finally returns 1. C# does not allow any statements that let control flow leave the finally block prematurely, except for throw. In particular, return is not allowed at all, goto is not allowed if the target label is outside the finally block, and continue and break are not allowed if the nearest enclosing loop is outside the finally block.
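A minimal sketch of the using statement described above; Dispose runs even if the block is left early:

using System;
using System.IO;

class UsingDemo
{
    static void Main()
    {
        // roughly equivalent to try { ... } finally { writer.Dispose(); }
        using (var writer = new StringWriter())
        {
            writer.Write("hello");
            Console.WriteLine(writer.ToString());
        }   // writer.Dispose() is guaranteed to be called here
    }
}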
Generics
In the field of generics, the two languages show a superficial syntactical similarity, but they have deep underlying differences.
Type erasure versus reified generics
Generics in Java are a language-only construction; they are implemented only in the compiler. The generated classfiles include generic signatures only in the form of metadata (allowing the compiler to compile new classes against them). The runtime has no knowledge of the generic type system; generics are not part of the JVM. Instead, generic classes and methods are transformed during compiling via a process termed type erasure. During this, the compiler replaces all generic types with their raw versions and inserts casts/checks appropriately in client code where the type and its methods are used. The resulting byte code will contain no references to any generic types or parameters (see also Generics in Java). The Java language specification intentionally prohibits certain uses of generics; this is necessary to allow for implementing generics through type erasure, and to allow for migration compatibility. Research into adding reified generics to the Java platform is ongoing, as part of Project Valhalla.
C# builds on support for generics from the virtual execution system, i.e., it is not just a language feature. The language is merely a front-end for cross-language generics support in the CLR. During compiling, generics are verified for correctness, but code generation to implement the generics is deferred to class-load time. Client code (code invoking generic methods/properties) is fully compiled and can safely assume generics to be type-safe. This is called reification. At runtime, when a unique set of type parameters for a generic class/method/delegate is encountered for the first time, the class loader/verifier will synthesize a concrete class descriptor and generate the method implementations. During the generation of method implementations, all reference types will be considered one type, as reference types can safely share the same implementations. This is merely for the purpose of implementing the code: different sets of reference types will still have unique type descriptors; their method tables will merely point to the same code.
The following illustrates one difference (among others) between Java and C# when managing generics: C# allows generics directly for primitive types, while Java instead allows the use of boxed types as type parameters (e.g., List<Integer> instead of List<int>). This comes at a cost, since all such values need to be boxed/unboxed when used, and they all need to be heap-allocated. However, a generic type can be specialized with an array type of a primitive type in Java; for example, List<int[]> is allowed. Several third-party libraries implemented the basic collections in Java with backing primitive arrays to preserve the runtime and memory optimization that primitive types provide.
Migration compatibility
Java's type erasure design was motivated by a design requirement to achieve migration compatibility – not to be confused with backward compatibility. In particular, the original requirement was "… there should be a clean, demonstrable migration path for the Collections APIs that were introduced in the Java 2 platform". This was designed so that any new generic collections should be passable to methods that expected one of the pre-existing collection classes.
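A minimal sketch of the reified generics described above; the runtime knows the concrete type arguments, and List<int> stores unboxed integers:

using System;
using System.Collections.Generic;

class GenericsDemo
{
    static void Main()
    {
        List<int> nums = new List<int> { 1, 2, 3 };                   // no boxing of the elements
        Console.WriteLine(nums.GetType());                            // List`1[System.Int32]
        Console.WriteLine(typeof(List<int>) == typeof(List<long>));  // False: distinct runtime types
    }
}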
Migration compatibility
Java's type erasure design was motivated by a design requirement to achieve migration compatibility – not to be confused with backward compatibility. In particular, the original requirement was "… there should be a clean, demonstrable migration path for the Collections APIs that were introduced in the Java 2 platform". This was designed so that any new generic collections should be passable to methods that expected one of the pre-existing collection classes.

C# generics were introduced into the language while preserving full backward compatibility, but did not preserve full migration compatibility: old code (pre C# 2.0) runs unchanged on the new generics-aware runtime without recompilation. As for migration compatibility, new generic collection classes and interfaces were developed that supplemented the non-generic .NET 1.x collections rather than replacing them. In addition to generic collection interfaces, the new generic collection classes implement the non-generic collection interfaces where possible. This prevents the use of the new generic collections with pre-existing (non-generic-aware) methods if those methods are coded to use the collection classes, though they remain usable with methods coded against the collection interfaces.

Covariance and contravariance
Covariance and contravariance are supported by both languages. Java has use-site variance that allows a single generic class to declare members using both co- and contravariance. C# has define-site variance for generic interfaces and delegates. Variance is not supported directly on classes but is supported through their implementation of variant interfaces. C# also has use-site covariance support for methods and delegates.

Functional programming
Closures
A closure is an inline function that captures variables from its lexical scope. C# supports closures as anonymous methods or lambda expressions with full-featured closure semantics. In Java, anonymous inner classes remained the preferred way to emulate closures until Java 8 introduced lambdas. Anonymous inner classes are a more verbose construction, and this approach also has some differences compared to real closures, notably more controlled access to variables from the enclosing scopes: only final members can be referenced. Java 8's lambdas, however, fully inherit the current scope and, in fact, do not introduce a new scope.

When a reference to a method can be passed around for later execution, a problem arises about what to do when the method has references to variables/parameters in its lexical scope. C# closures can access any variable/parameter from their lexical scope. In Java's anonymous inner classes, only references to final members of the lexical scope are allowed, thus requiring the developer to mark which variables to make available, and in what state (possibly requiring boxing).

Lambdas and expression trees
C# and Java feature a special type of in-line closures called lambdas. These are anonymous methods: they have a signature and a body, but no name. They are mainly used to specify local function-valued arguments in calls to other methods, a technique mainly associated with functional programming. C#, unlike Java, allows the use of lambda functions as a way to define special data structures called expression trees. Whether they are seen as an executable function or as a data structure depends on compiler type inference and what type of variable or parameter they are assigned or cast to. Lambdas and expression trees play key roles in Language Integrated Query (LINQ).
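A minimal C# sketch of this dual nature (illustrative only): the same lambda text can compile either to executable code or to an inspectable expression tree, depending on the target type:

using System;
using System.Linq.Expressions;

class LambdaDemo {
    static void Main() {
        Func<int, int> square = x => x * x;                  // compiled to executable code
        Expression<Func<int, int>> squareExpr = x => x * x;  // captured as a data structure

        Console.WriteLine(square(5));                // 25
        Console.WriteLine(squareExpr);               // x => (x * x)
        Console.WriteLine(squareExpr.Compile()(5));  // 25; a tree can be compiled on demand
    }
}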
Metadata
Preprocessing, compilation and packaging
Namespaces and file contents
In C#, namespaces are similar to those in C++. Unlike package names in Java, a namespace is not in any way tied to the location of the source file. While it is not strictly necessary for a Java source file location to mirror its package directory structure, it is the conventional organization.

Both languages allow importing of classes (e.g., import java.util.* in Java), allowing a class to be referenced using only its name. Sometimes classes with the same name exist in multiple namespaces or packages. Such classes can be referenced by using fully qualified names, or by importing only selected classes with different names. To do this, Java allows importing a single class (e.g., import java.util.List). C# allows importing classes under a new local name using the following syntax: using Console = System.Console. It also allows importing specializations of classes in the form of using IntList = System.Collections.Generic.List<int>.

Both languages have a static import syntax that allows using the short name of some or all of the static methods/fields in a class (e.g., allowing foo(bar) where foo() can be statically imported from another class). C# has a static class syntax (not to be confused with static inner classes in Java), which restricts a class to only contain static methods. C# 3.0 introduces extension methods to allow users to statically add a method to a type (e.g., allowing foo.bar() where bar() can be an imported extension method working on the type of foo).

The Sun Microsystems Java compiler requires that a source file name match the only public class inside it, while C# allows multiple public classes in the same file and puts no restrictions on the file name. C# 2.0 and later allows splitting a class definition into several files by using the partial keyword in the source code. In Java, a public class will always be in its own source file. In C#, source code files and the separation of logical units are not tightly related.

Conditional compilation
Unlike Java, C# implements conditional compilation using preprocessor directives. It also provides a Conditional attribute to define methods that are only called when a given compilation constant is defined. This way, assertions can be provided as a framework feature with the method Debug.Assert(), which is only evaluated when the DEBUG constant is defined. Since version 1.4, Java provides a language feature for assertions, which are turned off at runtime by default but can be enabled using the -enableassertions or -ea switch when invoking the JVM.

Threading and asynchronous features
Both languages include thread synchronization mechanisms as part of their language syntax.

Task-based parallelism for C#
With .NET Framework 4.0, a new task-based programming model was introduced to replace the existing event-based asynchronous model. The API is based around the Task and Task<T> classes. Tasks can be composed and chained. By convention, every method that returns a Task should have its name postfixed with Async.

public static class SomeAsyncCode {
    public static Task<XDocument> GetContentAsync() {
        HttpClient httpClient = new HttpClient();
        // GetStringAsync requires an absolute URI.
        return httpClient.GetStringAsync("https://www.contoso.com").ContinueWith((task) => {
            string responseBodyAsText = task.Result;
            return XDocument.Parse(responseBodyAsText);
        });
    }
}

var t = SomeAsyncCode.GetContentAsync().ContinueWith((task) => {
    var xmlDocument = task.Result;
});
// The continuation is scheduled automatically; calling Start() on it would throw.

In C# 5 a set of language and compiler extensions was introduced to make it easier to work with the task model. These language extensions included the notion of async methods and the await statement that make the program flow appear synchronous.
public static class SomeAsyncCode {
    public static async Task<XDocument> GetContentAsync() {
        HttpClient httpClient = new HttpClient();
        string responseBodyAsText = await httpClient.GetStringAsync("https://www.contoso.com");
        return XDocument.Parse(responseBodyAsText);
    }
}

var xmlDocument = await SomeAsyncCode.GetContentAsync();
// The task is already running; await asynchronously waits for its result.

From this syntactic sugar the C# compiler generates a state machine that handles the necessary continuations without developers having to think about it.

Task-based parallelism for Java
Java has supported threads since JDK 1.0 and offers high versatility for running threads, often called tasks. This is done by implementing a functional interface (the java.lang.Runnable interface) defining a single void no-args method, as demonstrated in the following example:

var myThread = new Thread(() -> {
    var threadName = Thread.currentThread().getName();
    System.out.println("Hello " + threadName);
});
myThread.start();

Similar to C#, Java has a higher-level mechanism for working with threads. Executors can execute asynchronous tasks and also manage a group of subprocesses. All the threads of an ExecutorService instance are handled in a pool. The pool is reused under the hood for subsequent tasks, so it is possible to run as many concurrent tasks as the programmer wants throughout the life cycle of the application using a single executor service instance. This is how the first thread example looks using executors:

ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(() -> {
    var threadName = Thread.currentThread().getName();
    System.out.println("Hello " + threadName);
});

The ExecutorService instance also supports the Callable interface, another single-method interface like Runnable, except that the contained method of Callable returns a value. Accordingly, the lambda expression must also return a value.

class SomeAsyncCode {
    // The field is static so that the static method below can use it.
    static ExecutorService executor = Executors.newSingleThreadExecutor();

    public static Future<String> getContentAsync() {
        return executor.submit(() -> {
            HttpRequest httpReq = HttpRequest.newBuilder()
                .uri(new URI("https://www.graalvm.org"))
                .build();
            return HttpClient.newHttpClient()
                .send(httpReq, BodyHandlers.ofString())
                .body();
        });
    }
}

var webPageResult = SomeAsyncCode.getContentAsync().get();

Calling the method get() blocks the current thread and waits until the callable completes before returning the value (in the example, the web page content).

Additional features
Numeric applications
To adequately support applications in the field of mathematical and financial computation, several language features exist.

Java's strictfp keyword enables strict floating-point calculations for a region of code. Strict floating-point calculations require that even if a platform offers higher precision during calculations, intermediate results must be converted to single/double. This ensures that strict floating-point calculations return exactly the same result on all platforms. Without strict floating-point, a platform implementation is free to use higher precision for intermediate results during calculation. C# allows an implementation for a given hardware architecture to always use a higher precision for intermediate results if available; i.e., C# does not allow the programmer to optionally force intermediate results to use the potentially lower precision of single/double.

Although Java's floating-point arithmetic is largely based on IEEE 754 (Standard for Binary Floating-Point Arithmetic), certain features are unsupported even when using the strictfp modifier, such as exception flags and directed roundings, abilities mandated by IEEE Standard 754 (see Criticism of Java, Floating point arithmetic).

C# provides a built-in decimal type, which has higher precision (but a smaller range) than the Java/C# double. The decimal type is a 128-bit data type suitable for financial and monetary calculations. It can represent values ranging from 1.0 × 10−28 to approximately 7.9 × 1028 with 28–29 significant digits. The structure uses C# operator overloading so that decimals can be manipulated using operators such as +, -, * and /, like other primitive data types.
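A small illustrative sketch of the difference (not from the original text): binary double cannot represent 0.1 exactly, while decimal arithmetic is exact for such values:

using System;

class DecimalDemo {
    static void Main() {
        double d = 0.1 + 0.2;
        decimal m = 0.1m + 0.2m;

        Console.WriteLine(d == 0.3);   // False - accumulated binary rounding error
        Console.WriteLine(m == 0.3m);  // True  - these values are exact in decimal
    }
}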
The BigDecimal and BigInteger types provided with Java allow arbitrary-precision representation of decimal numbers and integer numbers, respectively. Java's standard library does not have classes to deal with complex numbers.

The BigInteger and Complex types provided with C# allow representation and manipulation of arbitrary-precision integers and complex numbers, respectively. The structures use C# operator overloading so that instances can be manipulated using operators such as +, -, * and /, like other primitive data types. C#'s standard library does not have classes to deal with arbitrary-precision floating point numbers (see software for arbitrary-precision arithmetic).

C# can help mathematical applications with the checked and unchecked operators that allow the enabling or disabling of run-time checking for arithmetic overflow for a region of code.

Language integrated query (LINQ)
C#'s Language Integrated Query (LINQ) is a set of features designed to work together to allow in-language querying abilities, and is a distinguishing feature between C# and Java; a small query sketch follows the list below. LINQ consists of the following features:
Extension methods allow existing interfaces or classes to be extended with new methods. Implementations can be shared or an interface can have a dedicated implementation.
Lambdas allow for expression of criteria in a functional fashion.
Expression trees allow a specific implementation to capture a lambda as an abstract syntax tree rather than an executable block. This can be utilized by implementations to represent criteria in a different language, e.g. in the form of an SQL where clause, as is the case with LINQ to SQL.
Anonymous types and type inference support capturing and working with the result type of a query. A query may both join and project over query sources, which may lead to a result type that cannot be named.
Query expressions support a syntax familiar to SQL users.
Nullable (lifted) types allow for a better match with query providers that support nullable types, such as SQL.
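As referenced above, a minimal LINQ sketch (names and data are arbitrary): a query expression over an in-memory array, which the compiler translates into extension-method calls:

using System;
using System.Linq;

class LinqDemo {
    static void Main() {
        int[] numbers = { 5, 1, 8, 3, 9, 2 };

        // Lazily evaluated query: filter, order, then project.
        var squaresOfBig = from n in numbers
                           where n > 3
                           orderby n
                           select n * n;

        Console.WriteLine(string.Join(", ", squaresOfBig)); // 25, 64, 81
    }
}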
Native interoperability
The Java Native Interface (JNI) feature allows Java programs to call non-Java code. However, JNI does require the code being called to follow several conventions and imposes restrictions on types and names used. This means that an extra adaptation layer between legacy code and Java is often needed. This adaptation code must be coded in a non-Java language, often C or C++. Java Native Access (JNA) allows easier calling of native code that only requires writing Java code, but comes at a performance cost. In addition, third-party libraries provide Java–Component Object Model (COM) bridging, e.g., JACOB (free) and J-Integra for COM (proprietary).

.NET Platform Invoke (P/Invoke) offers the same ability by allowing calls from C# to what Microsoft terms unmanaged code. Through metadata attributes the programmer can control exactly how the parameters and results are marshalled, thus avoiding the external glue code needed by the equivalent JNI in Java. P/Invoke allows almost complete access to procedural APIs (such as Win32 or POSIX), but limited access to C++ class libraries. In addition, the .NET Framework also provides a .NET–COM bridge, allowing access to COM components as if they were first-class .NET objects.

C# also allows the programmer to disable the normal type-checking and other safety features of the CLR, which then enables the use of pointer variables. When using this feature, the programmer must mark the code using the unsafe keyword. JNI, P/Invoke, and "unsafe" code are equally risky features, exposing possible security holes and application instability. An advantage of unsafe, managed code over P/Invoke or JNI is that it allows the programmer to continue to work in the familiar C# environment to accomplish some tasks that otherwise would require calling out to unmanaged code. An assembly (program or library) using unsafe code must be compiled with a special switch and will be marked as such. This enables runtime environments to take special precautions before executing potentially harmful code.

Runtime environments
Java (the programming language) is designed to execute on the Java platform via the Java Runtime Environment (JRE). The Java platform includes the Java virtual machine (JVM) and a common set of libraries. The JRE was originally designed to support interpreted execution, with final compiling as an option. Most JRE environments execute fully or at least partially compiled programs, possibly with adaptive optimization. The Java compiler produces Java bytecode. Upon execution, the bytecode is loaded by the Java runtime and either interpreted directly or compiled to machine instructions and then executed.

C# is designed to execute on the Common Language Runtime (CLR). The CLR is designed to execute fully compiled code. The C# compiler produces Common Intermediate Language instructions. Upon execution, the runtime loads this code and compiles it to machine instructions on the target architecture.

Examples
Input/output
Example illustrating how to copy text one line at a time from one file to another, using both languages.

Integration of library-defined types
C# allows library-defined types to be integrated with existing types and operators by using custom implicit/explicit conversions and operator overloading, as illustrated by the following example:

C# delegates and equivalent Java constructs
Type lifting
Interoperability with dynamic languages
This example illustrates how Java and C# can be used to create and invoke an instance of a class that is implemented in another programming language. The "Deepthought" class is implemented using the Ruby programming language and represents a simple calculator that will multiply two input values (a and b) when the Calculate method is invoked. In addition to the conventional way, Java has GraalVM, a virtual machine capable of running any implemented programming language.

Fibonacci sequence
This example illustrates how the Fibonacci sequence can be implemented using the two languages. The C# version takes advantage of C# generator methods. The Java version takes advantage of the Stream interface and method references.
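As an illustration of the generator-method approach just described, a minimal C# sketch (illustrative only, not the article's original listing; the use of BigInteger from System.Numerics is an assumption):

using System;
using System.Collections.Generic;
using System.Numerics;

class FibonacciDemo {
    // yield return turns this method into a lazy, unbounded generator.
    static IEnumerable<BigInteger> Fibonacci() {
        BigInteger a = 0, b = 1;
        while (true) {
            yield return a;
            (a, b) = (b, a + b);  // advance to the next pair
        }
    }

    static void Main() {
        foreach (var n in Fibonacci()) {
            if (n > 100) break;
            Console.Write(n + " ");  // 0 1 1 2 3 5 8 13 21 34 55 89
        }
    }
}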
Both the Java and the C# examples use K&R style for code formatting of classes, methods and statements.

See also
Comparison of C# and VB.NET
Comparison of Java and C++
Comparison of the Java and .NET platforms
Java programming language

References

External links
Moving to C# and the .NET Framework at MSDN
C# and Java: Comparing Programming Languages at MSDN
Java vs. C# – Code for Code Comparison
Nine Language Performance Round-up
Microsoft Developer Network (MSDN): The C# Programming Language for Java Developers
Standard ECMA-334 C# Language specification
Java Language Specification (Sun)
The State of C#: Is It Still a Viable Language?

.NET
C programming language family
Comparison of individual programming languages
Java (programming language)
23219992
https://en.wikipedia.org/wiki/Cyber%20Studio
Cyber Studio
Cyber Studio CAD-3D (or just CAD-3D) is a 3D modeling and animation package developed by Tom Hudson for the Atari ST computer and published by Antic Software. The package is a precursor to 3D Studio Max.

CAD-3D is a basic polygonal 3D modeling and rendering program. An operator can assemble a scene out of geometric primitives or custom extruded or lathed objects. Various viewports are available to adjust lighting and camera positioning. The limited rendering functionality allows for flat shading in 16 shades. Rendered images can be exported in Degas Elite or NeoChrome format. By making changes between rendering separate cels, CAD-3D can be used for simple animations. Without its scripting extension CyberControl, changes have to be made by hand.

History
The first version was published in 1986, titled CAD-3D. It still lacked advanced modeling features (such as boolean subtraction) and any animation. In early 1987 Tom Hudson extended the application and it was renamed "Cyber Studio CAD-3D v.2.02". The name Cyber Studio was proposed by Antic Software publisher Gary Yost due to his interest in William Gibson's seminal 1984 book Neuromancer, which had introduced the term cyberspace to describe a virtual 3D environment.

As of 1987 the software was packaged together with Cybermate, a Forth-based authoring language written by Tektronix engineer Mark Kimball, the creator of the StereoTek liquid crystal shutter 3D glasses that Antic Software sold as an add-on to Cyber Studio. Cybermate was used to edit, sequence and present the animation files along with sound. The scripts allowed an operator to control when and how fast a video or audio segment played and whether it should loop. In combination with the other scripting language, CyberControl, users could create video animations up to five minutes long.

Jim Kent wrote Cyber Paint, a 2D animation program that brought together a wide variety of animation and paint functionality and the delta-compressed animation format developed for CAD-3D.

Extensions
Antic Software published a variety of related Cyber products to extend the software's functionality:
CyberPaint - a cel-based 2D painting and animation software
CyberControl - a scripting language for Cyber Studio CAD-3D
CyberSculpt - an extended modeling software
CyberTexture - a texturing extension

References
1986 software
Animation software
Atari ST software
1910758
https://en.wikipedia.org/wiki/Nancy%20Hafkin
Nancy Hafkin
Nancy Hafkin is a pioneer of networking and development information and electronic communications in Africa, spurring the Pan African Development Information System (PADIS) of the United Nations Economic Commission for Africa (UNECA) from 1987 until 1997. She also played a role in facilitating the Association for Progressive Communications's work to enable email connectivity in more than 10 countries during the early 1990s, before full Internet connectivity became a reality in most of Africa.

Work
Hafkin studied history and anthropology at Brandeis University in Boston from 1960 to 1965. She then studied at Boston University from 1965 to 1967. There, she found a mentor in Professor Ruth Morgenthau, who encouraged her to intensively study African history during her graduate studies from 1967 to 1973. At the time it was a young field in which many women were active. Hafkin received her doctorate with a thesis on Trade, Society and Politics in Northern Mozambique, 1753-1913.

Move to Ethiopia
Hafkin moved to Ethiopia in 1975 with her husband, Berhanu Abebe, an Ethiopian classmate at Brandeis, and they lived in Addis Ababa for nearly 25 years, until 2000. When Hafkin worked for the United Nations Economic Commission for Africa (UNECA), the Internet did not yet exist. While her efforts with UNECA were focused on economic development in every African country, she noticed that information was basically inaccessible on the continent, with data being shared by fax and postal delivery. There was not even one public library in the country, so she decided to address the information crisis by launching the Pan African Development Information System (PADIS) in 1986. In addition to her role with PADIS, she worked as a visiting professor at the University of Addis Ababa as the Chair of History from 1980 to 1981.

Through her time with PADIS, she was able to help establish the first electronic communications networks in ten African countries and actively convinced many African government officials of the importance of the Internet. Over the years Hafkin significantly contributed to "sharpening global awareness of developments in the context of gender and information technology as well as enabling fast and inexpensive access to information technology and thus information and networking on the African continent." Through the efforts of PADIS, new African networks have broadened access to information resources while reducing the isolation of African students.

Return to the U.S.
In 2000, she left her position with the U.N. and returned to the United States with her husband so she could continue her work improving information access for women, one of her initial goals as a young researcher. Since retiring from academia, Hafkin still acts as a keynote speaker and gives lectures on the empowerment and participation of women in information technology.

Writing
Nancy Hafkin edited Cinderella or Cyberella?: Empowering Women in the Knowledge Society, published in 2006, a collection of essays discussing ways that information and communications technologies empower women.

Awards
The APC (headquartered in Johannesburg) established the annual Nancy Hafkin Prize for innovation in information technology in Africa, which recognizes outstanding initiatives using information and communications technologies (ICTs) for development. In 2012, Hafkin was inducted into the Internet Hall of Fame by the Internet Society.

Bibliography
Cinderella or Cyberella? Empowering Women in the Knowledge Society, editors Nancy Hafkin and Sophia Huyer (Kumarian Press, 2006)

References
Year of birth missing (living people)
Living people
South African computer scientists
South African women computer scientists
South African women scientists
21st-century women scientists
Internet pioneers
Women Internet pioneers
21st-century South African scientists
2305026
https://en.wikipedia.org/wiki/LinuxWorld%20Conference%20and%20Expo
LinuxWorld Conference and Expo
LinuxWorld Conference and Expo (renamed OpenSource World in its final year) was a conference and trade show that focused on open source and Linux solutions in the information technology sector. It ran from 1998 to 2009, in venues around the world. The show was owned and managed by IDG World Expo, a business unit of International Data Group (IDG). Keynote speakers included Linux creator Linus Torvalds, One Laptop Per Child founder Nicholas Negroponte, and Creative Commons founder Lawrence Lessig. Another IDG business unit, Network World, operated the LinuxWorld.com web site, which often carried audio, video, and presentation materials from the show, as well as interviews with the show's speakers. This event should not be confused with the "Open Source World Conference", an annual Spanish-language event that ran from 2004 to 2012.

History
The first LinuxWorld Conference and Expo occurred in 1998 at the San Jose Convention Center. The keynote speaker was Linus Torvalds. The event featured a debate with Torvalds, Richard Stallman and Larry Wall. At the conference an agreement was made by Patrick Op de Beeck and Mark Shuttleworth concerning cooperation between KDE and GNOME for improving each other's work. The 2001 documentary film Revolution OS includes footage from the 1999 LinuxWorld event in New York City. Writer and free software advocate Don Marti ran LinuxWorld from 2005 until its end in 2009.

LinuxWorld Open Solutions Summits took place in Italy, Spain, Sweden, and New York City. LinuxWorld Conference and Expo took place in the following locations, among others:
Belgium
Brazil
Canada
Beijing, China
Guangzhou, China
Shanghai, China
Germany
Japan
Korea
Malaysia
Mexico
Netherlands
Singapore
South Africa
United Kingdom (in London)
San Francisco, United States
Boston, United States

In 2009, the conference was renamed "OpenSource World". It was held at the Moscone Center in San Francisco. This was the last known LinuxWorld or OpenSource World event.

See also
List of free-software events

References

External links
http://www.linuxworldexpo.co.uk

Linux conferences
Free-software conferences
Recurring events established in 1999
International Data Group
44938612
https://en.wikipedia.org/wiki/Shai%20Halevi
Shai Halevi
Shai Halevi (born 1966) is a computer scientist who works on cryptography research at the Algorand Foundation, a blockchain startup founded by Silvio Micali.

Born in Israel in 1966, Halevi received a B.A. and M.Sc. in computer science from the Technion, Israel Institute of Technology, in 1991 and 1993. He received his Ph.D. in computer science from MIT in 1997, and then joined IBM's Thomas J. Watson Research Center, where he was a principal research staff member until 2019. Since 2019, he has been a research fellow at the Algorand Foundation.

Research
Shai Halevi's research interests are in cryptography and security. He has published numerous original technical research papers, three of which were awarded the IBM Pat Goldberg memorial best-paper award (in 2004, 2012, and 2013).

Notable contributions by Shai Halevi include:
Obfuscation. Halevi is a co-inventor of the first candidate general-purpose indistinguishability obfuscation schemes, with security based on a mathematical conjecture. This development generated much interest in the cryptography community and was called "a watershed moment for cryptography."
Cryptographic Multilinear Maps. Halevi is a co-inventor of Cryptographic Multilinear Maps (which constitute the main technical tool behind cryptographic obfuscation and many other applications), solving a long-standing open problem.
Homomorphic Encryption. Halevi is one of the leading researchers on homomorphic encryption. He has authored many articles and given invited lectures and tutorials on the topic, and he is also the principal developer (together with Victor Shoup) of the HElib homomorphic-encryption software library.
The Random Oracle Model. Halevi co-authored the influential work that pointed out for the first time the existence of "structurally flawed" cryptosystems that nonetheless have a proof of security in the random-oracle model.

Since 2013, Halevi has been the chair of the steering committee of the Theory of Cryptography Conference. He has served on the board of directors of the International Association for Cryptologic Research. He chaired the CRYPTO conference in 2009 and co-chaired the TCC conference in 2006. Halevi has also given many invited talks, including at the USENIX Security Symposium in 2008 and the PKC conference in 2014.

Software
Halevi maintains two open-source software projects: the HElib homomorphic-encryption library, and a web system for submission/review of articles to academic conferences.

References

External links
Shai Halevi's Home Page
The Cryptography Research Group at the IBM T.J. Watson Research Center

Israeli computer scientists
Theoretical computer scientists
Modern cryptographers
Public-key cryptographers
Living people
Israeli cryptographers
1966 births
MIT School of Engineering alumni
Technion – Israel Institute of Technology alumni
IBM employees
24378325
https://en.wikipedia.org/wiki/Bachelor%20of%20Science%20in%20Information%20Technology
Bachelor of Science in Information Technology
A Bachelor of Science in Information Technology (abbreviated BSIT or B.Sc. IT) is a bachelor's degree awarded for an undergraduate program in information technology. The degree is normally required in order to work in the information technology industry.

A Bachelor of Science in Information Technology (B.Sc. IT) degree program typically takes three to four years depending on the country. The degree is primarily focused on subjects such as software, databases, and networking. In general, computer science degrees tend to focus on the mathematical and theoretical foundations of computing rather than emphasizing specific technologies. In India, however, an engineering degree in information technology (B.Tech IT) is considered equivalent to a Computer Science and Engineering degree, because both contain the core computer science subjects, such as algorithms and data structures, compiler design, automata theory, and computer organization and architecture, and both focus heavily on the mathematical foundations of computer science. The syllabi of the two streams are strikingly similar across many universities in India.

The degree is a Bachelor of Science degree, with institutions conferring degrees in the field of information technology and related fields. The degree is awarded for completing a program of study in the field of software development, software testing, software engineering, web design, databases, programming, computer networking and computer systems. Graduates with an information technology background are able to perform technology tasks relating to the processing, storing, and communication of information between computers, mobile phones, and other electronic devices. Information technology as a field emphasizes the secure management of large amounts of variable information and its accessibility via a wide variety of systems, both local and worldwide.

Skills taught
Generally, software and information technology companies look for people who have strong programming, system analysis, and software testing skills. Many colleges teach practical skills that are crucial to becoming a software developer. As logical reasoning and critical thinking are important in becoming a software professional, this degree encompasses the complete process of software development, from software design and development to final testing. Students who complete their undergraduate education in software engineering at a satisfactory level often pursue graduate studies such as a Master of Science in Information Technology (M.Sc IT), sometimes continuing on to a doctoral program and earning a doctorate such as a Doctor of Information Technology (DIT).

International variations
Australia
In Australia, Bachelor of Information Technology/Science (BInfTech) programs are three to four years in duration. Honors are awarded to graduates who successfully complete a four-year program.

Belgium
In Belgium, the Bachelor of Science in Information Technology is a 3-year degree after compulsory education, taken outside the universities, with specialization in certain fields, usually databases, networks, real-time operating systems and/or web design.

Bangladesh
In Bangladesh, the Bachelor of Engineering in Information Technology is awarded following a four-year course of study at Dhaka University, Jahangirnagar University, Bangladesh University of Professionals, the University of Information Technology and Sciences, Stamford University Bangladesh and the Royal University of Dhaka.

Canada
In Canada, the Bachelor of Science (B.S.) program in Information Technology (IT) with a minor in Business Administration offers an interdisciplinary curriculum focusing on both information technology and business administration. In addition, the program is unique in that it merges traditional academic topics with leading-edge and current IT practices and technology. This program is offered under the written consent of the British Columbia Ministry of Advanced Education.

Germany
In Germany, the Bachelor of Science in Information Technology integrates a professional degree in information technology with a major in another country or culture and its language, enhancing professional training and career options. The course is three to five years in duration. Students spend two semesters of study at a university or other higher education institution in the country of their major. The information technology component provides a sound education in all aspects of computing and information technology for a career in the profession.

India
In India, a Bachelor of Science in Information Technology (BSc IT) is a 3-year undergraduate program. One can apply for a BSc IT after completing the HSC or an engineering diploma. An engineering degree in information technology is a 4-year academic program considered equivalent to Computer Science and Engineering, because basic engineering subjects and calculus are taught in the first year, and core computer science topics are taught in the succeeding years in both B.Tech IT and B.Tech CSE.

Malaysia
In Malaysia, the Information Technology course is studied for 6 or 7 semesters (3 or 3.5 years), with specialization in a certain field and one semester (6 months) of industrial training.

Netherlands
In the Netherlands, the Bachelor of Science in Information Technology degree is awarded after four years of study, with specialization in a certain field.

Namibia
In Namibia, Bachelor of Science in Information Technology degrees are awarded after three or four years, with specialization in areas such as business computing, system administration and networks, and software development. The Polytechnic of Namibia, the University of Namibia and other educational institutions in Namibia are key producers of graduates in this field.

Nepal
In Nepal, the Bachelor of Science in Computer Science and Information Technology (B.Sc. CSIT) is a four-year course of study. The Bachelor of Computer Science and Information Technology is provided by Tribhuvan University and the degree awarded is referred to as BScCSIT. Related programs by university:
1. Tribhuvan University
Bachelor of Science in Computer Science and Information Technology (BSc CSIT)
Bachelor in Information Technology (BIT)
Bachelor of Computer Application (BCA)
2. Purbanchal University
Bachelor in Computer Engineering (BE)
Bachelor of Computer Application (BCA)
Computer Application (Hons)
Bachelor in Information Technology (BIT)
3. Kathmandu University
Bachelor in Computer Engineering (BE)
4. Pokhara University
Bachelor in Computer Engineering (BE)
Engineering in Information Technology
Software Engineering
Bachelor of Computer Application (BCA)

New Zealand
In New Zealand, a Bachelor of Science in Information Technology (BS-IT) is a 3-year program. Massey University offers the B.Sc. IT as a conventional lecture-based face-to-face course. Otago Polytechnic also offers the B.Sc. IT as a conventional lecture-based face-to-face course.

Pakistan
In Pakistan, a Bachelor of Science in Information Technology (BS-IT) is a 4-year program.
Some universities instead name the degree Bachelor of Engineering in Information Technology (BE-IT). The BE-IT was offered at the Dr. A. Q. Khan Institute of Computer Sciences and Information Technology (KICSIT) from 2001 to 2013, after which it was converted into a BSIT (see www.kicsit.edu.pk). PUCIT is the only university ranked 'W' (the highest ranking) in BS (IT) by the HEC among Pakistani universities.

Philippines
In the Philippines, the BSIT program normally takes 4 years to complete. Schools on a trimester system take less time to complete the course. The CHED has set a total of 486 hours for internships in the program.

Portugal
In Portugal, the Bachelor of Science in Information Technology degree is awarded following a three-year course of study without specialization.

Sri Lanka
In Sri Lanka, the Bachelor of Science in Information Technology (B.Sc. IT) is either a four-year degree with a specialization, called a major, with honors, or a three-year degree without any specialization, called a general degree. The University of Moratuwa and the Sri Lanka Institute of Information Technology offer four-year Bachelor of Science in Information Technology degrees.

South Africa
In South Africa, the Bachelor of Science in Information Technology (B.Sc. IT) is a three-year degree. At the North West University (NWU) in Potchefstroom, North West Province, the programme is delivered by the School of Computer Science and Information Systems through the Unit of Open Distance Learning (UODL). Lectures, tutorials and learning activities are presented online, but exams are written in designated exam centers in South Africa and Namibia; therefore only South African and Namibian citizens can enroll for this degree at the NWU (distance.nwu.ac.za/BscIT). The University of Cape Town offers the B.Sc. IT as a conventional lecture-based face-to-face course with 5 fields of specialisation. The University of the Free State in Bloemfontein offers the B.Sc. IT as a conventional lecture-based face-to-face course, as well as a BCIS (Baccalaureus in Computer Information Systems). The University of Johannesburg Academy of Computer Science and Software Engineering (ACSSE) offers the B.Sc. IT as a 3-year degree and was the first university in the southern hemisphere to offer a BSc Hons (IT) degree accredited (since 2003) by BCS, The Chartered Institute for IT (the British Computer Society). The University of Pretoria offers a Baccalaureus in Information Technology (BIT) degree in a four-year programme, as well as two three-year degrees, the BSc (CS) and BSc IT.

Thailand
In Thailand, the Bachelor of Science in Information Technology (BS IT) is a four-year undergraduate degree program which is subject to accreditation by the Office of the Higher Education Commission (OHEC) and the Office for National Education Standards and Quality Assessment (ONESQA) of the Ministry of Higher Education, Science, Research and Innovation (MHESI). The first international BS IT program, using English as a medium of instruction (EMI), was established in 1990 at the Faculty of Science and Technology (renamed in 2013 to the Vincent Mary School of Science and Technology (VMS)), Assumption University of Thailand (AU). VMS updated the BS IT curriculum in 2019.

United States
In the United States, a B.S. in Information Technology is awarded after a four-year course of study.
Some degree programs are accredited by the Computing Accreditation Commission of the Accreditation Board for Engineering and Technology (ABET).

United Arab Emirates
In the UAE, Skyline University College offers a four-year Bachelor of Science in Information Technology in enterprise computing.

See also
Bachelor of Computing
Bachelor of Information Technology
Bachelor of Computer Science
Bachelor of Software Engineering
Bachelor of Computer Information Systems

References

Science in Information Technology
Computer science education
Information technology education
Information technology qualifications
9108450
https://en.wikipedia.org/wiki/List%20of%20motion%20and%20gesture%20file%20formats
List of motion and gesture file formats
The question of gesture and motion takes on more and more importance with the development of gesture controllers, haptic systems, and motion capture systems on the one hand, and with the need to allow virtual reality systems to inter-communicate through control data on the other. Motion and gesture file formats are widely used today in many applications that deal with motion and gesture signals. This is the case in domains like motion capture, character animation, gesture analysis, biomechanics, musical gesture interfaces, and virtual surgery. These formats are low-level formats, i.e. formats close to the signal produced by the capture system.

Existing formats that encode gesture and motion

BVA and BVH file formats
BVH stands for Biovision Hierarchical Data, which was developed by a motion capture company called Biovision. The BVA format (also developed by Biovision) is an older format which was the precursor to BVH. The BVH format is mainly used as a standard representation of movements in the animation of humanoid structures. It is currently one of the most popular motion data formats and has been widely adopted by the animation community (probably because of its simple specification).

MNM file format
This file format allows renaming the segments of a BVH file to match the convention used in Autodesk 3D Studio Max. The name defined by the user is associated with the predefined label for the biped segment, e.g. Humerus = L UpperArm. This file format also allows renaming the markers of a CSM file to match the convention used in Autodesk 3D Studio Max. A name defined by the user is associated with the predefined label expected by Character Studio, e.g. LeftShoulder = LSHO.

MBX file format
The MBX format is a binary hierarchical skeletal format that is exclusive to the Noitom Perception Neuron motion capture system. Files can be loaded into the Noitom Axis Studio software and then exported to BVH and other formats from that software.

MVNX format
The MVNX format is a human-readable, open, XML-based format for storing Xsens MVN motion capture data. The format contains the 3D positions and orientations of all segments captured with Xsens MVN. In addition, the format includes several other variables that can be exported, such as joint angles, segment velocity and free acceleration, center-of-mass trajectory, and calibrated sensor data of the individual motion trackers. MVNX data can also be imported into leading software programs including MATLAB and Excel.

ASK/SDL file format
The format is a variant of the BVH file format developed by Biovision. The ASK file (Alias Skeleton) only contains information concerning the skeleton and, as a result, does not contain any information about the channels or the movement. The offset coordinates are absolute, unlike in BVH, where they are relative. The SDL file associated with the ASK file contains the data of the movement, but it can also contain much other information concerning the scene beyond the samples of the movement itself.

AOA file format
Adaptive Optics is a company dedicated to the creation of hardware support for motion capture. This ASCII file format simply describes the sensors and their position at each sampling period.

ASF/AMC file formats
This format was developed by Acclaim, a video game company. Once it entered the public domain, it was used by Oxford Metrics (Vicon Motion Capture Systems). The Acclaim format is composed of two different files, one for the skeleton and the other one for the movement.
The separation between these two types was made because the same skeleton is often used for numerous distinct movements. The file containing the skeleton description is the ASF file (Acclaim Skeleton File), and the file containing the movement data is the AMC file (Acclaim Motion Capture data).

BRD file format
The format, developed by LambSoft, is uniquely used with the Ascension Technology "Flock of Birds" motion capture system. It allowed storing the data coming from a magnetic motion capture system.

GRC file format
The GRC file format is a file format for storing motion capture data from the Synertial mocap system. GRC includes raw data from inertial sensors (such as rotation, acceleration, and magnetic field strength), skeleton details, the absolute position of the skeleton root, and various metadata (notes, timecode, etc.). Because the raw data from this file are read by the Synertial SDK and the skeleton structure is recomputed each time it is needed, the file format is memory-efficient. GRC data are compatible with, and can be exported into, the BVH and FBX file formats using Synertial software tools.

HTR and GTR file formats
The HTR format (Hierarchical Translation Rotation) was developed as a native skeleton format for the Motion Analysis software. It was created as an alternative to the BVH format to make up for that format's main drawbacks. A variant of HTR exists, called the GTR format (Global Translation Rotation), which is the same format minus the structural information.

TAK file format
The TAK, pronounced "take", file format is used by the Motive software developed by OptiTrack. The file can contain information on:
Marker position and residual error
Skeleton information and 6DoF rigid body position and rotation
Force plate data
Audio
General information on the capture, like frame rate, calculation properties of markers, camera filters, synchronization method, and recording system date/time
SMPTE timecode
Recorded camera data

TRC file format
The TRC file format is another file format from Motion Analysis. It contains not only the raw data from the full-body motion capture system they developed but also the output data coming from their face tracker. The TRC file format, conversely to most others, is not skeleton-based.

CSM file format
The CSM format is an optical tracking format that is used by Character Studio (an animation and skinning plug-in for 3ds Max) for importing marker data.

V/VSK file format
The V file format is a binary motion data format developed by Vicon Motion Systems. This file is normally used in conjunction with a VSK file, also developed by Vicon Motion Systems. The VSK file contains the skeleton hierarchy. The V file can contain the following data: marker data, global segment translation and rotation data, and local rotation data (with root translation data).

C3D file format
The C3D file format is a public domain, binary file format developed in the mid-1980s at the National Institutes of Health in Bethesda, Maryland. It stores 3D coordinate information, analog data and associated information used in 3D motion data capture and subsequent analysis operations. At the time of its development, all 3D motion capture systems stored their data in multiple files, each with a different proprietary format, making the exchange of data between various biomechanics and gait analysis laboratories very difficult.
With the introduction and adoption of the C3D file format by all major 3D motion capture companies, all necessary 3D information, analog data and parameters describing the data can be seamlessly transferred between researchers and laboratories, regardless of the hardware or environment used to collect the data. The major features of the C3D file format are listed below:
The ability to store 3D positional and analog data in both processed and unprocessed form.
Storing information describing the physical design of the laboratory, such as EMG channels used, force plate positions, marker sets, etc.
Storing trial information relating to the circumstances of the test session, such as sample rates, filenames, dates, and EMG muscles recorded.
Storing subject information, e.g. ID and age at trial, with physical parameters such as weight, leg length, etc.
Storing calculated analysis results such as gait timing, cycle information and related information.
Extensibility - the C3D format provides the ability to store new information without making older data obsolete.
The public specification and description of the C3D format allows anyone to access data without depending on a manufacturer for information.

Prior to the introduction of the C3D file format, almost all biomechanics and gait analysis software was written for a specific 3D system manufacturer's file format. As a result, researchers and clinicians were restricted to either writing their own analysis software or using only the software provided with their 3D data collection system. The introduction of the C3D format resulted in the availability of a substantial body of third-party software and freed the research community from dependence on any individual 3D data system manufacturer. The C3D file format, conversely to most others, is not skeleton-based, and is binary.

GMS file format
The GMS (Gesture and Motion Signal) format is a low-level, binary, minimal, but generic format for storing gesture and motion signals in a flexible, organized, optimized way. The GMS format takes into account the minimal features a format carrying movement/gesture information needs: flexible dimensionality for the signals, versatile structuring, flexible types for the encoded variables, and spatial and temporal properties of gesture and motion signals. GMS received the support of the FP6 Network of Excellence IST-2002-002114 - "Enactive Interfaces". The GMS file format, conversely to most others, is not skeleton-based, and is binary.

HDF file format
A closed binary file format developed by House of Moves for use in their proprietary software called (at the time) Diva. This file format is essentially a dump of a Diva scene. It includes all translational marker data as well as all rotational bone data in the scene, and more.

FBX file format
The FBX proprietary file format (.fbx) has been owned by Autodesk since 2006. The Blender Foundation has published an unofficial specification for binary FBX.

PZ2 file format
The PZ2 format is used by the popular 3D figure software Poser and DAZ Studio. The format can encode both body and facial animation, and can be applied by dropping the file onto the character in the viewport. It is text-based, akin to XML in structure, and can be easily edited. There is also a face-only variant (no neck) called FC2. Motion capture software that can output PZ2 includes Zign Track, F-Clone, Kinect Capture, Brekel Face, and Faceshift via a free script.

See also
file format
motion capture

Computer file formats
Gesture recognition
98931
https://en.wikipedia.org/wiki/AIM%20alliance
AIM alliance
The AIM alliance, also known as the PowerPC alliance, was formed on October 2, 1991, between Apple, IBM, and Motorola. Its goal was to create an industry-wide open-standard computing platform based on the POWER instruction set architecture. It was intended to solve legacy problems, future-proof the industry, and compete with Microsoft's monopoly and the Wintel duopoly. The alliance yielded the launch of Taligent, Kaleida Labs, the PowerPC CPU family, the Common Hardware Reference Platform (CHRP) hardware platform standard, and Apple's Power Macintosh computer line.

History
Development
From the 1980s into the 1990s, the computer industry was moving from a model of just individual personal computers toward an interconnected world, where no single company could afford to be vertically isolated anymore. Infinite Loop says "most people at Apple knew the company would have to enter into ventures with some of its erstwhile enemies, license its technology, or get bought". Furthermore, Microsoft's monopoly and the Wintel duopoly threatened competition industrywide, and the Advanced Computing Environment (ACE) consortium was underway.

Phil Hester, a designer of the IBM RS/6000, convinced IBM's president Jack Kuehler of the necessity of a business alliance. Kuehler called Apple President Michael Spindler, who bought into the approach for a design that could challenge the Wintel-based PC. Apple CEO John Sculley was even more enthusiastic. On July 3, 1991, Apple and IBM signed a non-contractual letter of intent, proposing an alliance and outlining its long-term strategic technology goals. Its main goal was creating a single unifying open-standard computing platform for the whole industry, made of a new hardware design and a next-generation operating system. IBM intended to bring the Macintosh operating system into the enterprise, and Apple intended to become a prime customer for the new POWER hardware platform. Considering it to be critically poorly communicated and confusing to the outside world at this point, industry commentators nonetheless saw this partnership as an overall competitive force against Microsoft's monopoly and Intel's and Microsoft's duopoly.

IBM and Motorola would have 300 engineers to codevelop chips at a joint manufacturing facility in Austin, Texas. Motorola would sell the chips to Apple or anyone else. Between the three companies, more than 400 people had been involved to define a more unified corporate culture with less top-down executive decree. They collaborated as peers and future coworkers in creating the alliance and the basis of its ongoing future dialog, which promised to "change the landscape of computing in the 90s".

Launch
On October 2, 1991, the historic AIM alliance was officially formed with a contract between Apple CEO John Sculley, IBM Research and Development Chief Jack Kuehler, and IBM Vice President James Cannavino. Kuehler said "Together we announce the second decade of personal computing, and it begins today" and Sculley said this would "launch a renaissance in technological innovation", as they signed the foot-high stack of papers comprising the contract. The New York Times called it "an act that a year ago almost no one in the computer world would have imagined possible". It was so sweeping that it underwent antitrust review by the United States federal government.
In 1992, Apple and IBM created two new companies called Taligent and Kaleida Labs, as had been declared in the alliance contract, with the expectation that neither would launch any products until the mid-1990s. Since 1988, Apple had already been creating a next-generation operating system, codenamed "Pink", and Taligent Inc. was incorporated to bring Pink to market as the ultimate cross-platform object-oriented OS and application frameworks. Kaleida was to create an object-oriented, cross-platform multimedia scripting language which would enable developers to create entirely new kinds of applications that would harness the power of the platform. IBM provided affinity between its own Workplace OS and Taligent, replacing Taligent's microkernel with the IBM Microkernel and adopting Taligent's CommonPoint application framework into Workplace OS, OS/2, and AIX.

CISC microprocessors, including the mainstream Intel x86 products, were considered an evolutionary dead end; because RISC was seen as the future, the next few years were regarded as a period of great opportunity. The alliance's hardware is based on the PowerPC processors, the first of which, the PowerPC 601, is a single-chip version of IBM's POWER1 CPU. Both IBM and Motorola would manufacture PowerPC integrated circuits for this new platform. The computer architecture base is called "PReP" (PowerPC Reference Platform), later complemented with Open Firmware and renamed "CHRP" (Common Hardware Reference Platform). IBM used PReP and CHRP for the PCI version of IBM's RS/6000 platform, which was adapted from existing Micro Channel architecture models and changed only to support the new 60x bus style of the PowerPC.

In 1994, Apple delivered its first alliance-based hardware platform, the PowerPC-based Power Macintosh line, on schedule as predicted by the original alliance contract. Infinite Loop considered the PowerPC to be five years too late to the overall market, "no more than a welcome offering to Apple's own market base", and further hamstrung by the legacy architecture of System 7.

Downturn
In 1995, IT journalist Don Tennant asked Bill Gates to reflect upon "what trend or development over the past 20 years had really caught him by surprise". Gates responded with what Tennant described as biting, deadpan sarcasm: "Kaleida and Taligent had less impact than we expected." Tennant believed the explanation to be that "Microsoft's worst nightmare is a conjoined Apple and IBM. No other single change in the dynamics of the IT industry could possibly do as much to emasculate Windows."

Efforts by Motorola and IBM to popularize PReP and CHRP failed when Apple, IBM, and Taligent all failed to provide a single comprehensive reference operating system for the server and personal markets, mainly Taligent's OS or IBM's Workplace OS. Windows NT was the only OS with mainstream consumer recognition that had been ported to PowerPC, but there was virtually no market demand for it on this non-mainstream hardware. Although PowerPC was eventually supported by several Unix variants, Windows NT, and Workplace OS (in the form of OS/2), these operating systems generally ran just as well on commodity Intel-based hardware, so there was little reason to use the PReP systems. The BeBox, designed to run BeOS, uses some PReP hardware but is overall incompatible with the standard. Kaleida Labs closed in 1995. Taligent was absorbed into IBM in 1998. Some CHRP machines shipped in 1997 and 1998 without widespread reception.
Relations between Apple and Motorola further deteriorated in 1998 with the return of Steve Jobs to Apple and his contentious termination of Power Macintosh clone licensing. Reportedly, a heated telephone conversation between Jobs and Motorola CEO Christopher Galvin resulted in the long-favored Apple being demoted to "just another customer", mainly for PowerPC CPUs. In retaliation, Apple and IBM briefly expelled Motorola from the AIM alliance and forced Motorola to stop making PowerPC CPUs, leaving IBM to design and produce all future PowerPC chips. Motorola was reinstated into the alliance in 1999. Legacy Of the alliance's intended outcomes, the PowerPC is the clearest success. From 1994 to 2006, Apple used PowerPC chips in almost every Macintosh. PowerPC also has had success in the embedded market, and in video game consoles: GameCube, Wii, Wii U, Xbox 360, and PlayStation 3. After being reinstated into the AIM alliance, Motorola helped IBM design some laptop PowerPC chips, which IBM manufactured. In 2004, Motorola spun off its semiconductor business as Freescale Semiconductor and left the AIM alliance completely, leaving IBM and Apple in the alliance. Freescale continued to help IBM design PowerPC chips until Freescale was acquired and absorbed by NXP Semiconductors in 2015. Apple transitioned entirely to Intel CPUs in 2006, due to eventual disappointment with the direction and performance of PowerPC development as of the G5 model, especially in the fast-growing laptop market. This was seen as the end of the AIM alliance, as it left IBM as the sole user of PowerPC. Taligent was launched from the original AIM alliance, based originally on Apple's Pink operating system. From Taligent came the CommonPoint application framework and many global contributions to internationalization and compilers, in the form of Java Development Kit 1.1, VisualAge C++, and the International Components for Unicode open source project. Power.org was founded in 2004 by IBM and fifteen partners with the intent to develop, enable, and promote Power Architecture technology, such as PowerPC, POWER, and software applications. The OpenPOWER Foundation is a collaboration around Power ISA-based products initiated by IBM and announced as the "OpenPOWER Consortium" on August 6, 2013. It has more than 250 members. In 2019, IBM announced its open-sourcing of the Power ISA. See also Advanced Computing Environment Advanced RISC Computing References 1991 establishments in the United States Apple Inc. partnerships Former IBM subsidiaries Motorola Technology consortia
57679297
https://en.wikipedia.org/wiki/Enonic%20XP
Enonic XP
Enonic XP is a free and open-source web application platform and content management system (CMS) in one, based on Java and Elasticsearch. Developed by the Norwegian software company Enonic, the microservice web platform can be used to build progressive web applications, complex websites, or web-based APIs. Enonic XP uses an application framework for coding server logic with JavaScript, and has no need for SQL as it ships with an integrated content repository. The CMS is fully decoupled, meaning developers can create traditional websites and landing pages, or use XP in headless mode, that is, without the presentation layer, for loading editorial content onto any device or client. Enonic is used by major organizations in Norway, including the national postal service Norway Post, the insurance company Gjensidige, the national lottery Norsk Tipping, the Norwegian Labour and Welfare Administration, and all the top football clubs in the national football league for men, Eliteserien. Overview Enonic XP has embedded web content management, blending applications and websites into one experience. The content management system (CMS) functionality includes a visual drag and drop editor, a landing page editor, support for multi-site and multi-language, media and structured content, advanced image editing, a responsive user interface, permissions and roles management, revision and version control, and bulk publishing. Content and websites are managed through the "Content Studio," while integrations and applications can be directly installed via the "Applications" section in XP, where the platform finds apps approved in the official Enonic Market. There are no third-party databases in Enonic XP. Instead, the developers have built a distributed storage repository on top of the search engine Elasticsearch, so content does not have to be indexed separately. The system brings together capabilities from filesystems, NoSQL, document stores, and search in the storage technology, which automatically indexes everything put into the storage. Enonic XP supports deployment of server-side JavaScript and Java applications, using the framework PurpleJS, which includes code built by Enonic. PurpleJS melds Java and JavaScript, and is able to run lightweight JavaScript server applications without the complexity of the Node.js programming model. The open-source framework runs on top of a JVM (Java virtual machine), and allows developers to run the same code in the browser and on the server, thus enabling them to employ JavaScript while working with existing Java projects. While running on the Java virtual machine, Enonic XP can be deployed on most infrastructures. The dependency on a third-party application server to deploy code has been removed, as the platform is an application server by default. For instance, a developer can insert modules and code straight into the running system. JavaScript unifies all the technical elements, and Enonic XP features an MVC framework where everything on the back-end can be coded with server-side JavaScript. The Enonic platform can use any template engine. The most used one, Thymeleaf, lets users create a plain HTML5 document and use it as a view, so a designer can work on the HTML file while a developer makes it more functional and dynamic.
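To make the controller pattern concrete, here is a minimal sketch of an HTTP controller in this style. Enonic's controllers are plain server-side JavaScript files that export one handler per HTTP method; the TypeScript flavour, the parameter shape, and the greeting logic here are illustrative assumptions, not the exact platform API.

// A sketch of an Enonic XP-style HTTP controller: the exported "get"
// function handles GET requests and returns a response object. Field
// names follow the commonly documented pattern; treat them as assumptions.
export function get(req: { params: { name?: string } }) {
  const name = req.params.name ?? "world";
  return {
    status: 200,
    contentType: "text/html",
    body: "<h1>Hello, " + name + "</h1>",
  };
}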
Progressive web apps Another feature of Enonic XP is the possibility for developers to create progressive web apps (PWA). A PWA is a web application that is a regular web page or website, but can appear to the user like a mobile application. In early 2017 Enonic released "Office League", an open-source progressive web application built on the Enonic XP platform, making Enonic one of the first companies in Scandinavia to develop and release a production-ready PWA. Later in 2017 Enonic released a PWA starter kit, helping developers build scalable PWAs in Enonic XP. History Enonic AS was founded in 2000 by Morten Øien Eriksen and Thomas Sigdestad. The software company specialized in building services and solutions using Java, including a content management system known first as "Vertical Site" and then as "Enonic CMS". Being aware that they had application, database, and website teams working in separate silos toward the same goal, Enonic sought to combine the different elements into a single piece of software. The resulting application platform, Enonic XP, first released in 2015, includes a CMS as an optional surface layer. In March 2020, Enonic XP was ranked by SoftwareReviews as the "Leader" in Web Experience Management. The ranking is based on user reviews, and is featured in SoftwareReviews' 2020 Data Quadrant Report, a comprehensive evaluation and ranking of 18 leading Web Experience Management vendors. SoftwareReviews is a division of Info-Tech Research Group, a Canadian IT research and analyst firm established in 1997. Release history Enonic XP assumed the mantle from the previous content management system Enonic CMS, and thus began with "version 5.0.0." The following list only contains major releases. Development and support Enonic offers a user and developer community consisting of a forum, a support system with tickets, documentation, a codex, a learning and training center with certifications, and various community groups. Writing about the support system, Mike Johnston of CMS Critic notes that "enterprise customers obviously get access to a higher level of personalized support, where the Enonic support team can respond as fast as two hours." The support system is divided into three levels: silver, gold, and platinum, ranging from next-business-day support to 24/7 support. As Enonic XP is open-source, known vulnerabilities, bugs, and issues are listed on GitHub. See also List of content management systems References External links Official website 2015 software Content management systems Free content management systems Software forks Software using the GPL license Website management Free and open-source Android software Web applications
38979342
https://en.wikipedia.org/wiki/EACSL
EACSL
The European Association for Computer Science Logic (EACSL), founded 14 July 1992, is an international professional non-profit organization representing the interests of its members and promoting computer science logic in the areas of scientific research and education. It supports both basic and application-oriented research to advance the connections between basic research and industrial applications. The current president is Prof. Thomas Schwentick (Technical University of Dortmund, Germany). Each year, the EACSL organizes the international conference Computer Science Logic (CSL) and publishes the associated proceedings; a complete list of past CSL conferences is available from DBLP, the computer science bibliography. It also supports several workshops and summer schools and sponsors the Ackermann Award, the EACSL Outstanding Dissertation Award for Logic in Computer Science. The annual general meeting of members takes place during the CSL conference. References External links Computer science organizations
5094367
https://en.wikipedia.org/wiki/European%20Conference%20on%20Object-Oriented%20Programming
European Conference on Object-Oriented Programming
The European Conference on Object-Oriented Programming (ECOOP) is an annual conference covering topics on object-oriented programming systems, languages, and applications. Like other conferences, ECOOP offers various tracks and many simultaneous sessions, and thus has a different meaning to different people. The first ECOOP was held in Paris, France, in 1987. It operates under the auspices of the Association Internationale pour les Technologies Objets, a non-profit organization located in Germany. ECOOP's venue changes every year, and the categories of its program vary. Historically ECOOP has combined the presentation of academic papers with comparatively practical experience reports, panels, workshops, and tutorials. ECOOP helped object-oriented programming develop in Europe into what is now mainstream programming, and helped incubate a number of related disciplines, including design patterns, refactoring, aspect-oriented programming, and agile software development. The winners of the annual AITO Dahl-Nygaard Prize are offered the opportunity to give a keynote presentation at ECOOP. The sister conference of ECOOP in North America is OOPSLA. See also List of computer science conferences List of computer science conference acronyms List of publications in computer science Outline of computer science External links Computer science conferences Programming languages conferences
41196235
https://en.wikipedia.org/wiki/Ira%20P.%20Rothken
Ira P. Rothken
Ira P. Rothken is an American high technology attorney and computer scientist who has handled numerous cases of first impression involving the internet and new technologies. Education and Early Work Rothken is a graduate of Brandeis University, with a bachelor's degree in science, and of Golden Gate University School of Law, where he was Editor in Chief of the Intellectual Property Law Review. Rothken, a former medical researcher and computer scientist, founded the Northern California-based Rothken Law Firm in 1993; since the beginning of the commercial internet in 1995, the firm has evolved to emphasize complex high-technology litigation. Career Background According to a July 31, 2007 CNET News.com article profiling Ira P. Rothken's legal career: "Tech start-ups sued by media conglomerates for copyright infringement typically call on Rothken, a medical researcher turned lawyer. He's made a name for himself by bucking entertainment empires and by backing long-shot copyright cases, such as those involving RecordTV, ReplayTV and MP3Board.com. His efforts have won him praise from the Electronic Frontier Foundation (EFF), the advocacy group that has become synonymous with user rights on the Web." Rothken has been described as a litigator who is both creative and tough. Internet Gambling Cases One of Rothken's earliest cases alleged that credit card companies were involved in providing illegal gambling loans to users of internet gambling sites. In this case, Rothken pointed out, "We want the court to say Visa and MasterCard can't make money on illegal transactions...." The case resulted in the credit card companies settling and providing, amongst other things, a notice to their card holders that their cards may not be used to fund online gambling. As a result of the case, Ed Dixon, a spokesman for MasterCard, admitted that the company introduced new rules related to Internet gambling. Later, Visa affiliates agreed to clear the credit rating of Rothken's client and issue warnings to consumers. Internet Search Engine Cases Several of Rothken's cases have involved defending various search engines. Rothken's reason is simple: "We all recognize that the greater good is to allow for robust search...Search is just too important to society. Regardless of the percentage of [illegal] files, even if it's a large percentage of those files that ultimately will lead to downstream content that's unauthorized, search of that content should still be allowed... When you look at the total picture... do we believe that search engines for .torrent files should be banned altogether? Most people would say no." Cases of First Impression Many of Rothken's high-technology legal actions have included issues of first impression, where original issues of law are presented for decision by the court. In these cases, there is no precedent for the legal issue at hand in a specific court. Often Rothken's cases of first impression involve complex technology issues. Examples of first impression cases include: Rothken is the lead global defense counsel for Kim Dotcom and his company Megaupload in what has been called the largest criminal copyright case in US history. Rothken represented consumers as lead counsel in a suit brought on behalf of consumers against Microsoft, Symantec, Adobe, and others, related to their software "EULA" and shrinkwrap policies, in Baker v. Microsoft et al.
These shrink-wrap licenses could not be read by consumers prior to unwrapping the software, at which point major retailers would refuse to allow return of the software. This case resulted in a settlement that led to policy changes, created a more transparent market for software companies to compete on licensing terms, and benefitted consumers nationwide. Rothken was co-lead class counsel representing owners of Treo 600 and Treo 650 smartphones made by Palm, in a nationwide federal court class action lawsuit consolidated in the Northern District of California titled in re Palm Treo, claiming the devices were inherently defective, resulting in a multimillion-dollar nationwide class recovery. Rothken was co-lead class counsel representing owners of T-Mobile Sidekick smartphones, in a nationwide federal court class action lawsuit consolidated in the Northern District of California titled in re Sidekick, claiming a massive corruption of data integrity and interruption of consumer data access, resulting in a multimillion-dollar nationwide class recovery. Rothken was co-lead class counsel and liaison counsel to the Court representing owners of Apple iPhone 4 smartphones, in a suit dubbed "Antenna-gate" because of comments made by Steve Jobs. Rothken claimed a defective antenna design led to degraded connectivity and dropped calls, resulting in a class remedy of 15 dollars per iPhone 4 claimant. Rothken was co-lead settlement class counsel in a nationwide consumer privacy lawsuit brought against DoubleClick, for allegedly intruding on web user privacy, consolidated in the Southern District of New York and titled in re DoubleClick Privacy Litigation, resulting in a nationwide class settlement. Rothken represented consumers as lead counsel in a suit against a music CD recording company and its digital rights management scheme, in connection with violations of their privacy and first sale doctrine rights, in DeLise v. Sunncomm et al. Rothken was involved in the first case where a court ordered that defendants had to turn over the contents of random access memory. "Lawyers will be flinging around preservation letters, coming up with all kinds of creative ways to tell the other to preserve RAM", Rothken opined. "That would cause huge economic implications. If it's not changed, it can create e-discovery chaos". Rothken was the lead defense counsel in a case of first impression brought by the major motion picture studios against an online interactive services provider, which argued that its Internet-centric VCR, created out of software code, is just as lawful as a physical VCR and that it should not have to pay any copyright damages, in MGM et al. v. RecordTV.com, Inc. Rothken was lead plaintiff's counsel in a lawsuit to enjoin the RIAA's Clean Slate "Amnesty" program, as it allegedly did not provide a full release of copyright claims against consumers who were required, as a condition of entering the program, to make admissions of copyright infringement; the suit led to a negotiated settlement that benefitted consumers nationwide. Rothken was also lead defense counsel defending a large search engine in Federal Court against claims by the RIAA for secondary copyright infringement arising out of hyperlinks to mp3 music files, in Arista Records et al. v. MP3Board.com. Rothken defended bit torrent site isoHunt in a copyright case of first impression regarding the reach of secondary civil copyright infringement and the DMCA safe harbors.
"One person's 'worst search engine' is another person's 'robust search engine'", said isoHunt's attorney, Rothken. "Should we as a society not allow torrent search engines because some groups like the major studios don't like the state of the Internet as it relates to .torrent files?" he asked. Rothken served as lead counsel along with the Electronic Frontier Foundation, defending the rights of consumers including Craig Newmark (founder of Craigslist), under the Copyright fair use doctrine; to use their net-connected ReplayTV devices (digital video recorders) to space-shift and commercial-skip television programs, in Newmark v. Turner Broadcasting et al. Rothken was a co-lead defense counsel in the trial court and Seventh Circuit Court of Appeals in Stayart v. Yahoo et al.; which found that the plaintiff did not have a protectable commercial interest in her name under certain sections of the Lanham Act; and thus affirmed the trial court's dismissal of such claims against a major search engine and social networking site. Megaupload One of Rothken's best known cases involves the defense of former cloud storage provider Megaupload and its founder Kim Dotcom. Rothken is the lead global defense counsel for Kim Dotcom and Megaupload in what has been called the largest criminal copyright case in US history. Rothken had represented Megaupload and Kim Dotcom in other matters, such as the case alleging Universal Music did an improper takedown notice of a well known Megaupload video from YouTube in which several pop stars gave a performance praising Megaupload. On January 20, 2012, Megaupload and Kim Dotcom were raided by a force involving dozens of members of the New Zealand elite Special Tactics Group and Armed Offenders Squad under the direction of the FBI. In articles and interviews, Rothken pointed out how this raid has caused millions of innocent users to lose access to their personal files, such as Microsoft Word and Excel files, stored on the Megaupload servers, as this raid destroyed the company in an instant. At the SF MusicTech Summit, Julie Samuels, staff attorney for Electronic Frontier Foundation, and Rothken discussed how the government, after seizing the data, then quickly sent a letter to Carpathia, an ISP that provides server space, strongly encouraging them to delete the data of Megaupload and all the data of their customers that was stored on Megaupload. Samuels and Rothken discussed many aspects of the effects of the government actions on innocent parties such as the server company, and customers who relied on Megaupload, as backup storage, who were suddenly denied all access to their personal files. They discuss how the governments actions in this case were incredibly more aggressive than in other cases involving seizures, such as of online gambling sites where the casinos were still allowed to operate. Because of the unnoticed shutdown that resulted from the Megaupload seizure, "lots of protected speech is now offline," said Rothken, adding that the process in this case, where the material was taken offline without any chance to avoid harm to innocent parties, shows exactly why the adversarial process must always be used to give courts a chance to narrowly tailor rulings as to what the government should be allowed to take offline while protecting the interests of innocent users. 
Samuels pointed out that few defendants have the resources to fight a case of this type against the government, and Rothken mentioned that, because the government froze the defendants' assets, most of the 20 lawyers working on the case have received very little, if any, payment for their work. Samuels asserted that in many similar cases the government has finally admitted, as much as 12 to 18 months after taking sites offline, that it did not have sufficient evidence to support the shutdowns. When the operation was over, the U.S. Department of Justice issued a press release: "This action is among the largest criminal copyright cases ever brought by the United States and directly targets the misuse of a public content storage and distribution site to commit and facilitate intellectual property crime." Rothken undertook the representation of Megaupload and Kim Dotcom on the day of the raid and assembled and coordinated the global legal team. Rothken made a number of appearances in the United States District Court for the Eastern District of Virginia in an effort, along with the EFF, to negotiate the preservation and return of user files, and while the Court initially entertained such arguments and ordered briefing, the Judge has yet to rule. In addition, Rothken and his co-counsel, William Burck of Quinn Emanuel, filed motions to dismiss Megaupload from the case due to the failure of the United States to serve the foreign corporate entity. Rothken and his team cited the rule that required the US to serve a foreign corporation at an office in the United States. Megaupload Ltd., a Hong Kong corporation, did not have a corporate office in the United States and therefore could never be served. The Judge has yet to make a final ruling on the motion to dismiss, leaving Megaupload in a state of limbo where all of its assets are frozen by the US and it is neither dismissed nor served. Rothken had a succinct description of the US government's case against Megaupload and Kim Dotcom: "wrong on the facts and wrong on the law." In Rothken's words, the government acted over-aggressively and overbroadly by taking down one of the world's largest cloud storage services "without any notice or chance for Megaupload to be heard in a court of law." The result ignores substantial non-infringing uses of cloud storage and is both "offensive to the rights of Megaupload and to the rights of millions of consumers worldwide" who stored personal data with the service. In Rothken's view, attempting to hold a cloud storage provider criminally responsible for the acts of its users amounts to "secondary" criminal copyright liability, and there is no such statutory claim under US law. Secondary copyright liability is judge-made law in "civil" cases, such as Grokster, and such theories are not "criminal" in nature. Instead, the government's willingness to pursue the case as an international racketeering charge meant "essentially only sticking up for one side of the copyright vs. technology debate." The result, Rothken says, is the "terrible chilling effect it's having on Internet innovators" who build cloud storage components into their businesses. Rothken was unhappy about the police raid on Kim Dotcom's family: Using "James Bond tactics with helicopters and weaponry, and breaking into homes over what is apparently a philosophical debate over the balance between copyright protection and the freedom to innovate, are heavy-handed tactics, are over-aggressive, and have a detrimental effect on society as a whole," Rothken said.
In addition, the raid was a reminder that bills like the Stop Online Piracy Act "ought not to ever be passed, because these tactics [the helicopters, etc.] are so offensive that if you take the shackles off of government, it may lead to more abuse, more aggression." Rothken also suggested that the timing of the raid was suspicious; "over a two-year period, they happened to pick the one week where SOPA started going south." Rothken and his global legal team were able to show that the U.S. government recruited the New Zealand authorities to engage in various illegal activities in New Zealand. Although some of the facts are still being uncovered, it is undisputed that the New Zealand authorities illegally spied on Kim Dotcom prior to his arrest, and continued to spy on him illegally for an additional time after the arrest. Issues were raised with how broadly the original search warrant was written. According to Rothken, the warrant was very broad, and could have included family photos. It provided little guidance on what to actually gather, leading the court to determine that the search warrant was overly broad. The U.S. then quickly removed the seized information from New Zealand, even before the court could make this ruling. "They just went and grabbed everything. It's like, literally going into someone's home with a search warrant and just clearing the whole place out, which happened," said Auckland defense lawyer Gary Gotlieb. The Court found that the raid on Dotcom's home in Coatesville was illegal, based on invalid search warrants, and that the police illegally seized Kim Dotcom's computer systems and data. The Court also found that the United States violated the law when it removed Dotcom's data from New Zealand without authorization. While the ultimate repercussions of those illegal activities are still unclear, Rothken has written that they lend no credibility to the U.S. prosecution's case against Megaupload and Kim Dotcom. In an interview with Larry Williams, Rothken made the point that "how the government conducts themselves in trying to prosecute someone matters." Because of all the problems already found by the courts, Rothken called for a global dismissal of the case. Rothken laid out a "constellation of facts" that supports his conclusion that this case is related to SOPA failing in the U.S. Congress, and that "this appears to be some sort of a political solution to gain the support of Hollywood". When asked why his client doesn't simply come to the U.S. and answer the charges, Rothken claimed these are the very factors that would deny his client a fair trial in the United States. Rothken declined to say whether his legal team has uncovered the full details of the illegal spying. He did tell the Huffington Post, in a March 2013 interview, his views about the nature of illegal US spying on the internet and that the team is trying to learn more: "Based upon the public record and cases in the United States, and an understanding about how Echelon works, which is this global spying arrangement between United States, Canada, New Zealand, Australia and Great Britain, we came to the preliminary view that in essence New Zealand was working with the United States to basically grab everything," he said. "Not just against Kim Dotcom, but basically grabbing all email in relative real time and storing it so that one day if they need to they could datamine it."
"As part of our request to the [NZ] court," Rothken said, "we asked for discovery that was tailored not only to protecting Kim Dotcom's rights but the rights of all residents of New Zealand, and we've asked for the full scope of all the data they've obtained." E-Discovery Work Rothken has been involved in handling cutting edge issues in electronic discovery in a legal think tank. Rothken as a member of the cutting edge Sedona Conference® Working Group 1, co-edited a leading Commentary on the issues of preserving, managing, and identifying not reasonably accessible electronically stored information or "NRA ESI". The result is a five-step framework for analysis and six Guidelines for making reasonable, good-faith assessments where no "bright line" rules exist. Rothken's activities with the Sedona Conference included speaking at Conferences and Seminars on e-discovery issues. Rothken worked with Judges and technology lawyers to evolve how e-discovery issues are handled in Courts nationwide. E-Commerce & Interactive Entertainment Rothken has been involved in advising on e-commerce legal strategies since the inception of the commercial internet in the mid-1990s. Rothken has represented some of the most successful web sites in the world on a huge range of matters from startup issues to risk reduction strategies to development of early affiliate programs to e-commerce policies and agreements. In many instances he was called upon to handle issues where there was no clear precedent and thus had to innovate a solution. Examples of technology companies Rothken has helped in the startup phase include FriendFinder (Social Networking), Pandemic Studios (which he negotiated the spin off from Activision and started the company), ArenaNet (The makers of Guild Wars in which he helped obtain the seed funding and started the company along with persons formally with Blizzard), Nihilistic (in which he negotiated multiple game development deals and started the company with former LucasArts employees), and Telltale (in which he started the company with former LucasArts employees, negotiated numerous development deals, and helped obtain seed funding). Rothken assisted developers and content creators in negotiating agreements with some of the most valuable intellectual property franchises in the world including for example, Star Wars, The Simpsons, and CSI. Rothken assisted in the negotiations of the sale of the FriendFinder family of websites to a Penthouse controlled entity for a half billion dollars in 2007. Rothken was involved in the global roll out of cloud storage provider Mega and was introduced on stage by Kim Dotcom in the January 20th 2013 New Zealand press conference where he answered questions regarding the service. Rothken has appeared as a guest legal expert on television and radio including CNN (internet privacy), KQED radio (computer keyboard injuries), FOX (internet gambling), NBC (internet copyright), CBS (internet privacy), CNET radio (internet copyright), KTVU Silicon Valley Business Report (software license agreements), Bloomberg (copyright Litigation), and Court TV (internet gambling issues and copyright litigation). Notes References Sandoval, Greg (31 July 2007) Techfirm Website (24 November 2013). Rothken Law Firm Linked In for Ira Rothken (24 November 2013). Ira Rothken - LinkedIn External links Lawyers from New York City California lawyers Living people Golden Gate University School of Law alumni Year of birth missing (living people)
63641239
https://en.wikipedia.org/wiki/David%20C.%20Parkes
David C. Parkes
David C. Parkes (born 1973) is a British-American computer scientist, conducting research at the interface between computer science and economics, with a focus on multi-agent systems, artificial intelligence, game theory, and market design. He is the George F. Colony Professor of Computer Science and Co-Faculty Director of the Harvard Data Science Initiative. From 2013 to 2017, he was Area Dean for Computer Science. Parkes is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and the Association for Computing Machinery (ACM). Education and academic career Parkes, born in 1973 in Sidcup, Kent, attended Holmes Chapel Comprehensive in Cheshire and then Lincoln College, Oxford, for an M.Eng. degree in Engineering and Computer Science. Having gained the Thouron Award to the University of Pennsylvania, he completed a Ph.D. in Computer and Information Science in 2001. Parkes worked as a research intern at the Xerox Palo Alto Research Center (PARC) in the summer of 1997 and at the IBM T.J. Watson Research Center in the summer of 2000. In the spring of 2001 Parkes was a lecturer of Operations and Information Management at the Wharton School, University of Pennsylvania. He became an assistant professor of computer science at Harvard in 2001. He was appointed John L. Loeb Associate Professor of Natural Sciences in 2005, with tenure as a Gordon McKay Professor of Computer Science in 2008. He was later appointed the George F. Colony Professor of Computer Science. From September 2008 to January 2009 Parkes was a visiting professor of computer science at the École Polytechnique Fédérale de Lausanne, Switzerland. For the Lent and Easter terms of 2012 he was a distinguished visiting scholar at Christ's College, Cambridge. He was appointed Harvard Area Dean for Computer Science (2013–17), which involved planning for the expansion of Engineering and Applied Sciences into Allston. He has been co-director of the Harvard Data Science Initiative since 2017 and is currently Co-Chair of the Faculty of Arts and Sciences (FAS) Data Science Masters and Co-Chair of the Harvard Business Analytics Program. Research work Parkes founded the EconCS research group within the Harvard School of Engineering and Applied Sciences. Known for his work on incentive engineering for computational systems, his early research contributed to the design of combinatorial auctions, procedures for selling complex packages of goods. He has worked on decentralized mechanism design as well as mechanism design in dynamic environments, where resources, participants, and information local to participants vary over time to embrace real-world uncertainty. He served as technical advisor to CombineNet, Inc. (Pittsburgh, PA), 2001–2010, and scientific advisor to Nanigans, Inc. (Boston, MA), 2011–2017, and since 2014 has served as acting chief scientist at Nift Networks, Inc. (Boston, MA). He has been a member of the scientific advisory committee of the Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands, since 2019. Parkes has been a council member for the Computing Community Consortium of the Computing Research Association since 2018. He chaired the ACM Special Interest Group on Electronic Commerce between 2011 and 2015. He is also currently associate editor of ACM Transactions on Economics and Computation, since 2011; INFORMS Journal on Computing, since 2009; and Autonomous Agents and Multi-Agent Systems, since 2007.
Awards Association for Computing Machinery (ACM) Fellow, 2018 ACM SIGAI Autonomous Agents Research Award, 2017 Association for the Advancement of Artificial Intelligence (AAAI) Fellow, 2014 Alfred P. Sloan Research Fellowship, 2005–2007 Thouron Award, 1995 References External links Living people Sloan Fellows Alumni of Lincoln College, Oxford University of Pennsylvania alumni People from Sidcup British emigrants to the United States John A. Paulson School of Engineering and Applied Sciences faculty 1973 births Academic journal editors Fellows of the Association for the Advancement of Artificial Intelligence American university and college faculty deans American computer scientists Fellows of the Association for Computing Machinery British computer scientists
15325980
https://en.wikipedia.org/wiki/Mouse%20button
Mouse button
A mouse button is an electric switch on a computer mouse which can be pressed (“clicked”) to select or interact with an element of a graphical user interface. Mouse buttons are most commonly implemented as miniature snap-action switches (micro switches). The three-button scrollmouse has become the most commonly available design. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the pointer currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software. Design In contrast to its motion-tracking mechanism, the mouse's buttons have changed little over the years, varying mostly in shape, number, and placement. A mouse click is the action of pressing (i.e. 'clicking', an onomatopoeia) a button to trigger an action, usually in the context of a graphical user interface (GUI). “Clicking” an onscreen button is accomplished by pressing on the real mouse button while the pointer is placed over the onscreen button's icon. The clicking noise is due to the specific switch technology used nearly universally in computer mice: a subminiature precision snap-action switch, the first of which were the Honeywell MICRO SWITCH products. Operation Double clicking refers to clicking (and, naturally, releasing) a button (often the primary one, usually the left button) twice. Software recognizes both clicks, and if the second occurs within a short time, the action is recognized as a double click. If the second click is made after the time expires, it is considered to be a new, single click. Most modern operating systems and mouse drivers allow a user to change the speed of a double click, along with an easy way to test the setting. Some software recognizes three or more clicks, such as progressively selecting a word, sentence, or paragraph in a word processor text page as more clicks are given in a sequence. With less abstracted software, a mouse button's current state (“mouse up” and “mouse down”) is monitored, allowing for modal operations such as drag and drop.
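The timing rule is simple enough to sketch. In the fragment below, the threshold constant and function names are illustrative, not any particular toolkit's API; real GUI toolkits also defer acting on the first click until the interval has expired, which is why a long double-click interval can make single clicks feel sluggish.

// Classifying presses of the primary button as single or double clicks.
// The 500 ms threshold stands in for the user-adjustable setting.
const DOUBLE_CLICK_MS = 500;
let lastClickAt = -Infinity;

function classifyClick(nowMs: number): "single" | "double" {
  const isDouble = nowMs - lastClickAt <= DOUBLE_CLICK_MS;
  // After a double click, reset the timer so a third press starts a new
  // single click instead of chaining into another double.
  lastClickAt = isDouble ? -Infinity : nowMs;
  return isDouble ? "double" : "single";
}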
Number of buttons Douglas Engelbart's first mouse had a single button; Xerox PARC soon designed a three-button model, but reduced the count to two for Xerox products. Apple decided on one button for their GUI environments on commercial release in 1983, while most other PC environments standardized on two, and most professional workstation environments used three. Aside from such OEM-bundled mice, usually having between one and three buttons, many aftermarket mice have always had five or more, with varying amounts of additional software included to support them. This state of affairs continued until the late 1990s, when growing support for mice with a scroll wheel after the 1996 introduction of Microsoft's IntelliMouse incidentally made three-button pointing devices ubiquitous on OEM hardware. The one major holdout, Apple, finally went multi-button in 2005 with their Mighty Mouse, though all Apple laptops would continue to use one-button trackpads until their first buttonless trackpad in 2008. "My friend Marvin Minsky tells me there's great controversy in the artificial intelligence community over how many buttons a mouse should have", Jerry Pournelle wrote in 1983. In the matter of the number of buttons, Engelbart favored the view “as many as possible.” The prototype that popularized the idea of three buttons as standard had that number only because “we could not find anywhere to fit any more switches.” Those favoring single-button mice argue that a single button is simpler for novice users to understand, and for developers to support. In addition, as a lowest-common-denominator option, it offers both a path of gradual advancement in user sophistication for unfamiliar applications, and a fallback for diverse or malfunctioning hardware. Those favoring multiple-button mice argue that support for a single-button mouse often requires clumsy workarounds in interfaces where a given object may have more than one appropriate action. Several common workarounds exist, and some are specified by the Apple Human Interface Guidelines. One workaround was the double click, first used on the Lisa, to allow both the “select” and “open” operations to be performed with a single button. Another workaround has the user hold down one or more keys on the keyboard before pressing the mouse button (typically control on a Macintosh for contextual menus). This has the disadvantage that it requires both of the user's hands to be engaged. It also requires that the user perform actions on completely separate devices in concert; that is, holding a key on the keyboard while pressing a button on the mouse. This can be a difficult task for a disabled user, although it can be remedied by allowing keys to stick so that they do not need to be held down. Another workaround involves the press-and-hold technique: the user presses and holds the single button, and after a certain period, software perceives the button press not as a single click but as a separate action. This has two drawbacks: first, a slow user may press-and-hold inadvertently. Second, the user must wait for the software to detect the click as a press-and-hold; otherwise the system might interpret the button-depression as a single click. Furthermore, the remedies for these two drawbacks conflict with each other: the longer the lag time, the more the user must wait; and the shorter the lag time, the more likely it becomes that some user will accidentally press-and-hold when meaning to click. Studies have found all of the above workarounds less usable than additional mouse buttons for experienced users. A workaround for users of two-button mice in environments designed for three buttons is mouse chording: simulating a tertiary click by pressing both buttons simultaneously. Additional buttons Aftermarket manufacturers have long built mice with five or more buttons. Depending on the user's preferences and software environment, the extra buttons may allow forward and backward web navigation, scrolling through a browser's history, or other functions, including mouse-related functions like quick-changing the mouse's resolution/sensitivity. As with similar features in keyboards, however, not all software supports these functions. The additional buttons become especially useful in computer gaming, where quick and easy access to a wide variety of functions (such as macros and DPI changes) can give a player an advantage. Because software can map mouse buttons to virtually any function, keystroke, application, or switch, extra buttons can make working with such a mouse more efficient and easier, as the sketch below illustrates.
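The sketch dispatches extra button numbers to arbitrary actions; the button numbers and the chosen actions are assumptions for the example, not any particular driver's configuration format.

// Remapping extra mouse buttons to actions, as described above.
// Buttons 3 and 4 are the "back"/"forward" side buttons in browsers.
const buttonActions: Record<number, () => void> = {
  3: () => history.back(),
  4: () => history.forward(),
  5: () => console.log("macro: cycle pointer sensitivity"),
};

function onExtraButton(button: number): void {
  const action = buttonActions[button];
  if (action) action(); // unmapped buttons are simply ignored
}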
Scroll wheel Scrollmice almost always mount their scroll wheels on an internal spring-loaded frame and switch, so that simply pushing the wheel down makes it work as an extra button; the wheel detents present in most scrollmice make it easy to click the wheel without accidentally spinning it. Because the wheel can be both rotated and clicked, most mice today effectively have three buttons. In web browsers, middle-clicking a hyperlink opens it in a new tab, and middle-clicking a tab itself usually closes it. Some mice have scroll wheels that can be tilted sideways for sideways scrolling. Omnidirectional scrolling can be performed in various document viewers, including web browsers and PDF readers, by middle-clicking and moving the pointer in any direction. This can be done by holding the button and scrolling until it is released, or by clicking briefly and scrolling until clicking once more (with any mouse button) or pressing the Esc key. Some applications such as "Xreader" simulate a drag-to-scroll gesture as used by touch screen devices such as smartphones and tablet computers. In Linux, pressing the left and right mouse buttons simultaneously simulates a middle click, and middle-clicking into a text area pastes the primary selection (the most recently highlighted text) at the mouse pointer's location (not at the blinking text cursor's location). Text editors including Kate and Xed allow switching between open tabs by scrolling while the cursor points at the tab bar. Software environment use The Macintosh user interface, by design, has always made and still makes all functions available with a single-button mouse. Apple's Human Interface Guidelines still specify that other developers need to make all functions available with a single-button mouse as well. Various functions commonly done with additional buttons on other platforms were, when implemented on the Mac by most developers, instead done in conjunction with modifier keys. For instance, contextual menus were most often invoked by “Control Key-click,” a behavior later explicitly adopted by Apple in OS 8's Contextual Menu Manager. While there has always been a Macintosh aftermarket for mice and other pointing devices with two, three, or more buttons, and extensive configurable support (usually through keyboard emulation) to complement such devices in many major software packages on the platform, it wasn't until Mac OS X shipped that support for multi-button mice was built in. X Window System applications, which Mac OS X can also run, have been developed with the use of two- or three-button mice in mind. While historically most PC mice provided two buttons, only the primary button was standardized in use for MS-DOS and versions of Windows through 3.1x; support and functionality for additional buttons was application-specific. However, in 1992, Borland released Quattro Pro for Windows (QPW), which used the right (or secondary) mouse button to bring up a context menu for the screen object clicked (an innovation previously used on the Xerox Alto, but new to most users). Borland actively promoted the feature, advertising QPW as “The right choice,” and the innovation was widely hailed as intuitive and simple. Other applications quickly followed suit, and the “right-click for properties” gesture was cemented as standard Windows UI behavior after it was implemented throughout Windows 95.
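In the browser's own event model, the pressed button arrives as a number, and the secondary button additionally fires a contextmenu event that an application can intercept, which is how web applications implement their own right-click menus. A minimal sketch (the element id is a placeholder):

// Distinguishing buttons and overriding the default right-click menu.
const target = document.getElementById("target")!;

target.addEventListener("mousedown", (e: MouseEvent) => {
  // e.button: 0 = primary, 1 = middle/wheel, 2 = secondary
  console.log("button " + e.button + " pressed");
});

target.addEventListener("contextmenu", (e: MouseEvent) => {
  e.preventDefault(); // suppress the built-in menu
  // ...open an application-specific context menu here
});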
Most machines running Unix or a Unix-like operating system run the X Window System, which almost always encourages a three-button mouse. X numbers the buttons by convention. This allows user instructions to apply to mice or pointing devices that do not use conventional button placement. For example, a left-handed user may reverse the buttons, usually with a software setting. With non-conventional button placement, user directions that say “left mouse button” or “right mouse button” are confusing. The ground-breaking Xerox PARC Alto and Dorado computers from the mid-1970s used three-button mice, and each button was assigned a color. Red was used for the left (or primary) button, yellow for the middle (secondary), and blue for the right (meta or tertiary). This naming convention lives on in some Smalltalk environments, such as Squeak, and can be less confusing than the right, middle, and left designations. Acorn's RISC OS based computers necessarily use all three mouse buttons throughout their WIMP-based GUI. RISC OS refers to the three buttons (from left to right) as Select, Menu and Adjust. Select functions in the same way as the “Primary” mouse button in other operating systems. Menu brings up a context-sensitive menu appropriate for the position of the pointer, and this often provides the only means of activating this menu. This menu in most applications equates to the “Application Menu” found at the top of the screen in Mac OS, and underneath the window title under Microsoft Windows. Adjust serves for selecting multiple items in the “Filer” desktop, and for altering parameters of objects within applications, although its exact function usually depends on the programmer. References Computer mice History of human–computer interaction Video game control methods American inventions
476148
https://en.wikipedia.org/wiki/Apple%20Developer
Apple Developer
Apple Developer (formerly Apple Developer Connection) is Apple Inc.'s website for software development tools, application programming interfaces (APIs), and technical resources. It contains resources to help software developers write software for the macOS, iOS, iPadOS, watchOS, and tvOS platforms. The applications are created in Xcode, or sometimes using other supported third-party programs. The apps can then be submitted to App Store Connect (formerly iTunes Connect), another of Apple's websites, for approval by the internal review team. Once approved, they can be distributed publicly via the respective app stores, i.e. the App Store (iOS) for iOS and iPadOS apps, the iMessage app store for Messages apps and sticker pack apps, the App Store (tvOS) for Apple TV apps, the watchOS app store for Apple Watch apps with watchOS 6 and later, and the App Store (iOS) for earlier versions of watchOS. macOS apps are a notable exception, as they can be distributed either via Apple's Mac App Store or independently on the World Wide Web. Software leaks There have been several leaks of secret Apple software through the prerelease program, most notably the Mac OS X 10.4 Tiger leaks, in which Apple sued three men who allegedly obtained advance copies of Mac OS X 10.4 prerelease builds from the site and leaked them to BitTorrent. OS X Lion, OS X Mountain Lion, and OS X Mavericks were leaked several times as well. To combat this issue, Apple added an alert system to the preview builds that notified Apple if a tester had uploaded a build to a BitTorrent system. Attempted hacks On July 18, 2013, an intruder attempted to access sensitive personal information on Apple's developer servers. The information was encrypted, but Apple could not rule out that some information about developers had been accessed. The Developer website was taken down for "maintenance" that Thursday, and was said to be undergoing maintenance through Sunday, when Apple posted a notice on the site notifying users of the attempted hack. Apple stated that it would rebuild its servers and the developer system to prevent this from happening in the future. Related terms Apple ID References External links Apple Developer website Apple Inc. services Macintosh operating systems development Software developer communities
97302
https://en.wikipedia.org/wiki/Intel%208085
Intel 8085
The Intel 8085 ("eighty-eighty-five") is an 8-bit microprocessor produced by Intel and introduced in March 1976. It is a software-binary compatible with the more-famous Intel 8080 with only two minor instructions added to support its added interrupt and serial input/output features. However, it requires less support circuitry, allowing simpler and less expensive microcomputer systems to be built. The "5" in the part number highlighted the fact that the 8085 uses a single +5-volt (V) power supply by using depletion-mode transistors, rather than requiring the +5 V, −5 V and +12 V supplies needed by the 8080. This capability matched that of the competing Z80, a popular 8080-derived CPU introduced the year before. These processors could be used in computers running the CP/M operating system. The 8085 is supplied in a 40-pin DIP package. To maximise the functions on the available pins, the 8085 uses a multiplexed address/data(AD^0-AD^7) bus. However, an 8085 circuit requires an 8-bit address latch, so Intel manufactured several support chips with an address latch built in. These include the 8755, with an address latch, 2 KB of EPROM and 16 I/O pins, and the 8155 with 256 bytes of RAM, 22 I/O pins and a 14-bit programmable timer/counter. The multiplexed address/data bus reduced the number of PCB tracks between the 8085 and such memory and I/O chips. Both the 8080 and the 8085 were eclipsed by the Zilog Z80 for desktop computers, which took over most of the CP/M computer market, as well as a share of the booming home-computer market in the early-to-mid-1980s. The 8085 had a long life as a controller, no doubt thanks to its built-in serial I/O and five prioritized interrupts, arguably microcontroller-like features that the Z80 CPU did not have. Once designed into such products as the DECtape II controller and the VT102 video terminal in the late 1970s, the 8085 served for new production throughout the lifetime of those products. This was typically longer than the product life of desktop computers. Description The 8085 is a conventional von Neumann design based on the Intel 8080. Unlike the 8080 it does not multiplex state signals onto the data bus, but the 8-bit data bus is instead multiplexed with the lower eight bits of the 16-bit address bus to limit the number of pins to 40. State signals are provided by dedicated bus control signal pins and two dedicated bus state ID pins named S0 and S1. Pin 40 is used for the power supply (+5 V) and pin 20 for ground. Pin 39 is used as the Hold pin. The processor was designed using nMOS circuitry, and the later "H" versions were implemented in Intel's enhanced nMOS process called HMOS II ("High-performance MOS"), originally developed for fast static RAM products. Only a single 5-volt power supply is needed, like competing processors and unlike the 8080. The 8085 uses approximately 6,500 transistors. The 8085 incorporates the functions of the 8224 (clock generator) and the 8228 (system controller) on chip, increasing the level of integration. A downside compared to similar contemporary designs (such as the Z80) is the fact that the buses require demultiplexing; however, address latches in the Intel 8155, 8355, and 8755 memory chips allow a direct interface, so an 8085 along with these chips is almost a complete system. The 8085 has extensions to support new interrupts, with three maskable vectored interrupts (RST 7.5, RST 6.5 and RST 5.5), one non-maskable interrupt (TRAP), and one externally serviced interrupt (INTR). 
Each of these five interrupts has a separate pin on the processor, a feature which permits simple systems to avoid the cost of a separate interrupt controller. The RST 7.5 interrupt is edge-triggered (latched), while RST 5.5 and 6.5 are level-sensitive. All interrupts are enabled by the EI instruction and disabled by the DI instruction. In addition, the SIM (Set Interrupt Mask) and RIM (Read Interrupt Mask) instructions, the only instructions of the 8085 that are not from the 8080 design, allow each of the three maskable RST interrupts to be individually masked. All three are masked after a normal CPU reset. SIM and RIM also allow the global interrupt mask state and the three independent RST interrupt mask states to be read, the pending-interrupt states of those same three interrupts to be read, the RST 7.5 trigger-latch flip-flop to be reset (cancelling the pending interrupt without servicing it), and serial data to be sent and received via the SOD and SID pins, respectively, all under program control and independently of each other. SIM and RIM each execute in four clock cycles (T states), making it possible to sample SID and/or toggle SOD considerably faster than it is possible to toggle or sample a signal via any I/O or memory-mapped port, e.g. one of the ports of an 8155. (In this way, SID can be compared to the SO ["Set Overflow"] pin of the 6502 CPU contemporary to the 8085.) Like the 8080, the 8085 can accommodate slower memories through externally generated wait states (pin 35, READY), and has provisions for Direct Memory Access (DMA) using HOLD and HLDA signals (pins 39 and 38). An improvement over the 8080 is that the 8085 can itself drive a piezoelectric crystal directly connected to it, and a built-in clock generator generates the internal high-amplitude two-phase clock signals at half the crystal frequency (a 6.14 MHz crystal would yield a 3.07 MHz clock, for instance). The internal clock is available on an output pin, to drive peripheral devices or other CPUs in lock-step synchrony with the CPU from which the signal is output. The 8085 can also be clocked by an external oscillator (making it feasible to use the 8085 in synchronous multi-processor systems using a system-wide common clock for all CPUs, or to synchronize the CPU to an external time reference such as that from a video source or a high-precision time reference). The 8085 is a binary-compatible follow-up on the 8080. It supports the complete instruction set of the 8080, with exactly the same instruction behavior, including all effects on the CPU flags (except for the AND/ANI operation, which sets the AC flag differently). This means that the vast majority of object code (any program image in ROM or RAM) that runs successfully on the 8080 can run directly on the 8085 without translation or modification. (Exceptions include timing-critical code and code that is sensitive to the aforementioned difference in the AC flag setting or differences in undocumented CPU behavior.) 8085 instruction timings differ slightly from the 8080: some 8-bit operations, including INR, DCR, and the heavily used MOV r,r' instruction, are one clock cycle faster, but instructions that involve 16-bit operations, including stack operations (which increment or decrement the 16-bit SP register), are generally one clock cycle slower. It is of course possible that the actual 8080 and/or 8085 differs from the published specifications, especially in subtle details. (The same is not true of the Z80.)
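To make the SIM description above concrete, the sketch below builds the accumulator value that a program would load before executing SIM. The bit assignments follow common 8085 documentation; the helper itself is illustrative, not Intel code.

// Building the accumulator operand for the 8085 SIM instruction.
// Bit layout: 0-2 = masks for RST 5.5/6.5/7.5 (1 = masked), 3 = MSE
// (mask set enable; masks apply only when set), 4 = R7.5 (reset the
// RST 7.5 latch), 6 = SDE (serial data enable), 7 = SOD (serial output
// data, sent when SDE is set).
function simOperand(opts: {
  mask55?: boolean; mask65?: boolean; mask75?: boolean;
  applyMasks?: boolean; resetRst75?: boolean; sod?: 0 | 1;
}): number {
  let a = 0;
  if (opts.mask55) a |= 0x01;
  if (opts.mask65) a |= 0x02;
  if (opts.mask75) a |= 0x04;
  if (opts.applyMasks) a |= 0x08;
  if (opts.resetRst75) a |= 0x10;
  if (opts.sod !== undefined) a |= 0x40 | (opts.sod << 7); // SDE + SOD
  return a;
}
// Example: mask only RST 7.5 ->
//   simOperand({ mask75: true, applyMasks: true }) === 0x0c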
As mentioned already, only the SIM and RIM instructions were new to the 8085. Programming model The processor has seven 8-bit registers accessible to the programmer, named A, B, C, D, E, H, and L, where A is also known as the accumulator. The other six registers can be used as independent byte-registers or as three 16-bit register pairs, BC, DE, and HL (or B, D, H, as referred to in Intel documents), depending on the particular instruction. Some instructions use HL as a (limited) 16-bit accumulator. As in the 8080, the contents of the memory address pointed to by HL can be accessed as pseudo register M. It also has a 16-bit program counter and a 16-bit stack pointer to memory (replacing the 8008's internal stack). Instructions such as PUSH PSW and POP PSW affect the Program Status Word (accumulator and flags). The accumulator stores the results of arithmetic and logical operations, and the flags register bits (sign, zero, auxiliary carry, parity, and carry flags) are set or cleared according to the results of these operations. The sign flag is set if the result has a negative sign (i.e. it is set if bit 7 of the accumulator is set). The auxiliary or half carry flag is set if a carry-over from bit 3 to bit 4 occurred. The parity flag is set to 1 if the parity (number of 1-bits) of the accumulator is even; if odd, it is cleared. The zero flag is set if the result of the operation was 0. Lastly, the carry flag is set if a carry-over from bit 7 of the accumulator (the MSB) occurred. (A short sketch of this flag logic appears below.) Commands/instructions As in many other 8-bit processors, all instructions are encoded in a single byte (including register numbers, but excluding immediate data), for simplicity. Some of them are followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. A NOP "no operation" instruction exists but does not modify any of the registers or flags. Like larger processors, it has CALL and RET instructions for multi-level procedure calls and returns (which can be conditionally executed, like jumps) and instructions to save and restore any 16-bit register pair on the machine stack. There are also eight one-byte call instructions (RST) for subroutines located at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke a corresponding interrupt-service routine, but are also often employed as fast system calls. One sophisticated instruction is XTHL, which is used for exchanging the register pair HL with the value stored at the address indicated by the stack pointer. 8-bit instructions All two-operand 8-bit arithmetic and logical (ALU) operations work on the 8-bit accumulator (the A register). For two-operand 8-bit operations, the other operand can be either an immediate value, another 8-bit register, or a memory cell addressed by the 16-bit register pair HL. The only 8-bit ALU operations that can have a destination other than the accumulator are the unary increment and decrement instructions, which can operate on any 8-bit register or on memory addressed by HL, as for two-operand 8-bit operations. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory cell, using the MOV instruction. An immediate value can also be moved into any of the foregoing destinations, using the MVI instruction.
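The flag semantics above can be made concrete with a minimal model of an 8-bit ADD to the accumulator. This Python sketch is for exposition only; it is not a full 8085 simulator, and the function name is invented:

# Model of the five 8085 flags after ADD r (accumulator = A + r).
def add_and_set_flags(a, r):
    result = (a + r) & 0xFF
    flags = {
        "sign": bool(result & 0x80),                    # bit 7 of the result
        "zero": result == 0,
        "aux_carry": ((a & 0x0F) + (r & 0x0F)) > 0x0F,  # carry from bit 3 to bit 4
        "parity": bin(result).count("1") % 2 == 0,      # even number of 1-bits
        "carry": (a + r) > 0xFF,                        # carry out of bit 7
    }
    return result, flags

# 0x3A + 0xC6 = 0x100, so the 8-bit result is 0x00 with the zero,
# carry and auxiliary carry flags set (0xA + 0x6 carries out of bit 3).
print(add_and_set_flags(0x3A, 0xC6))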
Due to the regular encoding of the MOV instruction (using nearly a quarter of the entire opcode space) there are redundant codes to copy a register into itself (MOV B,B, for instance), which are of little use, except for delays. However, what would have been a copy from the HL-addressed cell into itself (i.e., MOV M,M) instead encodes the HLT instruction, halting execution until an external reset or unmasked interrupt occurs. 16-bit operations Although the 8085 is an 8-bit processor, it has some 16-bit operations. Any of the three 16-bit register pairs (BC, DE, HL) or SP can be loaded with an immediate 16-bit value (using LXI), incremented or decremented (using INX and DCX), or added to HL (using DAD). LHLD loads HL from directly addressed memory and SHLD stores HL likewise. The XCHG operation exchanges the values of HL and DE. Adding HL to itself performs a 16-bit arithmetical left shift with one instruction. The only 16-bit instruction that affects any flag is DAD (adding BC, DE, HL, or SP to HL), which updates the carry flag to facilitate 24-bit or larger additions and left shifts. Adding the stack pointer to HL is useful for indexing variables in (recursive) stack frames. A stack frame can be allocated using DAD SP and SPHL, and a branch to a computed pointer can be done with PCHL. These abilities make it feasible to compile languages such as PL/M, Pascal, or C with 16-bit variables and produce 8085 machine code. Subtraction and bitwise logical operations on 16 bits are done in 8-bit steps. Operations that have to be implemented by program code (subroutine libraries) include comparisons of signed integers as well as multiplication and division. Undocumented instructions A number of undocumented instructions and flags were discovered by two software engineers, Wolfgang Dehnhardt and Villy M. Sorensen, in the process of developing an 8085 assembler. These instructions use 16-bit operands and include indirect loading and storing of a word, a subtraction, a shift, a rotate, and offset operations. Input/output scheme The 8085 supports both port-mapped and memory-mapped I/O. It supports up to 256 input/output (I/O) ports via dedicated input/output instructions, with port addresses as operands. Port-mapped I/O can be an advantage on processors with limited address space. During a port-mapped I/O bus cycle, the 8-bit I/O address is output by the CPU on both the lower and upper halves of the 16-bit address bus. Devices designed for memory-mapped I/O can also be accessed by using the LDA (load accumulator from a 16-bit address) and STA (store accumulator at a 16-bit address) instructions, or any other instructions that have memory operands. A memory-mapped I/O transfer cycle appears on the bus as a normal memory access cycle. Development system Intel produced a series of development systems for the 8080 and 8085, known as the MDS-80 Microprocessor System. The original development system had an 8080 processor. Later 8085 and 8086 support was added, including ICE (in-circuit emulators). It is a large and heavy desktop box, about a 20" cube (in the Intel corporate blue color), which includes a CPU, monitor, and a single 8-inch floppy disk drive. Later an external box was made available with two more floppy drives. It runs the ISIS operating system and can also operate an emulator pod and an external EPROM programmer. This unit uses the Multibus card cage which was intended just for the development system.
A surprising number of spare card cages and processors were being sold, leading to the development of the Multibus as a separate product. The later iPDS is a portable unit, about 8" x 16" x 20", with a handle. It has a small green screen, a keyboard built into the top, a 5¼-inch floppy disk drive, and runs the ISIS-II operating system. It can also accept a second 8085 processor, allowing a limited form of multi-processor operation where both processors run simultaneously and independently. The screen and keyboard can be switched between them, allowing programs to be assembled on one processor (large programs took a while) while files are edited on the other. It has a bubble memory option and various programming modules, including EPROM, and Intel 8048 and 8051 programming modules which plug into the side, replacing stand-alone device programmers. In addition to an 8080/8085 assembler, Intel produced a number of compilers, including those for PL/M-80 and Pascal, and a set of tools for linking and statically locating programs to enable them to be burned into EPROMs and used in embedded systems. A lower-cost "MCS-85 System Design Kit" (SDK-85) board contains an 8085 CPU, an 8355 ROM containing a debugging monitor program, an 8155 with RAM and 22 I/O ports, an 8279 hex keypad and 8-digit 7-segment LED, and a TTY (Teletype) 20 mA current loop serial interface. Pads are available for one more 2K×8 8755 EPROM, and another 8155 (256 bytes of RAM, I/O and timer/counter) can optionally be added. All data, control, and address signals are available on dual pin headers, and a large prototyping area is provided. List of Intel 8085 Applications The 8085 processor was used in a few early personal computers; for example, the TRS-80 Model 100 line used an OKI-manufactured 80C85 (MSM80C85ARS). The CMOS version 80C85 of the NMOS/HMOS 8085 processor has several manufacturers. In the Soviet Union, an 80C85 clone was developed under the designation IM1821VM85A (), which as of 2016 was still in production. Some manufacturers provide variants with additional functions such as additional instructions. The rad-hard version of the 8085 has been used in on-board instrument data processors for several NASA and ESA space physics missions in the 1990s and early 2000s, including CRRES, Polar, FAST, Cluster, HESSI, the Sojourner Mars Rover, and THEMIS. The Swiss company SAIA used the 8085 and the 8085-2 as the CPUs of their PCA1 line of programmable logic controllers during the 1980s. Pro-Log Corp. put the 8085 and supporting hardware on an STD Bus format card containing CPU, RAM, sockets for ROM/EPROM, I/O and external bus interfaces. The included Instruction Set Reference Card uses entirely different mnemonics for the Intel 8085 CPU. The product was a direct competitor to Intel's Multibus card offerings. MCS-85 family The 8085 CPU is one part of a family of chips developed by Intel for building a complete system. Many of these support chips were also used with other processors. The original IBM PC based on the Intel 8088 processor used several of these chips; the equivalent functions today are provided by VLSI chips, namely the "Southbridge" chips. 8085 – CPU 8155 – 2K-bit static MOS RAM with 3 I/O Ports and Timer. The industrial version ID8155 was available for US$37.50 in quantities of 100 and up. The military version M8155 was available for US$100.00 in quantities of 100. There is a 5 MHz version, the Intel 8155-2. The 8155H was introduced using the HMOS II technology, which uses 30 percent less power than the previous generation.
The plastic package versions P8155H (3 MHz) and P8155H-2 (5 MHz) were available for US$5.15 and US$6.40 in quantities of 100, respectively. 8156 – 2K-bit static MOS RAM with 3 I/O Ports and Timer. The industrial version ID8156 was available for US$37.50 in quantities of 100. There is a 5 MHz version, the Intel 8156-2. The 8156H was introduced using the HMOS II technology, which uses 30 percent less power than the previous generation. The plastic package versions P8156H (3 MHz) and P8156H-2 (5 MHz) were available for US$5.15 and US$6.40 in quantities of 100, respectively. 8185 – 1,024 × 8-bit Static RAM. The 5 MHz version, the Intel 8185-2, was available for US$48.75 in quantities of 100. 8355 – 2,048 × 8-bit ROM, two 8-bit I/O ports. The industrial version ID8355 was available for US$22.00 in quantities of 1000. There is a 5 MHz version, the Intel 8355-2. 8604 – 4096-bit (512 × 8) PROM 8755 – 2,048 × 8-bit EPROM, two 8-bit I/O ports. The Intel 8755A-2 is the 5 MHz version. That version was available for US$81.00 in quantities of 100. There was an Industrial Grade version, the Intel I8755A-8. 8202 – Dynamic RAM Controller. This supports the Intel 2104A, 2117, or 2118 DRAM modules, up to 128 KB of DRAM modules. The price was reduced to US$36.25 in quantities of 100 for the D8202 package style around May 1979. 8203 – Dynamic RAM Controller. The Intel 82C03 CMOS version draws less than 25 mA. It supports up to 16 × 64K-bit RAM for a total capacity of up to 256 KB. It refreshes every 10 to 16 microseconds. It supports multiplexing of row and column memory addresses. It generates strobes to latch the address internally. It arbitrates between simultaneous requests for memory access and refresh. It also acknowledges memory-access cycles to the system CPU. The 82C03 was available in either ceramic or plastic packages for US$32.00 in quantities of 100. 8205 – 1 of 8 Binary Decoder 8206 – Error Detection & Correction Unit 8207 – DRAM Controller 8210 – TTL To MOS Shifter & High Voltage Clock Driver 8212 – 8-bit I/O Port. The industrial version ID8212 was available for US$6.75 in quantities of 100. 8216 – 4-bit Parallel Bidirectional Bus Driver. The industrial version ID8216 was available for US$6.40 in quantities of 100. 8218/8219 – Bus Controller 8226 – 4-bit Parallel Bidirectional Bus Driver. The industrial version ID8226 was available for US$6.40 in quantities of 100. 8231 – Arithmetic Processing Unit 8232 – Floating-Point Processor 8237 – DMA Controller 8251 – Communication Controller 8253 – Programmable Interval Timer 8254 – Programmable Interval Timer. The 82C54 CMOS version was outsourced to Oki Electronic Industry Co., Ltd. 8255 – Programmable Peripheral Interface 8256 – Multifunction Peripheral. This chip combines the Intel 8251A Programmable Communications Interface, Intel 8253 Programmable Interval Timer, Intel 8255A Programmable Peripheral Interface, and Intel 8259A Programmable Interrupt Controller, providing serial communications, parallel I/O, counter/timers and interrupts. The Intel 8256AH was available for US$21.40 in quantities of 100. 8257 – DMA Controller 8259 – Programmable Interrupt Controller 8271 – Programmable Floppy Disk Controller 8272 – Single/Double Density Floppy Disk Controller. It is compatible with IBM 3740 and System 34 formats and provides both Frequency Modulation (FM) and Modified Frequency Modulation (MFM). This version was available for US$38.10 in quantities of 100. 8273 – Programmable HDLC/SDLC Protocol Controller.
This device supports ISO/CCITT's HDLC and IBM's SDLC communication protocols. These were available for US$33.75 (4 MHz) and US$30.00 (8 MHz) in quantities of 100. 8274 – Multi-Protocol Serial Controller. This supports three different modes: Asynchronous Operation, Byte Synchronous Operation and Bit Synchronous Operation. The Byte Synchronous mode is compatible with IBM's Bisync signal protocol. The Bit Synchronous mode is compatible with IBM's SDLC and the International Standards Organization's HDLC protocol, and is compatible with the CCITT X.25 international standard as well. It was packaged as a 40-pin product using Intel's HMOS technology. The available version is rated at up to 880 kilobaud and cost US$30.30 in quantities of 100. The NEC µPD7201 was also compatible. 8275 – Programmable CRT Controller. It refreshes the raster scan display by buffering from main memory and keeping track of the display portion. This version was available for US$32.00 in quantities of 100. 8276 – Small System CRT Controller 8278 – Programmable Keyboard Interface 8279 – Keyboard/Display Controller 8282 – 8-bit Non-Inverting Latch with Output Buffer 8283 – 8-bit Inverting Latch with Output Buffer 8291 – GPIB Talker/Listener. This controller can operate from 1 to 8 MHz. It was available for US$23.75 in quantities of 100. 8292 – GPIB Controller. Designed around the Intel 8041A, which has been programmed as a controller interface element. It also controls the bus using three lock-up timers to detect issues on the GPIB bus interface. It was available for US$21.25 in quantities of 100. 8293 – GPIB Transceiver. This chipset supports up to 4 different modes: Mode 0 Talker/Listener Control Lines, Mode 1 Talker/Listener/Controller Control Lines, Mode 2 Talker/Listener/Controller Data Lines, and Mode 3 Talker/Listener Data Lines. It was available for US$11.50 in quantities of 100. At the time of release it was available in samples, with full production in the first quarter of 1980. 8294 – Data Encryption/Decryption Unit + 1 O/P Port. It encrypts and decrypts 64-bit blocks of data using the Federal Information Processing Data Encryption Standard algorithm. This also uses the National Bureau of Standards encryption algorithm. This DEU operates using a 56-bit user-specified key to generate 64-bit cipher words. It was available for US$22.50 in quantities of 100. 8295 – Dot Matrix Printer Controller. This interfaces with LRC 7040 Series dot matrix printers and other small printers as well. It was available for US$20.65 in quantities of 100. Educational use In many engineering schools the 8085 processor is used in introductory microprocessor courses. Trainer kits composed of a printed circuit board, 8085, and supporting hardware are offered by various companies. These kits usually include complete documentation allowing a student to go from soldering to assembly language programming in a single course. Also, the architecture and instruction set of the 8085 are easy for a student to understand. Shared Project versions of educational and hobby 8085-based single-board computers are noted below in the External Links section of this article. Simulators Software simulators are available for the 8085 microprocessor, which allow simulated execution of opcodes in a graphical environment. See also IBM System/23 Datamaster gave IBM designers familiarity with the 8085 support chips used in the IBM PC.
Notes References Further reading Books Bill Detwiler, Tandy TRS-80 Model 100 Teardown, Tech Republic, 2011 Reference Cards Intel 8085 Reference Card; Saundby; 2 pages. (archive) External links Simulators: GNUSim8085 - simulator, assembler, debugger Boards: MCS-85 System Design Kit (SDK-85) - Intel Altaids SBC-85 Minimax Glitchworks OMEN Alpha 8-bit microprocessors
49669723
https://en.wikipedia.org/wiki/Lansweeper
Lansweeper
Lansweeper is an IT asset management solution that gathers hardware and software information from computers and other devices on a computer network for management, compliance and audit purposes. History Lansweeper originated in Belgium in 2004. In October 2020, Lansweeper announced the acquisition of Fing, a leader in device recognition. In June 2021, Lansweeper secured a €130m investment from Insight Partners to accelerate further growth. Description The central capability of Lansweeper derives from a discovery phase of sweeping a local area network (LAN) and maintaining an inventory of the hardware assets and the software deployed on those assets. Reports from the inventory enable complete hardware and software reporting on the devices and can be used to identify problems. Lansweeper can collect information on all Windows, Linux and Mac devices and can also scan IP-addressable network appliances. The software incorporates an integrated ticket-based help desk module that allows issues to be captured and tracked through to completion. There is also a software module that allows Lansweeper to orchestrate software updates on Windows computers. The Lansweeper central inventory database must be located on either a SQL Compact or SQL Server database on a Microsoft Windows machine. Lansweeper claims that while a minimum default configuration can be supported by placing all its components on a single server, the application has the capability to scale up to hundreds of thousands of devices. While Lansweeper can be set up agentless, the use of agents may be recommended for more complex configurations. Lansweeper has a freeware version of the product, but it is limited in the number of devices and the functionality provided unless appropriate licenses are purchased. Criticisms A PC World review in 2010 claimed the interface rendered less rapidly than Spiceworks. Lansweeper itself does not directly provide a network intrusion system; however, Lansweeper claims it is able to partner with an additional tool to address that area. In 2019 Lansweeper was discovered to be vulnerable to an SQL injection vulnerability. Notes References External links Network management Utility software IT infrastructure
1744625
https://en.wikipedia.org/wiki/Generic%20Mapping%20Tools
Generic Mapping Tools
Generic Mapping Tools (GMT) is an open-source collection of computer software tools for processing and displaying xy and xyz datasets, including rasterisation, filtering and other image processing operations, and various kinds of map projections. The software stores 2-D grids as COARDS-compliant netCDF files and comes with a comprehensive collection of free GIS data, such as coast lines, rivers, political borders and coordinates of other geographic objects. Users convert further data (like satellite imagery and digital elevation models) from other sources and import them. GMT stores the resulting maps and diagrams in PostScript (PS) or Encapsulated PostScript (EPS) format. Users operate the system from the command line: this enables scripting and the automation of routine tasks. Graphical user interfaces of varying comprehensiveness are available from third parties, as well as web applications, bringing the system's functionality online. Paul Wessel and Walter H. F. Smith created GMT in 1988 at Lamont-Doherty Earth Observatory, officially releasing it on October 7, 1991 under the GNU General Public License. The letters GMT originally stood for Gravity, Magnetism and Topography, the three basic types of geophysical data. Besides its strong support for the visualisation of geographic data sets, the software includes tools for processing and manipulating multi-dimensional datasets. Most GMT users are geoscientists. Notes External links Interactive online map generation using GMT. Some examples. iGMT is a graphical front-end for GMT. Information on how to use OpenStreetMap data within GMT. Database-related software for Linux Earth sciences graphics software Free GIS software Graphics-related software for Linux Plotting software
9076566
https://en.wikipedia.org/wiki/BigMachines
BigMachines
BigMachines is a software company founded in 2000 by Godard Abel and Christopher Shutts; its software is designed to integrate with enterprise resource planning (ERP), customer relationship management (CRM), and other business systems. The company is headquartered in Deerfield, Illinois, with development offices in San Mateo, California and Hyderabad, India. BigMachines also has European operations with offices in Frankfurt am Main, Germany, and London, served by BigMachines AG, as well as offices in Singapore, Tokyo, and Sydney, serving Asia-Pacific. Company Vista Equity Partners, a private equity firm based in San Francisco, acquired ownership of BigMachines through several transactions starting January 2001, gaining majority ownership of the company in December 2010. Shortly after Vista acquired majority ownership, Abel left the company. Three years later, on October 24, 2013, Oracle Corporation announced it was acquiring BigMachines. The transaction was subsequently completed. Oracle kept the product, now called CPQ Cloud. Awards 2002 Forbes.com – Best of the Web Winner 2003 Start Magazine – Technology & Business Award 2004, 2006, 2007 Supply & Demand Chain – Supply & Demand Chain Executive Top 100 2005, 2007 Manufacturing Business Technology – Top 40 Emerging Software Vendors 2008, 2009, 2010, 2011, 2012 Inc 5000 five-year consecutive Award Winner 2009, 2010 JMP Hot 100 Software Companies – Best Privately Owned Software Companies 2010 Sales 2.0 Awards – Best Sales Enablement Program 2010 Codie award Winner – Best Business Productivity Solution References External links Companies based in Deerfield, Illinois Software companies based in Illinois CRM software companies Customer relationship management software Software companies established in 2000 Cloud applications Oracle acquisitions 2013 mergers and acquisitions Software companies of the United States 2000 establishments in the United States 2000 establishments in Illinois Companies established in 2000
6613070
https://en.wikipedia.org/wiki/System%20time
System time
In computer science and computer programming, system time represents a computer system's notion of the passage of time. In this sense, time also includes the passing of days on the calendar. System time is measured by a system clock, which is typically implemented as a simple count of the number of ticks that have transpired since some arbitrary starting date, called the epoch. For example, Unix and POSIX-compliant systems encode system time ("Unix time") as the number of seconds elapsed since the start of the Unix epoch at 1 January 1970 00:00:00 UT, with exceptions for leap seconds. Systems that implement the 32-bit and 64-bit versions of the Windows API, such as Windows 9x and Windows NT, provide the system time as both SYSTEMTIME, represented as a year/month/day/hour/minute/second/milliseconds value, and FILETIME, represented as a count of the number of 100-nanosecond ticks since 1 January 1601 00:00:00 UT as reckoned in the proleptic Gregorian calendar. System time can be converted into calendar time, which is a form more suitable for human comprehension. For example, the Unix system time 1000000000 seconds since the beginning of the epoch translates into the calendar time 9 September 2001 01:46:40 UT. Library subroutines that handle such conversions may also deal with adjustments for time zones, daylight saving time (DST), leap seconds, and the user's locale settings. Library routines are also generally provided that convert calendar times into system times. Other time measurements Closely related to system time is process time, which is a count of the total CPU time consumed by an executing process. It may be split into user and system CPU time, representing the time spent executing user code and system kernel code, respectively. Process times are a tally of CPU instructions or clock cycles and generally have no direct correlation to wall time. File systems keep track of the times that files are created, modified, and/or accessed by storing timestamps in the file control block (or inode) of each file and directory. History Most first-generation personal computers did not keep track of dates and times. These included systems that ran the CP/M operating system, as well as early models of the Apple II, the BBC Micro, and the Commodore PET, among others. Add-on peripheral boards that included real-time clock chips with on-board battery back-up were available for the IBM PC and XT, but the IBM AT was the first widely available PC that came equipped with date/time hardware built into the motherboard. Prior to the widespread availability of computer networks, most personal computer systems that did track system time did so only with respect to local time and did not make allowances for different time zones. With current technology, most modern computers keep track of local civil time, as do many other household and personal devices such as VCRs, DVRs, cable TV receivers, PDAs, pagers, cell phones, fax machines, telephone answering machines, cameras, camcorders, central air conditioners, and microwave ovens. Microcontrollers operating within embedded systems (such as the Raspberry Pi, Arduino, and other similar systems) do not always have internal hardware to keep track of time. Many such controller systems operate without knowledge of the external time. Those that require such information typically initialize their base time upon rebooting by obtaining the current time from an external source, such as from a time server or external clock, or by prompting the user to manually enter the current time.
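The epoch arithmetic in the example above can be reproduced with a few lines of Python, whose standard datetime module performs such conversions (shown here purely as an illustration; any language's time library would do):

import datetime

# Convert the Unix system time 1000000000 to calendar time (UTC).
t = datetime.datetime.fromtimestamp(1000000000, tz=datetime.timezone.utc)
print(t)                    # 2001-09-09 01:46:40+00:00

# And back: calendar time to seconds since the Unix epoch.
print(int(t.timestamp()))   # 1000000000

Like Unix time itself, this conversion ignores leap seconds.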
Implementation The system clock is typically implemented as a programmable interval timer that periodically interrupts the CPU, which then starts executing a timer interrupt service routine. This routine typically adds one tick to the system clock (a simple counter) and handles other periodic housekeeping tasks (preemption, etc.) before returning to the task the CPU was executing before the interruption. Retrieving system time The following tables illustrate methods for retrieving the system time in various operating systems, programming languages, and applications. Values marked by (*) are system-dependent and may differ across implementations. All dates are given as Gregorian or proleptic Gregorian calendar dates. Note that the resolution of an implementation's measurement of time does not imply the same precision of such measurements. For example, a system might return the current time as a value measured in microseconds, but actually be capable of discerning individual clock ticks with a frequency of only 100 Hz (10 ms). Operating systems Programming languages and applications See also Notes References External links Critical and Significant Dates, J. R. Stockton (retrieved 3 December 2015) The Boost Date/Time Library (C++) The Boost Chrono Library (C++) The Chronos Date/Time Library (Smalltalk) Joda Time, The Joda Date/Time Library (Java) The Perl DateTime Project (Perl) date: Ruby Standard Library Documentation (Ruby) Operating system technology Computer programming Computer real-time clocks
26348164
https://en.wikipedia.org/wiki/European%20University%20of%20Tirana
European University of Tirana
The European University of Tirana (, UET) is an accredited private university in Tirana, Albania. It was established in 2006 by four PhD students (at the time): Adri Nurellari, Blendi Kajsiu, Ermal Hasimja, and Henri Çili, in collaboration with business manager Laert Duraj and journalist Robert Rakipllari. UET was licensed by the Albanian Ministry of Education & Sciences on 20 September 2006 following the decision of the Council of Ministers Nr. 636/2006 and is fully accredited by the Quality Assurance Agency of Higher Education. Academics The European University of Tirana (UET) awards Bachelor Degrees, which usually take 3 years to complete. It also offers Professional Masters, which last 1-1.5 years, and Scientific Masters, which last for 2 years. In 2011, the University acquired the right to award Doctorate Degrees in Social, Economic and Judicial Sciences. The Webometrics Ranking of World Universities ranked UET as the 7th top higher education institution in the nation. Organisation Undergraduate UET has three Faculties: Faculty of Law: Department of General Law Faculty of Social Sciences and Communication: Department of Political Science Department of International Relations Department of Public Relations Department of Design Department of Sociology & Social Anthropology Department of Psychology Faculty of Economics and Information Technology: Department of Finance Banking Department of Business Management Department of Information Economics (Business Management) Department of Information Economics (Finance Banking) Professional Master The European University of Tirana offers about 24 Professional Masters and Master of Science programs in these fields of study: Economics, Finance, Corporate Governance, Management, Marketing, Information Economics, Public Law, International Law, Business Law, Private Law, Communication Sciences, Political Sciences, International Relations, Sociology & Social Anthropology, Psychology and Science Education. Currently there are 7 new programs of study in the process of being licensed. Faculty of Law Justice Faculty of Economics and Information Technology Finance Corporate Governing Marketing Business Management Applied Informatics Faculty of Social Sciences and Education Public Relations Political Sciences International Relations - Diplomacy Teaching School Psychology Science Master The European University of Tirana offers about 24 Professional Master's and Master of Science programs in these fields of study: Economics, Finance, Management, Marketing, Information Economics, Public Law, International Law, Business Law, Private Law, Science Communication, Political Sciences, International Relations, Sociology & Social Anthropology, Psychology and Science Education. Currently there are 7 new programs of study in the process of being licensed. Faculty of Law Private and Business Law Criminal Law Public and International Law Faculty of Economics and Information Technology Finance Banking Business Administration Economic Informatics Faculty of Social Sciences and Education Communication – Public Relations Political Sciences International Relations Sociology – Social Anthropology Psychology Education Sciences International Collaboration The European University of Tirana encourages international cooperation and the transfer of lecturers and students between partner institutions. It also has study abroad programs for one semester or one academic year with partner Universities, including Aston University, in Birmingham, UK.
The two Universities intend to further their cooperation by offering joint Doctorates in the near future. UET has also signed cooperation agreements with Panthéon-Assas University, France; the University of Bari, Italy; Bar-Ilan University, Israel; the University of Montenegro, Montenegro; South East European University, Macedonia; the University of Marseille, France; and the University of Santiago, Chile. In May 2010 UET became a member of the Interuniversity Centre for Research and Cooperation with Eastern and South Eastern Europe (CIRCEOS), in Bari, Italy. See also List of universities in Albania List of colleges and universities List of colleges and universities by country References European University of Tirana Educational institutions established in 2006 Universities and colleges in Tirana 2006 establishments in Albania
57068258
https://en.wikipedia.org/wiki/National%20Cyber%20Security%20Authority%20%28Israel%29
National Cyber Security Authority (Israel)
The National Cyber Security Authority (NCSA), located within the Prime Minister's office, was an Israeli security entity responsible for protecting the Israeli civilian cyber space during 2016–2018. The NCSA provided incident handling services and guidance for all civilian entities as well as all critical infrastructures in the Israeli economy, and worked towards increasing the resilience of the civilian cyber space. At the end of 2017, the Israeli government decided to merge the NCSA with the Israeli National Cyber Bureau (established in 2012), the unit in the Prime Minister's Office which served as the government's cyber policy bureau, into one unit – the National Cyber Directorate. Background Israel was one of the first countries to set up national Critical Infrastructure Protection (CIP, or CIIP). In February 2002, the Israeli Government passed Resolution B/84, deciding to protect critical infrastructure and assigning the Israel Security Agency ("Shin Bet") with the task. The National Information Security Authority (NISA) took up the task. Although this CIP model proved successful, the country's connectivity and dependency on technology continued to increase, and calls for an improved cyber strategy grew stronger. The discovery of Stuxnet catalyzed the policy processes. In November 2010, Israeli Prime Minister Benjamin Netanyahu formally nominated a special taskforce to devise recommendations for a national cyber strategy, also known as the "Cyber Initiative". The team, headed by Major-General (Ret.) Prof. Isaac Ben-Israel of Tel Aviv University, worked for several months, in eight sub-committees manned by dozens of experts. The team examined all the components vital to the need of the State of Israel to cope successfully in cyberspace, including the analysis of national benefits regarding aspects of the economy, academia and national security. The "Cyber Initiative" teamwork was concluded in May 2011 and summed up in a special report dispatched to the Prime Minister. The team's main conclusion was that "cyber-attacks should be considered as a substantial potential threat to the functional continuity of the state, its institutions and its citizens", and that "a central gap has been identified in the cyber defense of the civil sector at large". At the core of its report, the team recommended that two bodies be established – namely, a "National Cyber Bureau" and an "executive body for the security of the civil sector" by its side. The team also recommended setting up a national "cyber defence foil", comprising automated computerized systems and manned systems, together defending pre-defined computer systems. It also called for the establishment of a national CERT. The team indicated that the civil and security components of cyberspace are interlaced and are, to all intents and purposes, inseparable, and that there is a need for a broad national perspective and for an understanding that the preparedness of the State of Israel for the challenges of cyberspace is a national undertaking of the first order. Following that, in August 2011 the Israeli government passed a resolution to establish the Israeli National Cyber Bureau (INCB), designated to assist the prime minister, the government and its committees in forging a national cyber policy and fostering the application of its national security aspects. Specifically, the INCB was assigned to develop a national cyber security strategy.
The development of that strategy generated a professional and important discourse on the national level regarding possible ways to establish an operational body responsible for the defence of the civil cyberspace. The need for it was never in doubt; however, the manner in which this need should be satisfied was the subject of many discussions and some poignant disputes, and was finally resolved through the government's decision to establish a civilian body in the Prime Minister's Office – the NCSA. Government Resolutions 2443 and 2444 In February 2015, the 33rd Government of Israel approved two government resolutions concerning Israeli cyber defense, centered on Government Resolution 2444, "Promoting National Preparedness for Cyber Defense". In this resolution, the government stipulated that the defense of the proper functioning of cyberspace is a vital national goal and a vital national interest of the State of Israel. It was accordingly decreed that the aim of the NCSA is to protect the entire civilian cyberspace of Israel. Its functions include: a. Managing, operating and carrying out all operational defence efforts in cyberspace on the national level, as needed in order to give a whole and continuous response to cyber-attacks. b. Operating the national CERT for the benefit of the economy as a whole, including the improvement of cyber resilience, and to assist in dealing with cyber threats and coping with cyber incidents. c. Building and enhancing the cyber resilience of the Israeli economy through preparedness, competence and regulation, including the enhancement of sectors and organizations, guidance, regulation of the cyber defence services market, licensing, standardization, exercising and general training, incentivization, etc. d. Forging, implementing and assimilating a national cyber defence methodology. e. Performing any other task stipulated by the prime minister, according to the NCSA's aim. Establishment of the NCSA The NCSA began its activities in early 2016, upon the nomination of its Director General, Buky Carmeli. Carmeli came to the post after serving for over 20 years in Unit 8200 and in the defense establishment. In his last position he served as head of the technological unit of the Malmab, where he led cyber defense in the defense establishment and defense industries; in the past he was involved in initiatives in the field of protection of sensitive systems. Prior to that position he headed a hedge fund that invested in international technology funds. The NCSA was established as a body which combines security and operational characteristics with civil ones, to synergistically lead, together with all other state security organizations, the defense efforts against cyber-attacks aimed at Israel's civil sector. One of the core missions of the NCSA was to assist Israeli organizations and the Israeli public at large in dealing with cyber threats – irrespective of the identity of those responsible for them. This assistance was realized through CERT-IL (the national CERT). Located in the city of Beer Sheva at the heart of southern Israel, the CERT is a 24/7 center, offering aid to the general public: from the national critical infrastructure companies to the man on the street. Beside the CERT, special sectorial centers were established, assisting the government ministries, the financial sector and the energy sector, and these had already proven the value of creating sectorial expertise.
In many cases, after a professional analysis of the significance of the incident, it was decided to send response teams to assist the organization in containing the attack. For example, it was published in the media that during April 2017 the NCSA had thwarted a large-scale cyber attack targeting over 120 organizations in Israel, and that in June the NCSA dealt with a large cyberattack on Israeli hospitals. As a governmental entity facing the public, the NCSA was aware that information being shared is often sensitive or confidential due to matters of privacy, intellectual property, etc. Therefore, its actions were compatible with the specific guidelines determined by the Attorney General and the Department of Justice. The NCSA acted not only in removing attacks that had already penetrated organizations, but also helped deal with cyber threats before they reached the organizations. Thus, the NCSA led the national response to dozens of cyber threats, such as WannaCry, NotPetya, CCleaner and Bad Rabbit. In addition, since its creation the NCSA was active in the global cyber security community and had operational relations with many bodies from various countries across the globe. These relationships generated not only shared insights and orderly work processes, but also real-time operational aid. Because of this connection, dozens of countries have in many cases assisted the NCSA's efforts to curb international attacks on Israeli organizations. Also, it was reported that the NCSA had created a framework for cooperation with the DHS's cyber protection body. Another important activity the NCSA conducted since its establishment was boosting the economy's cyber resilience. This activity was conducted by consent, by means of raising organizations' awareness of cyber threats, and through guidance, when the public interest required it. Since March 2017, the NCSA was responsible by law for guiding national CI organizations, such as the Israel Electric Company and Israel Railways, in how to cope with cyber risks which might shut down critical systems under their direct responsibility. Meanwhile, the NCSA began work with the sectorial regulators, in order to apply cyber-defence norms to various defence objectives. Thus, the NCSA and the Israeli government set up dedicated units within the regulatory authorities, and their activities have already begun to bear fruit, in the shape of risk assessment surveys and "cyber annexes" which help guide the relevant organizations under the general authority of each regulator. In addition, in order to assist the economy in preparing for cyber threats, the NCSA published in early 2017 the "Organizational Cyber Defence Methodology". Based on the NIST CSF, it offers every organization in Israel, be it large or small, tools for the management and optimization of its defense against the risks of cyber threats, and assists it with devising a well-ordered work plan. Thousands of Israeli organizations are already working according to this methodology, which is accessible to all as a free service rendered to the Israeli economy. Meanwhile, the NCSA invested efforts in developing a professional cyber work force. This was carried out in several layers: initiating (in conjunction with the Ministry of Education) a strategic plan to educate youngsters in cyber; incentivizing the labor market to shift towards cyber defense jobs; and, finally, setting a professional benchmark for those who work in this field in the government ministries.
In this context, the NCSA worked to incorporate diverse elements of Israeli society into the industry and the government. Thus, in the course of 2017, vocational courses were opened for the ultra-orthodox community (both men and women), financed by the Ministry of Labour and Welfare. Dissolution of the NCSA and establishment of a new unified body As mentioned above, following the recommendations of the INCB, the government decided in February 2015 to establish the NCSA as the central operational body for cyber security in Israel, which would work alongside the INCB as part of a "National Cyber Directorate". The decision to operate two independent units within one directorate was made at the time due to the need to build and strengthen the two branches separately – the policy branch (the responsibility of the INCB) and the operational branch (the NCSA). Therefore, each of the units was appointed a separate Director General and they were managed as independent entities. Towards the end of 2017, following the Prime Minister's directive to concentrate efforts in the field of cyber defense, it was decided to unify the authority with the national cyber headquarters, and in December 2017 the Government of Israel passed a resolution to unify them into one unit, the National Cyber Directorate, which would be responsible for all aspects of cyber defense in the civilian sphere, from the formulation of policy through R&D to the operational defense of cyberspace. Its first Director General was Igal Una, the first to be responsible both for operational defense (previously the responsibility of the NCSA) and for national capacity building (previously the responsibility of the INCB). References Israel Emergency services in Israel Organizations based in Tel Aviv
37066853
https://en.wikipedia.org/wiki/Comic%20Seer
Comic Seer
Comic Seer (Desktop) is a freeware sequential image viewer application for Microsoft Windows and Linux used for viewing and reading Comic Book Archive files containing images in formats such as JPEG, PNG, and TIFF. Comic Seer is focused on the management of large comic book libraries, while still being a full-featured comic book viewer and reader. Features Comic Seer's foundation is built upon CBR, CBZ, and image file viewing. This is expanded by an array of supporting features, including: Single and dual image viewing, with automatic double-width image detection Zooming of images from 1-4X Browsing of images in a Comic Book Archive as thumbnails Full-screen viewing Bookmarking capability Magnification of image areas Image rotation with memory Library organization and searching, and efficient handling of large libraries Comic meta-data viewing View multiple comic book files at one time History and status Comic Seer is an open-source application that was originally released in June 2012 with initial version 0.8-beta. It is currently used by more than 1,000 people daily. The latest version is 2.51-3. It is built on the LGPL-licensed framework Qt for cross-platform development. Windows App In 2014 Comic Seer for Windows RT, 8.1, and 10 desktops and tablets was released for purchase in the Windows Store. Its features include: Reads CBR, CBZ, CB7 comic file archives and image files Supports all interface devices: mouse, keyboard, pen, touch Page memory Page rotation 1 & 2 page viewing (with auto-detect of wide pages) 1x-4x zoom Library browsing and visualization Library filtering Build your own CBZ files View and edit embedded comic information Bookmarking Comic Vine integration for finding comic information User selectable backgrounds Read progress indicators and filtering Primary and secondary live tiles Color correction Operating systems Microsoft Windows: XP, Vista, 7, 8, 2003, 2008, 2012 Linux: tested on Ubuntu 12.04+ References External links Image viewers 2012 software 2013 software 2014 software Linux image viewers Proprietary freeware for Linux Windows graphics-related software Graphics software that uses Qt
33094374
https://en.wikipedia.org/wiki/Telecommunications
Telecommunications
Telecommunication is the transmission of information by various types of technologies over wire, radio, optical, or other electromagnetic systems. It has its origin in the desire of humans for communication over a distance greater than that feasible with the human voice, but with a similar scale of expediency; thus, slow systems (such as postal mail) are excluded from the field. The transmission media in telecommunication have evolved through numerous stages of technology, from beacons and other visual signals (such as smoke signals, semaphore telegraphs, signal flags, and optical heliographs), to electrical cable and electromagnetic radiation, including light. Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing multiple concurrent communication sessions. Telecommunication is often used in its plural form, telecommunications. Other examples of pre-modern long-distance communication included audio messages, such as coded drumbeats, lung-blown horns, and loud whistles. 20th- and 21st-century technologies for long-distance communication usually involve electrical and electromagnetic technologies, such as telegraph, telephone, television and teleprinter, networks, radio, microwave transmission, optical fiber, and communications satellites. A revolution in wireless communication began in the first decade of the 20th century with the pioneering developments in radio communications by Guglielmo Marconi, who won the Nobel Prize in Physics in 1909, and other notable pioneering inventors and developers in the field of electrical and electronic telecommunications. These included Charles Wheatstone and Samuel Morse (inventors of the telegraph), Antonio Meucci and Alexander Graham Bell (some of the inventors and developers of the telephone; see Invention of the telephone), Edwin Armstrong and Lee de Forest (inventors of radio), as well as Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth (some of the inventors of television). According to Article 1.3 of the Radio Regulations (RR), telecommunication is defined as «Any transmission, emission or reception of signs, signals, writings, images and sounds or intelligence of any nature by wire, radio, optical, or other electromagnetic systems». This definition is identical to those contained in the Annex to the Constitution and Convention of the International Telecommunication Union (Geneva, 1992). The early telecommunication networks were created with copper wires as the physical medium for signal transmission. For many years, these networks were used for basic phone services, namely voice and telegrams. Since the mid-1990s, as the internet has grown in popularity, voice has been gradually supplanted by data. This soon demonstrated the limitations of copper in data transmission, prompting the development of fibre optics. Etymology The word telecommunication is a compound of the Greek prefix tele (τῆλε), meaning distant, far off, or afar, and the Latin communicare, meaning to share. Its modern use is adapted from the French, because its written use was recorded in 1904 by the French engineer and novelist Édouard Estaunié. Communication was first used as an English word in the late 14th century. It comes from Old French comunicacion (14c., Modern French communication), from Latin communicationem (nominative communicatio), noun of action from the past participle stem of communicare "to share, divide out; communicate, impart, inform; join, unite, participate in", literally "to make common", from communis ("common").
History Beacons and pigeons Homing pigeons have occasionally been used throughout history by different cultures. Pigeon post had Persian roots, and was later used by the Romans to aid their military. Frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Java and Sumatra. And in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London. In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris. However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880. Telegraph and telephone On 25 July 1837 the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the [existing] electromagnetic telegraph", not as a new device. Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on 27 July 1866, allowing transatlantic telecommunication for the first time. The conventional telephone was patented by Alexander Bell in 1876. Elisha Gray also filed a caveat for it in 1876. Gray abandoned his caveat and, because he did not contest Bell's priority, the examiner approved Bell's patent on 3 March 1876. Gray had filed his caveat for the variable resistance telephone, but Bell was the first to write down the idea and the first to test it in a telephone. Antonio Meucci invented a device that allowed the electrical transmission of voice over a line nearly thirty years before, in 1849, but his device was of little practical value because it relied on the electrophonic effect, requiring users to place the receiver in their mouths to "hear". The first commercial telephone services were set up by the Bell Telephone Company in 1878 and 1879 on both sides of the Atlantic, in the cities of New Haven and London. Radio and television Starting in 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the then newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean. This was the start of wireless telegraphy by radio.
On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America, and in 1904 a commercial service was established to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. Millimetre wave communication was first investigated by Bengali physicist Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901. World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation and radar. Development of stereo FM broadcasting of radio took place from the 1930s onwards in the United States, and FM displaced AM as the dominant commercial standard by the 1960s, and by the 1970s in the United Kingdom. On 25 March 1925, John Logie Baird was able to demonstrate the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning 30 September 1929. However, for most of the twentieth century, televisions depended upon the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927. After World War II, the experiments in television that had been interrupted were resumed, and it also became an important home entertainment broadcast medium. Thermionic valves The type of device known as a thermionic tube or thermionic valve uses the phenomenon of thermionic emission of electrons from a heated cathode and is used for a number of fundamental electronic functions such as signal amplification and current rectification. Non-thermionic types, such as a vacuum phototube, however, achieve electron emission through the photoelectric effect, and are used for purposes such as the detection of light levels. In both types, the electrons are accelerated from the cathode to the anode by the electric field in the tube. The simplest vacuum tube, the diode invented in 1904 by John Ambrose Fleming, contains only a heated electron-emitting cathode and an anode. Electrons can only flow in one direction through the device—from the cathode to the anode. Adding one or more control grids within the tube allows the current between the cathode and anode to be controlled by the voltage on the grid or grids. These devices became a key component of electronic circuits for the first half of the twentieth century. They were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analogue and early digital computers. Although some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, and created the discipline of electronics.
In the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, more efficient, reliable and durable, and cheaper than thermionic tubes. From the mid-1960s onwards, thermionic tubes were replaced by the transistor. Thermionic tubes still have some applications for certain high-frequency amplifiers. Semiconductor era The modern period of telecommunication history from 1950 onwards is referred to as the semiconductor era, due to the wide adoption of semiconductor devices in telecommunication technology. The development of transistor technology and the semiconductor industry enabled significant advances in telecommunication technology, and led to a transition away from state-owned narrowband circuit-switched networks to private broadband packet-switched networks. Metal–oxide–semiconductor (MOS) technologies such as large-scale integration (LSI) and RF CMOS (radio-frequency complementary MOS), along with information theory (such as data compression), led to a transition from analog to digital signal processing, with the introduction of digital telecommunications (such as digital telephony and digital media) and wireless communications (such as cellular networks and mobile telephony), leading to rapid growth of the telecommunications industry towards the end of the 20th century. Transistors The development of transistor technology has been fundamental to modern electronic telecommunication. The first transistor, a point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. The MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor, was later invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET is the building block or "workhorse" of the information revolution and the information age, and the most widely manufactured device in history. MOS technology, including MOS integrated circuits and power MOSFETs, drives the communications infrastructure of modern telecommunication. Along with computers, other essential elements of modern telecommunication that are built from MOSFETs include mobile devices, transceivers, base station modules, routers, RF power amplifiers, microprocessors, memory chips, and telecommunication circuits. According to Edholm's law, the bandwidth of telecommunication networks has been doubling every 18 months (a short worked sketch of this doubling appears below). Advances in MOS technology, including MOSFET scaling (increasing transistor counts at an exponential pace, as predicted by Moore's law), have been the most important contributing factor in the rapid rise of bandwidth in telecommunications networks. Computer networks and the Internet On 11 September 1940, George Stibitz used a teletype to transmit problems to his Complex Number Calculator in New York and received the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer (mainframe) with remote dumb terminals remained popular well into the 1970s. However, already in the 1960s, researchers started to investigate packet switching, a technology that sends a message in portions to its destination asynchronously without passing it through a centralized mainframe. A four-node network emerged on 5 December 1969, constituting the beginnings of the ARPANET, which by 1981 had grown to 213 nodes. ARPANET eventually merged with other networks to form the Internet.
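The worked sketch of Edholm's law promised above (illustrative arithmetic only; the 56 kbit/s starting rate is an arbitrary assumption, not a figure from the text):

# Bandwidth doubling every 18 months is exponential growth with a
# doubling period of 1.5 years: rate(t) = rate(0) * 2**(t / 1.5).
def edholm_projection(initial_rate_bps, years, doubling_period_years=1.5):
    return initial_rate_bps * 2 ** (years / doubling_period_years)

# 15 years is ten doubling periods, i.e. a factor of 2**10 = 1024:
print(edholm_projection(56e3, 15))   # 56 kbit/s grows to roughly 57 Mbit/s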
While Internet development was a focus of the Internet Engineering Task Force (IETF), which published a series of Request for Comments documents, other networking advancements occurred in industrial laboratories, such as the local area network (LAN) developments of Ethernet (1983) and Token Ring (1984). Wireless telecommunication The wireless revolution began in the 1990s, with the advent of digital wireless networks leading to a social revolution, and a paradigm shift from wired to wireless technology, including the proliferation of commercial wireless technologies such as cell phones, mobile telephony, pagers, wireless computer networks, cellular networks, the wireless Internet, and laptop and handheld computers with wireless connections. The wireless revolution has been driven by advances in radio frequency (RF) and microwave engineering, and the transition from analog to digital RF technology. Advances in metal–oxide–semiconductor field-effect transistor (MOSFET, or MOS transistor) technology, the key component of the RF technology that enables digital wireless networks, have been central to this revolution, including MOS devices such as the power MOSFET, LDMOS, and RF CMOS. Digital media Practical digital media distribution and streaming were made possible by advances in data compression, due to the impractically high memory, storage and bandwidth requirements of uncompressed media. The most important compression technique is the discrete cosine transform (DCT), a lossy compression algorithm that was first proposed as an image compression technique in 1972. On 29 October 2001, Bernard Pauchon, Alain Lorentz, Raymond Melwig and Philippe Binant realized and demonstrated the first digital cinema transmission by satellite in Europe of a feature film. Growth of transmission capacity The effective capacity to exchange information worldwide through two-way telecommunication networks grew from 281 petabytes (PB) of optimally compressed information in 1986, to 471 PB in 1993, to 2.2 exabytes (EB) in 2000, and to 65 EB in 2007. This is the informational equivalent of two newspaper pages per person per day in 1986, and six entire newspapers per person per day by 2007. Given this growth, telecommunications play an increasingly important role in the world economy, and the global telecommunications industry was about a US$4.7 trillion sector in 2012. The service revenue of the global telecommunications industry was estimated to be $1.5 trillion in 2010, corresponding to 2.4% of the world's gross domestic product (GDP). Technical concepts Modern telecommunication is founded on a series of key concepts that experienced progressive development and refinement over a period of well over a century. Basic elements Telecommunication technologies may primarily be divided into wired and wireless methods. Overall, though, a basic telecommunication system consists of three main parts that are always present in some form or another: a transmitter that takes information and converts it to a signal; a transmission medium, also called the physical channel, that carries the signal (an example of this is the "free space channel"); and a receiver that takes the signal from the channel and converts it back into usable information for the recipient. For example, in a radio broadcasting station the station's large power amplifier is the transmitter, and the broadcasting antenna is the interface between the power amplifier and the "free space channel".
The free space channel is the transmission medium, and the receiver's antenna is the interface between the free space channel and the receiver. Next, the radio receiver is the destination of the radio signal, and this is where it is converted from electricity to sound for people to listen to. Sometimes, telecommunication systems are "duplex" (two-way systems), with a single box of electronics working as both transmitter and receiver: a transceiver. For example, a cellular telephone is a transceiver. The transmission electronics and the receiver electronics within a transceiver are actually quite independent of each other. This can be readily explained by the fact that radio transmitters contain power amplifiers that operate with electrical powers measured in watts or kilowatts, but radio receivers deal with radio powers that are measured in microwatts or nanowatts. Hence, transceivers have to be carefully designed and built to isolate their high-power circuitry and their low-power circuitry from each other, so as not to cause interference. Telecommunication over fixed lines is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous low-power but sensitive radio receivers. Telecommunications in which multiple transmitters and multiple receivers have been designed to cooperate and to share the same physical channel are called multiplex systems. The sharing of physical channels using multiplexing often gives very large reductions in costs. Multiplexed systems are laid out in telecommunication networks, and the multiplexed signals are switched at nodes through to the correct destination terminal receiver. Analog versus digital communications Communications signals can be sent either as analog signals or as digital signals. There are analog communication systems and digital communication systems. For an analog signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (for example, a set of ones and zeros). During propagation and reception, the information contained in analog signals will inevitably be degraded by undesirable physical noise. Commonly, the noise in a communication system can be expressed as adding to or subtracting from the desirable signal in a completely random way. This form of noise is called additive noise, with the understanding that the noise can be negative or positive at different instants of time. Unless the additive noise disturbance exceeds a certain threshold, the information contained in digital signals will remain intact. Their resistance to noise represents a key advantage of digital signals over analog signals. However, digital systems fail catastrophically when the noise exceeds the system's ability to correct errors, whereas analog systems fail gracefully: as noise increases, the signal becomes progressively more degraded but remains usable. Also, digital transmission of continuous data unavoidably adds quantization noise to the output. This can be reduced, but not entirely eliminated, only at the expense of increasing the channel bandwidth requirement. Communication channels The term "channel" has two different meanings. In one meaning, a channel is the physical medium that carries a signal between the transmitter and the receiver.
Examples of this include the atmosphere for sound communications, glass optical fibers for some kinds of optical communications, coaxial cables for communications by way of the voltages and electric currents in them, and free space for communications using visible light, infrared waves, ultraviolet light, and radio waves. Coaxial cable types are classified by RG type or "radio guide", terminology derived from World War II. The various RG designations are used to classify the specific signal transmission applications. This last channel is called the "free space channel". The sending of radio waves from one place to another has nothing to do with the presence or absence of an atmosphere between the two. Radio waves travel through a perfect vacuum just as easily as they travel through air, fog, clouds, or any other kind of gas. The other meaning of the term "channel" in telecommunications is seen in the phrase communications channel, which is a subdivision of a transmission medium so that it can be used to send multiple streams of information simultaneously. For example, one radio station can broadcast radio waves into free space at frequencies in the neighborhood of 94.5 MHz (megahertz) while another radio station can simultaneously broadcast radio waves at frequencies in the neighborhood of 96.1 MHz. Each radio station would transmit radio waves over a frequency bandwidth of about 180 kHz (kilohertz), centered at frequencies such as the above, which are called the "carrier frequencies". Each station in this example is separated from its adjacent stations by 200 kHz, and the difference between 200 kHz and 180 kHz (20 kHz) is an engineering allowance for the imperfections in the communication system. In the example above, the "free space channel" has been divided into communications channels according to frequencies, and each channel is assigned a separate frequency bandwidth in which to broadcast radio waves. This system of dividing the medium into channels according to frequency is called "frequency-division multiplexing". Another term for the same concept is "wavelength-division multiplexing", which is more commonly used in optical communications when multiple transmitters share the same physical medium. Another way of dividing a communications medium into channels is to allocate each sender a recurring segment of time (a "time slot", for example, 20 milliseconds out of each second), and to allow each sender to send messages only within its own time slot. This method of dividing the medium into communication channels is called "time-division multiplexing" (TDM), and is used in optical fibre communication. Some radio communication systems use TDM within an allocated FDM channel. Hence, these systems use a hybrid of TDM and FDM. Modulation The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analog waveform. This is commonly called "keying", a term derived from the older use of Morse code in telecommunications, and several keying techniques exist (these include phase-shift keying, frequency-shift keying, and amplitude-shift keying). The "Bluetooth" system, for example, uses phase-shift keying to exchange information between various devices. In addition, there are combinations of phase-shift keying and amplitude-shift keying, called (in the jargon of the field) "quadrature amplitude modulation" (QAM), that are used in high-capacity digital radio communication systems.
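A minimal sketch of the keying idea just described (illustrative only; the four-phase mapping below is a generic QPSK constellation, the simplest case of QAM, and is not tied to any particular system such as Bluetooth):

import cmath, math

# Phase-shift keying maps bit patterns to phases of a carrier. Here each
# pair of bits selects one of four phases at unit amplitude (QPSK).
PHASES = {('0', '0'): 45, ('0', '1'): 135, ('1', '1'): 225, ('1', '0'): 315}

def qpsk_map(bits):
    return [cmath.rect(1.0, math.radians(PHASES[(bits[i], bits[i + 1])]))
            for i in range(0, len(bits), 2)]

print(qpsk_map("1011"))   # two symbols, at phases 315 and 225 degrees

Adding multiple amplitude levels to such a phase mapping yields the denser QAM constellations mentioned above.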
Modulation can also be used to transmit the information of low-frequency analog signals at higher frequencies. This is helpful because low-frequency analog signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analog signal must be impressed onto a higher-frequency signal (known as the "carrier wave") before transmission. There are several different modulation schemes available to achieve this, two of the most basic being amplitude modulation (AM) and frequency modulation (FM). An example of this process is a disc jockey's voice being impressed onto a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel "96 FM"). In addition, modulation has the advantage that it may use frequency-division multiplexing (FDM). Telecommunication networks A telecommunications network is a collection of transmitters, receivers, and communications channels that send messages to one another. Some digital communications networks contain one or more routers that work together to transmit information to the correct user. An analog communications network consists of one or more switches that establish a connection between two or more users. For both types of networks, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from the noise. Another advantage of digital systems over analog is that their output is easier to store in memory, i.e. two voltage states (high and low) are easier to store than a continuous range of states. Societal impact Telecommunication has a significant social, cultural and economic impact on modern society. In 2008, estimates placed the telecommunication industry's revenue at US$4.7 trillion, or just under three percent of the gross world product (official exchange rate). The following sections discuss the impact of telecommunication on society. Microeconomics On the microeconomic scale, companies have used telecommunications to help build global business empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Walmart has benefited from better telecommunication infrastructure compared to its competitors. In cities throughout the world, homeowners use their telephones to order and arrange a variety of home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narsingdi District, isolated villagers use cellular phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price. Macroeconomics On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth. Few dispute the existence of a correlation, although some argue it is wrong to view the relationship as causal. Because of the economic benefits of good telecommunication infrastructure, there is increasing worry about the inequitable access to telecommunication services amongst various countries of the world; this is known as the digital divide.
A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly a third of countries have fewer than one mobile subscription for every 20 people, and one-third of countries have fewer than one landline telephone subscription for every 20 people. In terms of Internet access, roughly half of all countries have fewer than one out of 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies. Using this measure, Sweden, Denmark and Iceland received the highest ranking while the African countries Nigeria, Burkina Faso and Mali received the lowest. Social impact Telecommunication has played a significant role in social relationships. Nevertheless, devices like the telephone system were originally advertised with an emphasis on the practical dimensions of the device (such as the ability to conduct business or order home services) as opposed to the social dimensions. It was not until the late 1920s and 1930s that the social dimensions of the device became a prominent theme in telephone advertisements. New promotions started appealing to consumers' emotions, stressing the importance of social conversations and staying connected to family and friends. Since then the role that telecommunications has played in social relations has become increasingly important. In recent years, the popularity of social networking sites has increased dramatically. These sites allow users to communicate with each other as well as post photographs, events and profiles for others to see. The profiles can list a person's age, interests, sexual preference and relationship status. In this way, these sites can play an important role in everything from organising social engagements to courtship. Prior to social networking sites, technologies like short message service (SMS) and the telephone also had a significant impact on social interactions. In 2000, market research group Ipsos MORI reported that 81% of 15- to 24-year-old SMS users in the United Kingdom had used the service to coordinate social arrangements and 42% to flirt. Entertainment, news, and advertising In cultural terms, telecommunication has increased the public's ability to access music and film. With television, people can watch films they have not seen before in their own home without having to travel to the video store or cinema. With radio and the Internet, people can listen to music they have not heard before without having to travel to the music store. Telecommunication has also transformed the way people receive their news. In a 2006 survey of slightly more than 3,000 Americans by the non-profit Pew Internet and American Life Project in the United States, the majority of respondents specified television or radio over newspapers as a source of news. Telecommunication has had an equally significant impact on advertising. TNS Media Intelligence reported that in 2007, 58% of advertising expenditure in the United States was spent on media that depend upon telecommunication. Regulation Many countries have enacted legislation which conforms to the International Telecommunication Regulations established by the International Telecommunication Union (ITU), which is the "leading UN agency for information and communication technology issues".
In 1947, at the Atlantic City Conference, the ITU decided to "afford international protection to all frequencies registered in a new international frequency list and used in conformity with the Radio Regulation". According to the ITU's Radio Regulations adopted in Atlantic City, all frequencies referenced in the International Frequency Registration Board, examined by the board and registered on the International Frequency List "shall have the right to international protection from harmful interference". From a global perspective, there have been political debates and legislation regarding the management of telecommunication and broadcasting. The history of broadcasting discusses some debates in relation to balancing conventional communication such as printing and telecommunication such as radio broadcasting. The onset of World War II brought on the first explosion of international broadcasting propaganda. Countries, their governments, insurgents, terrorists, and militiamen have all used telecommunication and broadcasting techniques to promote propaganda. Patriotic propaganda for political movements and colonization started in the mid-1930s. In 1936, the BBC broadcast propaganda to the Arab World to partly counter similar broadcasts from Italy, which also had colonial interests in North Africa. Modern insurgents, such as those in the latest Iraq War, often use intimidating telephone calls, SMS messages and the distribution of sophisticated videos of an attack on coalition troops within hours of the operation. "The Sunni insurgents even have their own television station, Al-Zawraa, which while banned by the Iraqi government, still broadcasts from Erbil, Iraqi Kurdistan, even as coalition pressure has forced it to switch satellite hosts several times." On 10 November 2014, President Obama recommended that the Federal Communications Commission reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. Modern media Worldwide equipment sales According to data collected by Gartner and Ars Technica, worldwide sales of the main consumer telecommunication equipment were tabulated annually in millions of units (table not reproduced here). Telephone In a telephone network, the caller is connected to the person to whom they wish to talk by switches at various telephone exchanges. The switches form an electrical connection between the two users, and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end, where it is transformed back into sound by a small speaker in that person's handset. As of 2015, the landline telephones in most residential homes are analog; that is, the speaker's voice directly determines the signal's voltage. Although short-distance calls may be handled end-to-end as analog signals, telephone service providers are increasingly converting the signals transparently to digital form for transmission (a short sketch of this digitization step appears below). The advantage of this is that digitized voice data can travel side by side with data from the Internet and can be perfectly reproduced in long-distance communication (as opposed to analog signals, which are inevitably impacted by noise). Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets.
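The sketch of voice digitization promised above (illustrative only; μ-law companding with μ = 255 is the scheme used in North American digital telephony, and the 8 kHz/8-bit figures are the conventional telephone values, assumed here for concreteness):

import math

MU = 255.0

# Mu-law companding compresses each sample so that quiet parts of speech
# get finer quantization steps than loud parts.
def mu_law_encode(x):
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# One millisecond of a 400 Hz tone sampled at the telephone rate of 8 kHz,
# then reduced to 8-bit integer codes:
samples = [math.sin(2 * math.pi * 400 * n / 8000) for n in range(8)]
print([round(mu_law_encode(s) * 127) for s in samples])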
Sales of mobile phones in 2005 totalled 816.6 million, with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m). In terms of new subscriptions over the five years from 1999, Africa outpaced other markets with 58.2% growth. Increasingly these phones are being serviced by systems where the voice content is transmitted digitally, such as GSM or W-CDMA, with many markets choosing to deprecate analog systems such as AMPS. There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optical fibers. The benefit of communicating with optical fibers is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time, and today's optical fibre cables are able to carry 25 times as many telephone calls as TAT-8. This increase in data capacity is due to several factors: First, optical fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk, which means several hundred of them can be easily bundled together in a single cable. Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre. Assisting communication across many modern optical fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned above. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract, it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut off completely. There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future. Radio and television In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is then tuned so as to pick up the high-frequency wave, and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analog (the signal is varied continuously with respect to the information) or digital (the information is encoded as a set of discrete values). The broadcast media industry is at a critical turning point in its development, with many countries moving from analog to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they prevent a number of complaints common to traditional analog broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion.
These occur because of the nature of analog transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception, and hence small perturbations do not affect the final output. In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9], it would still decode to the binary message 1011, a perfect reproduction of what was sent (a short code sketch of this thresholding appears at the end of this section). From this example, a problem with digital transmissions can also be seen: if the noise is great enough, it can significantly alter the decoded message. Using forward error correction, a receiver can correct a handful of bit errors in the resulting message, but too much noise will lead to incomprehensible output and hence a breakdown of the transmission. In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide: the ATSC, DVB and ISDB standards. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2. The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified, with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States, which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to "piggyback" on normal AM or FM analog transmissions. However, despite the pending switch to digital, analog television is still transmitted in most countries. An exception is the United States, which ended analog television transmission (by all but the very low-power TV stations) on 12 June 2009 after twice delaying the switchover deadline. Kenya also ended analog television transmission in December 2014 after multiple delays. For analog television, there were three standards in use for broadcasting color TV: PAL (German-designed), NTSC (American-designed), and SECAM (French-designed). For analog radio, the switch to digital radio is made more difficult by the higher cost of digital receivers. The choice of modulation for analog radio is typically between amplitude modulation (AM) and frequency modulation (FM). To achieve stereo playback, an amplitude-modulated subcarrier is used for stereo FM, and quadrature amplitude modulation is used for stereo AM or C-QUAM. Internet The Internet is a worldwide network of computers and computer networks that communicate with each other using the Internet Protocol (IP). Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address, allowing for two-way communication. The Internet is thus an exchange of messages between computers.
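The code sketch promised above: a minimal illustration of how the noisy amplitudes in the 1011 example are snapped back to discrete values (the 0.5 decision threshold is an assumption, chosen halfway between the nominal amplitudes):

# Received amplitudes are decoded by comparison against a mid-point
# threshold; moderate noise therefore leaves the message intact.
def threshold_decode(amplitudes, threshold=0.5):
    return ''.join('1' if a >= threshold else '0' for a in amplitudes)

print(threshold_decode([0.9, 0.2, 1.1, 0.9]))   # -> '1011', as transmitted
print(threshold_decode([0.9, 0.6, 1.1, 0.9]))   # heavier noise -> '1111', a bit error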
It is estimated that 51% of the information flowing through two-way telecommunications networks in the year 2000 was flowing through the Internet (most of the rest, 42%, through the landline telephone). By the year 2007 the Internet clearly dominated and captured 97% of all the information in telecommunication networks (most of the rest, 2%, through mobile phones). An estimated 21.9% of the world population has access to the Internet, with the highest access rates (measured as a percentage of the population) in North America (73.6%), Oceania/Australia (59.5%) and Europe (48.1%). In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) led the world. The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important is that it allows an Internet browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model, which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite. For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optical fiber. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network. At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the World Wide Web, these "IP addresses" are derived from the human-readable form using the Domain Name System (e.g. 72.14.207.99 is derived from www.google.com). At the moment, the most widely used version of the Internet Protocol is version four, but a move to version six is imminent. At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered nor retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports, network administrators can manipulate traffic to suit particular requirements. Examples are to restrict Internet access by blocking the traffic destined for a particular port, or to affect the performance of certain applications by assigning priority.
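A minimal sketch of the name-to-address derivation and port addressing described above, using Python's standard socket module (illustrative only; it needs network access when run, and the addresses returned for the text's example host will vary):

import socket

# DNS turns the human-readable name into one or more IP addresses, and
# pairing an address with a port (here 80, conventionally HTTP) tells the
# transport layer which application the traffic is for.
for *_, sockaddr in socket.getaddrinfo("www.google.com", 80,
                                       proto=socket.IPPROTO_TCP):
    print(sockaddr)   # e.g. ('142.250.80.36', 80): address plus port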
Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that data transferred between two parties remains confidential. Finally, at the application layer are many of the protocols Internet users would be familiar with, such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and XMPP (instant messaging). Voice over Internet Protocol (VoIP) allows data packets to be used for synchronous voice communications. The data packets are marked as voice-type packets and can be prioritized by the network administrators so that the real-time, synchronous conversation is less subject to contention with other types of data traffic which can be delayed (e.g. file transfer or email) or buffered in advance (e.g. audio and video) without detriment. That prioritization works well when the network has sufficient capacity for all the VoIP calls taking place at the same time and is enabled for prioritization, i.e. a private corporate-style network. The public Internet is not generally managed in this way, so there can be a big difference in the quality of VoIP calls over a private network and over the public Internet. Local area networks and wide area networks Despite the growth of the Internet, the characteristics of local area networks (LANs), computer networks that do not extend beyond a few kilometers, remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them. When they are not connected with the Internet, they also have the advantages of privacy and security. However, purposefully lacking a direct connection to the Internet does not provide assured protection from hackers, military forces, or economic powers. These threats exist if there are any methods for connecting remotely to the LAN. Wide area networks (WANs) are private computer networks that may extend for thousands of kilometers. Once again, some of their advantages include privacy and security. Prime users of private LANs and WANs include armed forces and intelligence agencies that must keep their information secure and secret. In the mid-1980s, several sets of communication protocols emerged to fill the gaps between the data-link layer and the application layer of the OSI reference model. These included AppleTalk, IPX, and NetBIOS, with the dominant protocol set during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point, but it was typically only used by large government and research facilities. As the Internet grew in popularity and its traffic was required to be routed into private networks, the TCP/IP protocols replaced existing local area network technologies. Additional technologies, such as DHCP, allowed TCP/IP-based computers to self-configure in the network. Such functions also existed in the AppleTalk/IPX/NetBIOS protocol sets. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data-link protocols for larger networks such as WANs, Ethernet and Token Ring are typical data-link protocols for LANs. These protocols differ from the former protocols in that they are simpler, e.g., they omit features such as quality-of-service guarantees, and offer collision prevention.
Both of these differences allow for more economical systems. Despite the modest popularity of Token Ring in the 1980s and 1990s, virtually all LANs now use either wired or wireless Ethernet facilities. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used heavier coaxial cables, and some recent implementations (especially high-speed ones) use optical fibers. When optical fibers are used, the distinction must be made between multimode fibers and single-mode fibers. Multimode fibers can be thought of as thicker optical fibers that are cheaper to manufacture devices for, but that suffer from less usable bandwidth and worse attenuation, implying poorer long-distance performance. See also References Citations Bibliography Goggin, Gerard, Global Mobile Media (New York: Routledge, 2011), p. 176. OECD, Universal Service and Rate Restructuring in Telecommunications, Organisation for Economic Co-operation and Development (OECD) Publishing, 1991. Wheen, Andrew, Dot-Dash to Dot.Com: How Modern Telecommunications Evolved from the Telegraph to the Internet (Springer, 2011). External links International Teletraffic Congress International Telecommunication Union (ITU) ATIS Telecom Glossary Federal Communications Commission IEEE Communications Society International Telecommunication Union (Ericsson removed the book from their site in September 2005) Economics of transport and utility industries Mass media technology
954216
https://en.wikipedia.org/wiki/Skeleton%20%28computer%20programming%29
Skeleton (computer programming)
Skeleton programming is a style of computer programming based on simple high-level program structures and so-called dummy code. Program skeletons resemble pseudocode, but allow parsing, compilation and testing of the code. Dummy code is inserted in a program skeleton to simulate processing and avoid compilation error messages. It may involve empty function declarations, or functions that return a correct result only for a simple test case where the expected response of the code is known. Skeleton programming facilitates a top-down design approach, where a partially functional system with complete high-level structures is designed and coded, and this system is then progressively expanded to fulfill the requirements of the project. Program skeletons are also sometimes used for high-level descriptions of algorithms. A program skeleton may also be utilized as a template that reflects syntax and structures commonly used in a wide class of problems. Skeleton programs are utilized in the template method design pattern used in object-oriented programming. In object-oriented programming, dummy code corresponds to an abstract method, a method stub or a mock object. In the Java remote method invocation (Java RMI) nomenclature, a stub communicates on the client side with a skeleton on the server side. A class skeleton is an outline of a class that is used in software engineering. It contains a description of the class's roles, and describes the purposes of the variables and methods, but does not implement them. The class is later implemented from the skeleton. In languages that follow a polymorphic paradigm, the skeleton can also take the form of an interface or an abstract class. Background Software used in computers today is often complicated for a host of reasons. It may be too large for any single programmer to develop alone, or depend on modules or parts that have to be separately imported. Programs can also be complex in their own right, with multiple methods accessing a single variable at the same time or generating pixels for displays. Skeleton code helps programmers develop their code with fewer errors at compilation time. Skeleton code is most commonly found in parallel programming, but is also applied in other situations, like documentation in programming languages. It helps to simplify the core functionality of a potentially confusing method. It can also be used to allow a small function within a larger program to operate without full functionality temporarily. This method of programming is easier than writing a complete function, as these skeleton functions do not have to include main functionalities and can instead be hardcoded for use during development. They usually involve syntactically correct code to introduce the method, as well as comments to indicate the operation of the program, though not all of this is required for a piece of text to be called skeleton code. Relation to pseudocode Pseudocode is most commonly found when developing the structure of a new piece of software. It is a plain-English portrayal of a particular function within a larger system, or can even be a representation of a whole program. Pseudocode is similar to skeleton programming; however, it deviates in that pseudocode is primarily an informal method of programming. Dummy code is very similar as well, where code is used simply as a placeholder, or to signify the intended existence of a method in a class or interface. A minimal sketch of these ideas appears below.
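The sketch promised above (a hypothetical Python example; the function names and the fixed test data are illustrative only): a program outline whose dummy bodies let the high-level structure run and be tested before the real logic exists.

# A program skeleton: the high-level structure parses and runs, while
# dummy code stands in for logic that has not been written yet.
def load_records(path):
    # Dummy code: returns a fixed test case instead of actually reading `path`.
    return [("alice", 3), ("bob", 7)]

def summarize(records):
    # Empty declaration: calling it fails loudly rather than silently.
    raise NotImplementedError("summarize() is still a stub")

def main():
    records = load_records("data.csv")   # hardcoded result for now
    print("loaded", len(records), "records")
    # summarize(records)                 # enabled once the stub gains a body

if __name__ == "__main__":
    main()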
Computer programmers are extremely dependent on pseudocode, so much so that it has a measurable impact on how they work. A typical programmer is so conditioned to the idea of writing simplified code in some manner, be it by writing pseudocode or skeleton code, or even just by drawing a diagram, that this has a measurable impact on how well they can write their final implementation. This has been found across a number of applications, with different programmers working in different languages and varied programming paradigms. This method of program design is also most often done with pen and paper, further removing the text from what is actually to be implemented. Skeleton programming mimics this, but differs in that it is commonly written in an integrated development environment or text editor. This assists the further development of the program after the initial design stage. Skeleton programs also allow simplistic functions to operate, if run. Implementation Skeleton programming can be implemented in a range of different programming applications. Programming language documentation Most, if not all, programming languages have skeleton code used to assist in the definition of all built-in functions and methods. This provides a simple means for newer programmers to understand the syntax and intended implementation of the written methods. Java, an object-oriented language, focuses heavily on a structured documentation page with completely separated methods for each object in Java’s packages. Object-oriented languages focus on a hierarchy-based structure to their implementations, rather than the simple top-down approach found in other languages. ‘Objects’ store data and variables in them, allowing for a typically more efficient program to be written. These objects have individual functions, known as methods, that can access internal variables. Each method is defined in the same format, with the name of the method as well as the syntax to be used in an integrated development environment clearly visible at the top of a block. With Java’s focus on scope, data types and inheritance, this syntax is extremely useful for new programmers, if not all programmers. This is followed by an in-depth explanation of the operation of the method, with errors below. Python has a similar approach to documenting its built-in methods, but it mirrors the language’s lack of fixation on scope and data types. This documentation has the syntax of each method, along with a short description and an example of the typical use of the method or function. The skeleton code provided in the example gives programmers a good understanding of the function at a quick glance. Class definition Classes written by third-party developers, primarily as a part of libraries, also showcase their programming in the form of skeleton code. This helps to inform anyone who is new to the library as to how the functions and methods operate. p5.js uses this format on its documentation page to explain the intended use of certain included functions. This differs from programming language documentation, however, in using skeleton code to display parameters rather than all possible uses of the method. Natural language interfaces (NLIs) are most typically found in situations where programmers attempt to take an input phrased colloquially (without the use of programming-language-specific jargon) and use this to create a program or a method.
An implementation of this uses a small set of skeleton code to imply the function running in the background. Other forms of NLIs use different forms of input, ranging from other users speaking different languages to gesture-based input, to produce a very similar result. With programming languages being developed and written primarily in English, people speaking other languages find it hard to develop new software. NLIs have been used in some studies to assist people in these situations. One study showed classes written in Java through the use of NLIs. This removed the need to learn syntactical rules, but it meant that the class was written using a basic set of skeleton code. Polymorphism-based definitions Polymorphism is a concept that belongs to the object-oriented programming paradigm, where methods can be overridden or overloaded (methods with the same name in a child class which will take priority over a method written in a parent class). The definition of methods is based on a skeleton framework defined by the syntax of the language. Very similar to class implementation, skeleton code can be used to define the methods that are part of an interface. An interface is essentially a blueprint of a class, which allows strict object-oriented languages (such as Java) to use classes from different packages without the need to fully understand their internal functions. Interfaces simply define the methods that have to be present within the class, allowing anyone else to use the methods or implement the class for their personal needs. public void skeletonExample(); An abstract class is almost the same as a class implementation; however, depending on the language, at least one method is defined as abstract. This implies that any children of this class (any classes that extend or implement it) need to have a method defined for it. Abstract classes have a very similar definition style to interfaces; however, the keyword ‘abstract’ is typically used to identify the fact that the method needs to be implemented in child classes. public abstract void skeletonExample(); These examples use Java syntax. Parallel programming Parallel programming is the operation of multiple functions simultaneously, most commonly used to increase efficiency. These are typically the hardest types of programs to develop, due to their complexity and their interconnectedness with the hardware in question. Many developers have attempted to write programs with this core functionality; however, this has been met with varied results. Algorithmic skeleton frameworks are used in parallel programming to abstractly describe the methods in question for later development. The frameworks are not limited to a single type, and each of these types has different purposes to increase the efficiency of the developer’s program. These can be categorised into three main types: data-parallel, task-parallel and resolution. Data-parallel These skeleton algorithms are used to develop programs that work on large data-based software, usually identifying the connections between data for later use. Data-parallel algorithms include ‘maps’, ‘forks’ and ‘reduces’ or ‘scans’. ‘Maps’ are the most commonly used data-parallel algorithms, and typically involve a single operation completed on a large set of data. To increase efficiency, a number of data sets have this operation applied to them simultaneously, before the data is structured together again at the end. ‘Forks’ are similar to ‘maps’ but they use a different operation for certain data types.
This is known as multiple data parallelism. ‘Reduces’ or ‘scans’ are used to apply prefixes to a set of data, before then applying an operation upon the data. These are different from ‘maps’ in that they have a set of partial results during the runtime of the method itself. Task-parallel These operations, as their name suggests, work on tasks. Each type of algorithm under this heading differs in the behaviour between tasks. Task-parallel algorithms include ‘sequentials’, ‘farms’, ‘pipes’, ‘if’, ‘for’ and ‘while’. ‘Sequential’ closes and terminates a nested set of skeleton algorithms. The methods and programs that are part of the skeletons are included as terminating aspects of the program, before closing. ‘Farms’ are also known as a group of tasks, a worker pool, or a master/slave arrangement of another function. A farm completes the given tasks by replicating them over multiple threads and running these concurrently. This divides the load for a specific thread, effectively creating a master/slave relationship between the threads. ‘Pipes’ are the more traditional forms of algorithms, where each method or function is run in a sequence. This follows the order in which the programmer has written their code. This is made parallel by computing varied tasks on a set of data, typically input, simultaneously to improve performance and speed. Each simultaneous computation is known as a stage. The pipe algorithm can be nested, where one is within another, each splitting up responsibilities to increase speed and also the number of stages. ‘If’ gives the program a conditional split of tasks, where a set of skeleton code is split into two main sections. A conditional statement is given to the program, therefore giving it a specified algorithm to follow. ‘For’ operates a task a number of times, with both the task and the count specified by the programmer, allowing for a more efficient set of code. The number of times that the code runs is a preset value that cannot be changed at runtime; the task must be completed the number of times given. ‘While’ is an algorithm very similar in operation to a ‘for’ algorithm, where a task is completed a number of times. However, in ‘while’ algorithms, the program computes the task repeatedly until a conditional statement is met. This means that the ‘while’ algorithm can perform its task a different number of times each time it is run. Resolution skeletons These skeletons are very different from the typical skeletons found above. ‘Resolution’ algorithms use a combination of methods to solve a specified problem. The algorithm’s given problem can be a “family of problems”. There are two main types of these skeletons: ‘divide and conquer’ and ‘branch and bound’. ‘Divide and conquer’ uses a map skeleton as its basis, combining this with a while skeleton to solve the problem. In map algorithms, functions on data are applied simultaneously. In ‘divide and conquer’ the set of data provided has a function applied to it using the map skeleton; this can be applied recursively using the ‘while’ algorithm. The ‘while’ is only broken when the entire problem is solved. ‘Branch and bound’ is an algorithm that also uses map algorithms; however, instead of applying the ‘while’ algorithm to run the tasks simultaneously, this algorithm splits the tasks into branches. Each branch has a specific purpose, or ‘bound’, where the conditional statement will cause it to stop. References Computer programming Programming language topics Software
14937349
https://en.wikipedia.org/wiki/Datacube%20Inc.
Datacube Inc.
Datacube Inc. (1978–2005) was an image processing company that developed real-time hardware and software products for the industrial, medical, military and scientific markets. Early history Datacube was founded in the mid-1970s by Stanley Karandanis and J Stewart Dunn. In the early days, Datacube manufactured board-level products for the Multibus, which was one of the first computer buses developed for microprocessors. Early boards designed by Dunn were PROM, RAM and character generator boards. Of these, character display boards such as the VT103 and VR107 were the best sellers, and were used in programmable read-only memory (PROM) programmers and similar systems. Karandanis, Datacube's president and CEO, in his early career followed the leaders in the semiconductor field from Bell Labs through Transitron to Fairchild Semiconductor. Karandanis was director of engineering at Monolithic Memories (MMI) when John Birkner and H.T. Chua designed the first successful programmable logic device, the programmable array logic (PAL) device. His contacts in the semiconductor field were instrumental in providing Datacube with components for its products. An OEM asked Datacube if a frame grabber could be built on a Multibus board. At the time, a frame grabber was a large box with multiple boards. The VG120 was the first-ever commercial single-board frame grabber: based on programmable array logic (PAL), it had 320 x 240 x 6-bit resolution, with grayscale video input and output. Karandanis hired Rashid Beg and Robert Wang from Matrox to develop the first Q-Bus (DEC LSI-11) frame grabber. They developed the QVG/QAF120 dual-board, 8-bit product primarily for a new startup named Cognex. While the latter were developing the hardware for Datacube, they were also planning to spin off and form a competitor, Imaging Technology, which was later purchased by Dalsa. To recover from this loss, and to complete the QVG120 product, Dave Erickson was hired as a consultant in 1981 from Octek by the engineering manager, Paul Bloom. Erickson came on full-time in 1982, as did Dave Simmons, who was to head applications, and Bob Berger, who was to head software. At this time, Imaging Technology Inc. (ITI) was developing a line of frame grabber products for Multibus and Q-bus, with a 'real-time' image processor based on a single-point multiplier, adder and lookup table (LUT). In 1983, Karandanis hired Shep Siegel from Ampex, who had worked on the advanced and successful Ampex Digital Optics (ADO) real-time video spatial manipulator for the broadcast TV market. With Dunn's help, Simmons developed the VG123 Multibus and Q-bus frame grabber boards. During this development, Paul Bloom was killed in what was apparently a gangland-style murder. The mystery of why this happened has never been solved. Dave Erickson was promoted to engineering manager to replace Bloom. Siegel came to add the SP123 image processor to the '123 family. But having worked on ADO, Siegel saw the limitations of the single-point architecture, and had a vision of what could be done by applying pipelined real-time imaging. He came with an understanding of digital signal processor devices (DSPs), image processing, filtering, and 2D warping, and with programmable logic in hand, saw what could be done. Erickson and Dunn had developed frame grabber boards deployed on most standard busses.
Each potential new customer required features not then available, and designing, laying out (using hand-taped artwork) and manufacturing a board for a single customer was risky, slow and expensive. What was needed was a way to leverage the technology already developed so that it could be applied to a wider customer base. Erickson felt that a modular architecture, in which functions could easily be added and a system tailored to a customer's needs, was critical. At this time, the VMEbus was being introduced by Motorola for its Motorola 68000 processors. The automotive and military markets liked the VMEbus because it was open and rugged. Datacube developers embarked on a marketing road trip to visit potential customers in the medical, automotive and military markets and ask what imaging functions they needed. MaxVideo 10 A modular and expandable system based on the VMEbus form factor could meet many customer needs, and so MaxVideo and MaxBus were born. Marketing research determined the primary functions required and a road map for the next few years. The first seven MaxVideo boards were Digimax (digitizer and display), Framestore (triple 512^2 framestore with unprecedented density), VFIR (the first real-time 3x3 image filter), SNAP (3x3 Systolic Neighborhood Array Processor), Featuremax (real-time statistics), SP (single-point general-purpose processor) and Protomax (MaxVideo prototyping board). Ten beta customers were lined up to receive the first seven boards. MaxWare was the software and drivers written to control the new boards. The first demo of the new hardware consisted of a camera's output being processed in real time by VFIR and displayed on a monitor. Siegel wrote a loop that varied the VFIR coefficients on a frame-by-frame basis to demonstrate not only the video real-time functionality but also that the function could be easily changed. In the spring of 1985, the product was not yet production-ready, so private viewings were set up with potential customers at the Detroit Vision '85 show. Customers' reaction was positive, and three months later the first shipments went out. MaxBus was based on the '123's expansion bus. It required accurate synchronization, clocking and timing of each board, plus a flexible way to route data from function to function. A simple differential ECL bus with a driver on one end and a terminator on the opposite end was used. For data, 14-pin ribbon cables allowed 8-bit 10 MHz data to be routed from any output to any input. At this time the company started to grow. Barry Egan was brought on to head manufacturing, and entrepreneur Barry Ungar was brought on as president. Bob Berger expanded the software department and moved the main computers from CP/M machines to Unix machines based on LSI-11s from Digital Equipment Corporation. A Unix-based Pyramid mainframe computer was purchased for hardware and software development. Berger bought the first Sun workstations and set up an Ethernet LAN. He registered "datacube.com" as the 68th internet domain name in existence (now owned by Brad Mugford). In hardware, John Bloomfield was hired from Ampex. The second tier of MaxVideo products was developed. Siegel began the first image warper, consisting of Addgen, Interp and XFS. Bloomfield expanded the fixed 512 x 512 processing to include region-of-interest (ROI) processing and began developing with the new FPGAs from Xilinx; RoiStore, MaxScan (the first arbitrary sensor interface), VFIR-II and MaxSigma followed. These products established Datacube as the technology leader in real-time imaging.
It was clear that a better way than the low-level control of MaxScan was needed to manage complex new imaging pipelines, and ImageFlow was developed. It provided full pipeline delay management and optimization, and a consistent API for programming imaging hardware. Key software programmers were brought on: Ken Woodland, Stephen Watkins and Ari Berman. Recognizing that not every imaging function could best be done in a pipeline, Siegel teamed with Analog Devices' new digital signal processor (DSP) group to develop Euclid, based on the ADSP-2100. Color digitization was required for some markets, so Siegel teamed with broadcast consultant Robert Bleidt to develop Digicolor. Datacube's first-generation image warper caught the attention of the 'image exploitation' industry and in particular Lockheed. Later, Siegel developed the second-generation warper for ROIs: Addgen MkII, based on the Weitek 3132, and Interp MkII. Dunn developed Megastore to handle the large images that this market required. By now the original SP and Featuremax were running out of steam, so SP MkII and Featuremax MkII were developed. Erickson developed MaxMux, the first Datacube board to use a custom ASIC; the MaxMux ASIC was also used on RoiStore to route signals. To address the need to combine imaging and workstation graphics, Dunn and Erickson developed MaxView, a high-resolution display with the ability to perform real-time image display in a window. Watkins ported the X Window System to this display. Although a single box of MaxVideo hardware could replace a room full of hardware at Lockheed, the product was not bought: Lockheed made too much money on the legacy system to want to move to the newer, smaller, better system. A typical system now consisted of a MaxBox 20-slot VMEbus chassis with up to 20 boards installed. The largest MaxVideo system ever built was by Honeywell for aerial target identification; it consisted of five 20-slot chassis full of MaxVideo hardware. A new MaxBus repeater was developed for these very large systems. Another important design-in for MaxVideo 10 was the FLIR pod test system built by Martin Marietta. Sandia National Labs adopted MaxVideo for a radar image targeting system. MaxVideo 20 The next step was to implement up to a full rack of MaxVideo 10 hardware in a dual-slot VMEbus package, increase the pipeline to 20 MHz, maintain the modularity and flexibility, and eliminate most of the blue MaxBus cables. MaxVideo 20 was born. This required a new three-port image memory module based on the 72-pin SIMM form factor, developed by Dunn; up to six memories were used on each Max20. Max20 also leveraged a new line of imaging chips from LSI Corporation, including a 32 x 32 digital crosspoint and an 8x8 20 MHz finite impulse response (FIR) filter. Dunn developed a new display controller, AG, capable of up to 40 MHz display, and Erickson developed a new family of 20 MHz analog and flexible digital front ends, AS and AD. Dunn developed the color digitizer, AC. Another feature of MaxVideo 20 was the new general-processing ASIC, AU, developed by Dunn. This device contained many innovative linear, nonlinear and statistical imaging functions, and its architecture was to be the core not only of Max20 but of the next-generation imaging system as well. Built in the pre-RTL age of schematics, Dunn's AU ASIC incorporated Booth multipliers designed by mathematician Steve Gabriel. The memory SIMM was implemented with CPLDs, FPGAs and graphics DRAM.
It was limited to 1 MB of memory and required 14 devices tightly packed onto the SIMM. Siegel developed VSIM, a fast and powerful ASIC to control high-density SDRAMs, and built a three-device replacement SIMM. It was a triple-ported image memory capable of 1, 4 or 16 MB memory sizes, with up to 40 MB/s input and output bandwidths, and contained numerous image processing functions as well. VSIM technology was to be used on numerous future products. A number of MaxModule processing modules were developed for MaxVideo 20. One of these was Siegel's MiniWarper, a 20 MHz real-time warper based on a new ASIC design, the MW4242. With the advent of MaxModules, it was now possible to implement an imaging function on a small and simple board with much less overhead than a full VME board. IBM's military division in Gaithersburg, Maryland, was interested in a new image exploitation system, so Datacube developed a third-generation exploitation system for it. This powerful system used an extremely high-bandwidth image memory and an address generator by Erich Whitney, capable of 7x7 spatial transformation matrices, all calculated in double-precision floating point. A powerful new display system, XI, was developed to display the results. Unfortunately, in the absence of a firm contract, IBM took only a couple of these systems, and a year of Datacube's talented engineering effort was effectively wasted. But Datacube had other projects going, and it leveraged several key technologies with MaxVideo 20. An off-the-shelf disk storage system was integrated for medical and image exploitation systems, but this system had unsolvable technical problems, so Siegel developed MD, based on an off-the-shelf external SCSI RAID box. A 12-bit digitizer, Digi-12, was developed by Erickson and was a key element in the Picker digital radiology system. Datacube designed an interface to a Sky array processor to obtain a GE military contract for a submarine sonar system. MaxPCI Until 1996, MaxVideo had been entirely VMEbus-based. VMEbus, Unix, OS-9, VxWorks and Lynx-OS had served their markets well, but Windows 95 and Pentium-based personal computers (PCs) with the PCI bus were coming on strong. Clearly a PC version of MaxVideo was required, and MaxPCI was developed over two years. VSIM was already capable of MaxPCI's target processing speed of 40 MHz, but everything else needed to be updated or redesigned. The core of MaxPCI was a new, giant crosspoint ASIC, 50 x 40 x 8, with a full ROI timing crosspoint and many imaging functions as well, developed by Whitney. Dunn redesigned the AU ASIC to operate at 40 MHz, and a new statistics unit was developed. Tim Ganley developed the acquisition subsystem, and Simmons developed a new family of 40 MHz analog and digital front ends, QA and QD. For an integrated display, a VGA board from another imaging company, Univision, was used. For a real-time disk solution, Siegel developed NTD, a software solution for real-time disk access. Meanwhile, Datacube recognized the need to better help its customers develop complex solutions in the medical, web inspection and machine vision markets, so three vertical integration development groups were formed: Siegel headed medical, Simmons headed web, and Scott Roth headed machine vision. Each of these groups developed systems for OEMs in its respective market. MaxVision Toolkit In 1995, the machine vision group produced the MaxVision Toolkit, a software library for image acquisition, object finding, metrology, inspection functions and camera calibration.
More specifically, the Toolkit provided image acquisition, object finding (normalized correlation and connectivity), metrology tools (line fitting, arc fitting and edge locators), inspection tools (golden template, pixel counting and histogramming), image processing tools (Sobel edge filters, cross-gradient edge filters, threshold operations, morphology, image arithmetic, image copy, X and Y projections, and convolutions), and high-accuracy calibration that corrected for perspective distortion. Swami Manickam, Scott Roth and Tom Bushman of the machine vision group developed a significant tool called the Finder, which performed intelligent normalized grayscale correlation invariant to rotation, scaling (to a limited extent) and perspective distortion; the effort resulted in a patent. Datacube designed and manufactured a single-board image processor with an embedded PowerPC CPU for the VMEbus, called mvPower, and introduced MvTD, a compact machine vision system using mvPower. It had four front-panel connectors for Hirose-type camera inputs, four auxiliary connectors, two serial ports, a PCI mezzanine card carrier connector, a display connector and an acquisition connector. Next, Datacube created the mvPower-PCI, with similar specifications to the VME mvPower. Both boards used Datacube ASICs for custom image processing and image acquisition. The MaxVision Toolkit ran on these boards under the VxWorks real-time operating system. Technologies Karandanis' contacts in the semiconductor market gave Datacube a competitive edge in applying new technologies. In the early days, video digital-to-analog converters (DACs) were large modules or expensive and power-hungry bipolar devices. Datacube worked with Silicon Valley startup Telmos to develop the first integrated video DAC. This was used on the '128 family as well as Digimax, and was the starting point for the video DACs and RAMDACs later produced by Brooktree and others. Datacube was to ride several technological waves, including fast ADCs, disk drives, DRAM, DSP devices and custom ASICs. Programmable logic was the key to Datacube's functional density: from the early days of bipolar programmable array logic (PAL) and programmable read-only memory (PROM), to generic array logic (GAL), to every generation of FPGAs from Xilinx and then Actel and QuickLogic, and Altera CPLDs. Many semiconductor manufacturers acknowledged that Datacube could help bring their new products to market; Datacube was an ideal beta site, and the vendors shared their roadmaps, latest offerings and support. ASICs were critical to Datacube's success, from the first small crosspoint (3,000 gates in 2 micrometres) and AU (40,000 gates in 0.8 micrometre) through VSIM, MiniWarper, AU40 and IXP. Each of these devices was leveraged across several products. After IXP, the density and cost of FPGAs began to catch up to full ASICs, and so FPGAs became the technology of choice. What happened? Datacube was always a hardware-centric company, and its products competed against software solutions running on CPUs. When CPUs were in the 100–1,000 MIPS range, Datacube's 1G–10G solutions were very appealing. When CPUs and multi-core CPUs began to exceed 1,000 MIPS, Datacube solutions were no longer needed except for the very highest-end applications, and the profits on those applications were not adequate to sustain a business. The MaxVision Toolkit ran on CPUs, so it survived; it was licensed to a few companies over the years, and the source code was ultimately purchased by Scott Roth, previously VP of machine vision.
Datacube's managers always took the attitude that the best way to protect intellectual property (IP) was to stay ahead of the competition, and felt that patents were a waste of time and money that attracted competition and potential infringement suits. So despite the many inventions, firsts and ideas the company developed, few patents were filed. This lack of patents ultimately left no technology base for licensing opportunities. References External links About Stanley Karandanis A Tribute to Stanley Karandanis The Abingdon Cross Benchmark Warping Technology US Patent 5,063,608 "Adaptive Zonal Coder" ex-Datacube Employee Group on Yahoo Datacube Photos on SmugMug MaxVideo paper from Electronic Imaging 1985 Defunct computer companies based in Massachusetts
62379582
https://en.wikipedia.org/wiki/Edward%20S.%20Fris
Edward S. Fris
Edward Steve Fris (1 September 1921 – 17 May 2010) was a lieutenant general in the United States Marine Corps. He served as the Director of Aviation, Headquarters Marine Corps, and is considered a pioneer in the development of today's Marine Air Command and Control System (MACCS). He was commissioned during World War II and originally trained as a radar officer. Following the war, he transitioned to become a naval aviator. His radar and electrical engineering background led to his nearly decade-long involvement with the development of the Marine Tactical Data System (MTDS). He served as the commanding officer (CO) of Marine Air Control Squadron 3 (MACS-3) for more than four years, from 1961 to 1965, as MTDS went through operational test and evaluation. Later assignments included a tour in Vietnam as the CO of Marine Air Control Group 18 (MACG-18), time as Commanding General, Marine Corps Air Bases Western Area, and two years as the Director of Marine Corps Aviation. His final assignment before retirement was as the Commanding General of the Marine Corps Development and Education Command. The Marine Corps Aviation Association award given annually to the top Marine Corps aviation command and control unit is named in his honor. Early years Born September 1, 1921, in Orient, Illinois, Edward Fris attended Frankfort Community High School, West Frankfort, Illinois, graduating in 1939. He was the president of his senior class at the Missouri School of Mines and graduated with a bachelor's degree in electrical engineering in February 1943. He was commissioned a second lieutenant in the United States Marine Corps Reserve on 2 February 1943. World War II and transition to naval aviator Fris completed Officer Candidates School in June 1943. Afterwards he attended naval radar training at Harvard University and the Massachusetts Institute of Technology, and completed the Radar Maintenance Course at Camp Murphy in Orlando, Florida. During World War II, Fris served at Marine Corps Air Station Cherry Point, North Carolina, as a radar officer with the 9th Marine Aircraft Wing (9th MAW) and later with Aircraft, Fleet Marine Force Pacific in Hawaii. Upon his return to the United States in January 1946, he entered flight training at Naval Air Station Corpus Christi, Texas. He received his wings and was designated a naval aviator upon completion of flight training at Naval Air Station Pensacola, Florida, on June 27, 1947. After completing advanced training at Naval Air Station Jacksonville, he was assigned to Marine Corps Air Station El Toro, California, for duty as a flight officer with VMF-312, with a follow-on tour as the executive officer of the station's headquarters squadron. He was promoted to captain in August 1947. 1950s He attended the Amphibious Warfare School Junior Course at Marine Corps Base Quantico, Virginia, graduating in December 1950. He remained at Quantico for a few more months before attending the United States Naval Postgraduate School at Annapolis, Maryland, and at Monterey, California, graduating in November 1951 and June 1954, respectively. From June through October 1954 he refreshed his pilot qualifications with VMFT-10 prior to his next assignment as executive officer of VMF-115, from December 1954 until December 1955. He next served as the electronics officer for the 2nd Marine Aircraft Wing beginning in January 1956.
In June 1957, Fris reported to Headquarters Marine Corps, Washington, D.C., for duty as Head, Aviation Electronics Logistics Section, Division of Aviation, for three years. During this time, then-Major Fris was responsible for writing the Marine Corps' requirements for its new aviation command and control program of record, the Marine Tactical Data System (MTDS). MTDS development and Vietnam Promoted to lieutenant colonel in January 1959, Fris was made the Marine Corps liaison officer with Litton Industries in Los Angeles, California, in July, overseeing the design and development of the MTDS program. At this time, MTDS was the largest research and development project in the Marine Corps. From September 1961 until February 1965, he served as the commanding officer of Marine Air Control Squadron 3 (MACS-3) at Marine Corps Air Facility Santa Ana, California. During this time, MACS-3 was the designated operational test and evaluation squadron for MTDS, seeing it through numerous financial and developmental issues until it was officially fielded in 1966. He returned to Washington, D.C., in April 1965, and served as Head, Marine Corps Amphibious Electronics Branch, Electronics Division, Bureau of Ships. Fris was reassigned to Headquarters Marine Corps in August 1966 as Head, Aviation Command Control and Communications Branch, Office of the Deputy Chief of Staff (Air). He was the first officer to serve in this newly formed billet, known today as Branch Head, Aviation Expeditionary Enablers (APX-1). Following this assignment he took command of Marine Air Control Group 18 (MACG-18) in July 1968 and served a year in Danang, South Vietnam. Promotion and Director of Aviation Upon his return to the United States, Fris was promoted to brigadier general on August 22, 1969, and designated Inspector General of the Marine Corps until July 1970, when he assumed duty as Assistant Deputy Chief of Staff (Programs). Detached from Headquarters Marine Corps in October 1971, he again reported to MCAS El Toro, where he commanded the air station and Marine Corps Air Bases Western Area. In September 1972 Fris returned to Headquarters Marine Corps to serve as Deputy Chief of Staff for Aviation. Following his advancement to lieutenant general on August 27, 1974, he became commanding general, Marine Corps Development and Education Command (MCDEC), MCB Quantico, Virginia, remaining in that billet until his retirement. Retirement, death and legacy Fris retired from active duty on 1 September 1975. He was married to the former Minerva E. Fellows of East Orange, New Jersey. He had two daughters, two step-daughters and a son, Captain Steve A. Fris, who preceded him in death. Fris died on 17 May 2010 and is buried in Quantico National Cemetery with his wife and son. The Marine Corps Aviation Association's annual award for the Marine aviation command and control unit of the year is named after him. Medals and decorations Fris's decorations included the Navy Distinguished Service Medal and the Legion of Merit. Citations References 1921 births 2010 deaths United States Marine Corps generals American electrical engineers United States Naval Aviators United States Marine Corps personnel of World War II Recipients of the Navy Distinguished Service Medal Recipients of the Legion of Merit People from Franklin County, Illinois Military personnel from Illinois Missouri University of Science and Technology alumni Aviators from Illinois Burials at Quantico National Cemetery
1179614
https://en.wikipedia.org/wiki/Postal%202
Postal 2
Postal 2 is a 2003 first-person shooter developed by Running with Scissors. It is the sequel to the 1997 game Postal and was released for Microsoft Windows in April 2003, macOS in April 2004 and Linux in April 2005. Postal 2 and its predecessor have gained notoriety for their high levels of violence, stereotyping and black comedy. Unlike the first game, Postal 2 is played from a first-person perspective. Set in the fictional town of Paradise, Postal 2 follows the life of "The Postal Dude", who must carry out mundane tasks throughout an in-game week, with the player deciding how violently or passively he will react to various situations. The player navigates the open world to carry out his chores, with player choice having an effect on the setting. The game received a mixed reception from critics upon its release. Several expansion packs followed, including a multiplayer expansion titled Postal 2: Share the Pain in December 2003, and the game remains continually updated, with an expansion pack titled Paradise Lost released in April 2015. Postal 2 drew attention for its violent gameplay and was responsible for multiple controversies. It was followed by a sequel, Postal III, in December 2011. Plot In Postal 2, the player takes on the role of the Postal Dude, a tall and thin red-headed man with a goatee, sunglasses, a black leather trench coat, and a T-shirt with a grey alien's face printed on it. Postal Dude lives in a dilapidated trailer on land behind a house in the small town of Paradise, Arizona, with his nagging wife, who is identified in the credits simply as "The Bitch". The game's levels are split into days of the week, starting Monday and finishing Friday. At the beginning of each day, Postal Dude is given several tasks to accomplish, such as "get milk", "confess sins" and other seemingly mundane chores. The objective of Postal 2 is to finish all of the tasks throughout the week, and the player can accomplish them in any way they wish, be it as peacefully and civilly as possible or as violently and chaotically as possible. It is possible, if occasionally difficult, to complete most tasks without engaging in battle or, at least, without harming or killing other characters, as evidenced by the game's tagline: "Remember, it's only as violent as you are!" The daily tasks can be accomplished in any order the player desires, and the game also includes one task that is activated only when Postal Dude urinates, in which the player is tasked with getting treatment for gonorrhea after Postal Dude discovers he has the infection. Throughout the course of the game, Postal Dude must put up with being provoked by other characters on a regular basis. He is given the finger, mugged, attacked by various groups of protesters, and harassed by an obnoxious convenience store owner/terrorist and his patrons, who cut in front of Postal Dude in the "money line". During the game, Postal Dude also encounters a marching band, a murderous toy mascot named Krotchy, the Paradise Police Department and its SWAT team, overzealous ATF agents, the National Guard, an eccentric religious cult, cannibalistic butcher shop workers, fanatical al-Qaeda terrorists, and former child actor Gary Coleman, among many others. By Friday afternoon, the final day in the game, the apocalypse occurs and societal collapse soon follows, with all law and order breaking down.
Cats begin to fall out of a darkly colored sky, and almost everyone in town becomes heavily armed, with random gun battles breaking out in the streets. Despite this, Postal Dude returns home to his trailer as normal, where he gets into an argument with his wife, who demands that he explain why he never picked up the "rocky road" she asked for at the beginning of the game. Postal 2 then ends with a gunshot being heard before cutting to the end credits. Gameplay One of the major concepts of Postal 2 is that it is meant to be a "living world", a simulation of a tongue-in-cheek, off-kilter town. Game characters live out their lives completely independently of the Dude's actions: walking around town, buying and selling merchandise, and even engaging in random shootouts with each other and the police. The town features many cars, but they are all "useless exploding props", according to the Dude, and cannot be driven, although they can be blown up and sent flying into the air. In addition to cats and dogs, elephants are also present; these animals can be shot, set on fire, or simply annoyed by the player walking into them, causing them to trumpet with rage and attack anyone within stomping distance. A peculiar feature is the ability to pick up cats as an inventory item. When used, Postal Dude shoves the barrel of the currently equipped firearm into the cat's anus (cats can only be used while equipped with a shotgun or assault rifle) as a "silencer". Every time a shot is fired, the cat meows in apparent agony, and the gunshot is muffled. After nine shots, the cat has run out of lives and flies from the end of the weapon. Most dogs can befriend the Dude if he feeds them a continual supply of dog biscuits or any other food (pizza, donuts, fast food). Once a canine's loyalty has been earned, the dog will attack anyone who attacks the Dude, or alternatively, anyone whom the Dude attacks. Dogs will also chase and kill cats, and play fetch with the Dude's inventory items and severed heads. Cows were also going to be included in the game but were left unimplemented; they did appear in Apocalypse Weekend and the A Week in Paradise modification. The game also features a cameo by Gary Coleman, acting as himself, who appears early on as the objective of one of the game's tasks (travel to the local shopping mall to get Gary's autograph). The player can choose to fight and kill Coleman or simply have the book signed peacefully (after enduring a long line-up). The Dude twice misremembers Coleman as having starred in What's Happening!! and The Facts of Life, when he actually starred in Diff'rent Strokes. Regardless of the Dude's actions, the police storm the building in an attempt to arrest Gary Coleman, and a gunfight ensues which invariably results in Coleman's apparent demise, with or without the player's help. Later in the game, Coleman can also be seen in the police station; when the player escapes from his cell, he also frees everyone else, including Coleman, who can be seen running alongside Krotchy. Coleman apparently survives, as he can be seen in the Apocalypse Weekend expansion, bandaged up in the hospital (various evil Gary Coleman clones also serve as recurring enemies during Postal Dude's constant hallucinations). Release Sales Postal 2 became Linux Game Publishing's fastest-selling game in its first month and contributed greatly to the continuing profitability of the company.
Expansions Share the Pain An updated edition of the game, entitled Postal 2: Share the Pain, included a multiplayer mode. The Macintosh and Linux versions of Postal 2 shipped only as Postal 2: Share the Pain. Share the Pain received an average score of 59 out of 100 based on 10 reviews on the review aggregator website Metacritic, indicating "mixed or average reviews". Apocalypse Weekend Postal 2: Apocalypse Weekend is an expansion pack to Postal 2 released by Running with Scissors on August 1, 2004 for Microsoft Windows, and on September 28, 2005 for the Mac OS X and Linux versions. Apocalypse Weekend expands the reaches of Paradise with new maps and missions set on Saturday and Sunday, adds new weapons and foes, and raises the gore and violence to an even greater level. It was later included in both the Postal Fudge Pack and Postal X: 10th Anniversary compilations alongside Share the Pain and several fan-produced mods, including A Week in Paradise, which allows content from Apocalypse Weekend to appear in the original game and lets the expansion's levels be played as part of the original five-day campaign. Apocalypse Weekend begins Saturday morning, with the Postal Dude waking up in the hospital, his head bandaged from a near-fatal gunshot wound. While the ending of Postal 2 leaves it ambiguous whether the Dude shot his wife or she shot him, after he wakes up in the hospital he finds a card from his wife saying that she is leaving him. It was later revealed on the official website that the Dude shot himself because of his wife's nagging. The Dude's ultimate goal is to recover his trailer and his dog Champ, and to this end he escapes from the hospital. With the exception of the zombies that appear later in the game, the madness depicted at the end of Friday in the previous game appears to have petered out. The Dude proceeds through several missions, including assignments from his former employers, Running with Scissors, encounters with mad cow Tourette zombies, and confrontations with terrorists and the military. Periodically, the Dude's head wound causes him to enter a nether realm where he is attacked by Gary Coleman clones. Throughout the weekend, the Dude fights off hordes of zombies, Taliban and the National Guard until he finally faces a zombified Mike Jaret, an employee of Running with Scissors. Once the Dude destroys it, he leaves Paradise in his car with his dog and his trailer while Paradise explodes, due to a massive nuclear warhead he "borrowed" to destroy a rival video game development and publishing company. The Dude's last words of the game are "I regret nothing". While gameplay is similar to its parent Postal 2, Apocalypse Weekend is not as open-ended. The gameplay is more linear in design, with the player mostly forced to follow a certain path to complete the game, typical of most first-person shooters. In addition, the player cannot play as a pacifist and is forced to kill animals and zombies in order to progress. Unlike the main game, Apocalypse Weekend also includes several "boss monster" encounters. All normal cats are replaced with "dervish cats", which spin in a manner similar to the Looney Tunes Tasmanian Devil, attacking any nearby character when agitated. Dervish cats can also be collected and, in addition to muffling guns, can be thrown at NPCs to attack them. Apocalypse Weekend received an average score of 45 out of 100 based on 4 reviews on Metacritic, indicating "generally unfavorable reviews".
Corkscrew Rules! Postal 2: Corkscrew Rules! is an official spin-off and expansion to Postal 2, developed by Avalon Style Entertainment and released in 2005 by Akella. The plot concerns a man called Corkscrew, who wakes up to find that his penis has somehow been amputated and goes on a mission to find it. The game was released only in Russia and Japan (in the latter under the title "ポスタル2 ロシアより愛をこめて", which translates to "From Russia with Love"). In 2017, an English version of the game was made available for free through the Steam Workshop. Paradise Lost Postal 2: Paradise Lost is an extension for Postal 2, announced for Steam at E3 2014 with a teaser trailer and released on April 17, 2015. Paradise Lost takes place 11 years after Apocalypse Weekend, the same amount of time as between the release of Postal 2 and that of Paradise Lost. The Postal Dude awakens from an 11-year radiation-induced coma, only to find his dog Champ missing, and has to go back to his home town of Paradise, which is now a post-apocalyptic wasteland. Paradise Lost also retcons Postal III, as it is revealed that the events of that game were just a nightmare the Postal Dude had during his coma. Returning to Paradise, the Dude allies with factions he had encountered in the previous games, including RWS, al-Qaeda and the Kosher Mad Cow Tourettes Zombies, who attempt to help him find Champ. Near the end of the game, the Dude has to go to Hell and battle Champ and his now-ex-wife, who has turned into a demon. Returning to Earth, he finds that all the factions have gone to war and faces a choice: return to each faction and defeat its leader, or leave the town. Eventually, he and Champ leave Paradise for the last time. Paradise Lost includes various characters based on real people, including former child actor Gary Coleman, a returning character from Postal 2, as well as Canadian actor Zack Ward, who had previously depicted the Postal Dude in the 2007 Postal film. Former tech journalist and media personality Milo Yiannopoulos also had a less prominent role in the game as an NPC who can be found at the 'Fire in the Hole' club from Thursday onwards. All three of these characters were played by their real-life counterparts; Coleman's dialogue was repurposed from Postal 2 owing to his death five years earlier. Compilations On November 13, 2006, RWS released a compilation of Postal - Classic and Uncut, Postal 2: Share the Pain, Apocalypse Weekend, A Week in Paradise, and Eternal Damnation, along with extra content (Postal Babes and video clips from "their cutting room floor"), as the Postal Fudge Pack, on a three-way hybrid DVD for Windows, Linux and the Mac. Recent copies of the Fudge Pack also include a Steam key for Postal, Postal 2 Complete and Postal III. The Postal X: 10th Anniversary edition contains all the content from the Postal Fudge Pack as well as new content such as a cereal box, A Very Postal Christmas, Music to Go Postal By, and previews for both Postal III and the Postal film. Postal 2 Complete is an online compilation containing Postal 2: Share the Pain and its expansion Apocalypse Weekend, available from both the Desura platform for Linux, Mac and Windows and from GOG.com for Windows. The Linux version available from Desura was newly updated for its release on the digital distribution platform. The pack was made available through Steam on November 2, 2012, after successfully being Greenlit by the community.
In November 2017, Running with Scissors released Postal XX: 20th Anniversary, a compilation of all Postal titles (including Postal III) and the Postal film. Mods Eternal Damnation Postal 2: Eternal Damnation is a total conversion of Postal 2 by Resurrection-Studios, released as a free download in 2005 and in the Postal Fudge Pack a year later. The plot concerns a man called John Murray, who is in a mental asylum after having killed a man who tried to hurt his girlfriend. Murray is also seen in Postal 2: Paradise Lost as an Easter egg. Controversies In 2004, the Office of Film and Literature Classification banned Postal 2 in New Zealand, citing high levels of violent content and animal violence. Distribution or purchase for personal use is a criminal offense, punishable by up to 10 years in prison and a fine of $50,000. In Australia, the game was not rated by the Australian Classification Board, but its multiplayer expansion Share the Pain was refused classification by the board in October 2005. Despite this, Share the Pain, along with the base game, is available for purchase on the Australian version of Steam. In Sweden, the Chancellor of Justice took the Swedish distributor of the game to court. He was prosecuted with "illegal depiction of violence", a crime falling under the Swedish freedom-of-speech act. The court dismissed the case on December 12, 2006. The game was removed from the German version of Steam, likely due to its content. Regarding his views on the subject, Linux and Macintosh developer Ryan C. Gordon, who ported the game to those platforms, stated that he feels that the game holds a mirror to the worst aspects of modern society, saying in an interview that the game is a "brilliant caricature of our mangled, disconnected, fast-food society, disguised as a collection of dirty jokes and ultraviolence." Michael Simms, founder of Linux Game Publishing, also at one point commented on the matter, stating that "although I wasn't a fan of the gameplay in Postal 2, I loved the message that the company was trying to put out. Because you can play Postal 2 in the most violent and graphic way, but you can also play it without hurting a single person. I don't know anyone who's played it like that, but I like that the people who made Postal are saying you can get through this game without any violence." In January 2008, three nineteen-year-olds were arrested following a three-week-long arson and theft spree in Gaston County, North Carolina. Their crimes were apparently inspired by actions that could be carried out in Postal 2. Reception Postal 2 received "mixed or average reviews" according to review aggregator website Metacritic. Some of the game's better reviews came from PC Gamer and Game Informer. On the other end of the spectrum, GMR and Computer Gaming World (CGW) both gave Postal 2 scores of zero, with CGW deriding Postal 2 as "the worst product ever foisted upon consumers." In response, negative quotes from Computer Gaming World'''s review ended up being proudly displayed on the box art of the Postal Fudge Pack. CNN journalist Marc Saltzman wrote that the game was "more offensive than fun" and concluded that "it simply goes too far, too often, and offers little else."GameSpot criticized the game's loading times, graphics and gameplay, and the gore was called "surprisingly subdued" in comparison to contemporary games like Soldier of Fortune II: Double Helix. 
In a middling review for IGN, author Ivan Sulic disliked the game's crude and childish humour and dismissed the setting of Paradise as "bland". Eurogamer similarly attacked the game for being immature. Ivan Deez of IGN says that Postal Dude has a "sick mind" when referring to the source of some of the errands he has to complete. Macdonald and Rocha of Canada.com describe Postal Dude as a man whose "raison d'être was to eliminate anyone - man, woman and child - with a dizzying arsenal of weapons", but at the same time as "a misunderstood and ostracized man who takes his revenge on the world with a killing spree." In other media Scenes from the game can be seen in the music video for the Black Eyed Peas single "Where Is the Love?" Film adaptation Although acknowledged as an adaptation of the first Postal game, the 2007 film adaptation of the same title, directed by Uwe Boll, borrows many elements from Postal 2, including the Krotchy doll, the trailer park, the cat silencer, The Lucky Ganesh convenience store, the terrorists, and Uncle Dave and his compound, among others. Gary Coleman was not involved in this film; instead Verne Troyer, appearing as himself, fulfilled Coleman's function in the movie. In 2013, Boll announced a second Postal film. On August 28, 2013, Boll announced he was funding production of Postal 2 through Kickstarter, but the project was cancelled in October 2013. References External links Steam store page 2003 video games Black comedy video games Censored video games First-person shooters Linux games MacOS games Obscenity controversies in video games Open-world video games Parody video games Postal (franchise) Satirical video games Self-reflexive video games Steam Greenlight games Video games with Steam Workshop support Termination of employment in popular culture Unreal Engine games Video games about terrorism Video games about zombies Video games adapted into films Video games developed in the United States Video games set in 2003 Video games set in 2015 Video games set in Arizona Video games with expansion packs Windows games Cultural depictions of Osama bin Laden
9111229
https://en.wikipedia.org/wiki/Interrupt%20flag
Interrupt flag
The Interrupt flag (IF) is a flag bit in the CPU's FLAGS register, which determines whether or not the CPU will respond immediately to maskable hardware interrupts. If the flag is set to 1, maskable interrupts are enabled; if it is cleared (set to 0), such interrupts are disabled until the flag is set again. The Interrupt flag does not affect the handling of non-maskable interrupts (NMIs) or software interrupts generated by the INT instruction. Setting and clearing In a system using the x86 architecture, the Interrupt flag is set and cleared by the instructions STI (Set Interrupt Flag) and CLI (Clear Interrupt Flag), respectively. The POPF (Pop Flags) instruction removes a word from the stack into the FLAGS register, which may result in the Interrupt flag being set or cleared based on the corresponding bit of the value popped from the top of the stack. Privilege level In systems that support privileged mode, only privileged code (usually the OS kernel) may modify the Interrupt flag. In an x86 system this only applies to protected mode code (real mode code may always modify the Interrupt flag). CLI and STI are privileged instructions, which cause a general protection fault if an unprivileged application attempts to execute them; the POPF instruction will not modify the Interrupt flag if the application is unprivileged. Old DOS programs Some old DOS programs that use a protected mode DOS extender and install their own interrupt handlers (usually games) use the CLI instruction in the handlers to disable interrupts, and either POPF (after a corresponding PUSHF) or IRET (which restores the flags from the stack as part of its effects) to restore the flag. This works if the program was started in real mode, but causes problems when such programs are run in a DPMI-based container on modern operating systems (such as NTVDM under Windows NT or later). Since CLI is a privileged instruction, it triggers a fault into the operating system when the program attempts to use it. The OS then typically stops delivering interrupts to the program until the program executes STI (which would cause another fault). However, the POPF instruction is not privileged and simply fails silently to restore the flag. The result is that the OS stops delivering interrupts to the program, which then hangs. DOS programs that do not use a protected mode extender do not suffer from this problem, as they execute in V86 mode, where POPF does trigger a fault. There are few satisfactory resolutions to this issue. It is usually not possible to modify the program, as source code is typically not available, and there is no room in the instruction stream to introduce an STI without massive editing at the assembly level. Removing CLIs from the program or causing the V86 host to ignore CLI completely might cause other bugs if the guest's interrupt handlers are not re-entrant safe (though when executed on a modern processor, they typically run fast enough to avoid overlapping interrupts). Disabling interrupts In the x86 instruction set, CLI is commonly used as a synchronization mechanism in uniprocessor systems. For example, operating systems use CLI to disable interrupts so that kernel code (typically a driver) can avoid race conditions with an interrupt handler, which is necessary when modifying multiple associated tables without interruption.
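As a concrete illustration, the following is a minimal sketch of the usual save/disable/restore idiom for such a critical section, written here for a uniprocessor x86-64 kernel using GCC inline assembly. The helper names and the update_tables() routine are hypothetical, and the code can only run at ring 0, where CLI and STI are permitted:

#include <stdint.h>

/* Read RFLAGS; there is no direct MOV from the flags register,
   so it is pushed onto the stack and popped into a register. */
static inline uint64_t save_flags(void)
{
    uint64_t flags;
    __asm__ volatile ("pushfq; popq %0" : "=r" (flags) : : "memory");
    return flags;
}

static inline void irq_disable(void)
{
    __asm__ volatile ("cli" ::: "memory");   /* clear IF */
}

static inline void irq_restore(uint64_t flags)
{
    if (flags & (1 << 9))                    /* bit 9 of (R)FLAGS is IF */
        __asm__ volatile ("sti" ::: "memory");
}

void update_tables(void)
{
    uint64_t flags = save_flags();  /* remember whether IF was set */
    irq_disable();                  /* no interrupt handler can run now */
    /* ... modify the associated tables here ... */
    irq_restore(flags);             /* re-enable only if previously enabled */
}

Restoring the saved flags, rather than executing STI unconditionally, keeps nested critical sections safe: an inner section will not re-enable interrupts that an outer section still expects to be masked.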
Enabling interrupts The STI instruction of the x86 instruction set enables interrupts by setting the IF. In some implementations of an instruction that enables interrupts, interrupts are not enabled until after the next instruction; in that case, the sequence of enabling interrupts immediately followed by disabling them results in interrupts not being recognized. Multiprocessor considerations The Interrupt flag only affects a single processor; in multiprocessor systems, an interrupt handler must use other synchronization mechanisms such as locks. See also Interrupt FLAGS register (computing) Intel 8259 Advanced Programmable Interrupt Controller (APIC) Interrupt handler Non-maskable interrupt (NMI) Programmable Interrupt Controller (PIC) x86 References External links Intel 64 and IA-32 Architectures Software Developer Manuals - Retrieved 2017-09-14 X86 instructions Interrupts
12152395
https://en.wikipedia.org/wiki/Espresso%20heuristic%20logic%20minimizer
Espresso heuristic logic minimizer
The ESPRESSO logic minimizer is a computer program using heuristics and specialized algorithms for efficiently reducing the complexity of digital logic gate circuits. ESPRESSO-I was originally developed at IBM by Robert K. Brayton et al. in 1982 and improved as ESPRESSO-II in 1984. Richard L. Rudell later published the variant ESPRESSO-MV in 1986 and ESPRESSO-EXACT in 1987. Espresso has inspired many derivatives. Introduction Electronic devices are composed of numerous blocks of digital circuits, the combination of which performs the required task. The efficient implementation of logic functions in the form of logic gate circuits (such that no more logic gates are used than necessary) is needed to minimize production costs and/or maximize a device's performance. Designing digital logic circuits All digital systems are composed of two elementary functions: memory elements for storing information, and combinational circuits that transform that information. State machines, like counters, are a combination of memory elements and combinational logic circuits. Since memory elements are standard logic circuits, they are selected out of a limited set of alternative circuits, so designing digital functions comes down to designing the combinational gate circuits and interconnecting them. In general, the instantiation of logic circuits from a high-level abstraction is referred to as logic synthesis, which can be carried out by hand, but usually some formal method by computer is applied. In this article, the design methods for combinational logic circuits are briefly summarized. The starting point for the design of a digital logic circuit is its desired functionality, derived from the analysis of the system as a whole that the circuit is to be part of. The description can be stated in some algorithmic form or by logic equations, but may be summarized in the form of a table as well. The below example shows a part of such a table for a 7-segment display driver that translates the binary code for the values of a decimal digit into the signals that cause the respective segments of the display to light up:

  Digit  Code    A B C D E F G
    0    0000    1 1 1 1 1 1 0
    1    0001    0 1 1 0 0 0 0
    2    0010    1 1 0 1 1 0 1
    3    0011    1 1 1 1 0 0 1
    4    0100    0 1 1 0 0 1 1
    5    0101    1 0 1 1 0 1 1
    6    0110    1 0 1 1 1 1 1
    7    0111    1 1 1 0 0 0 0
    8    1000    1 1 1 1 1 1 1
    9    1001    1 1 1 1 0 1 1

The seven segments are laid out as follows:

  -A-
 |   |
 F   B
 |   |
  -G-
 |   |
 E   C
 |   |
  -D-

The implementation process starts with a logic minimization phase, described below, which simplifies the function table by combining the separate terms into larger ones containing fewer variables. Next, the minimized result may be split up into smaller parts by a factorization procedure and eventually mapped onto the available basic logic cells of the target technology. This operation is commonly referred to as logic optimization. Classical minimization methods Minimizing Boolean functions by hand using the classical Karnaugh maps is a laborious, tedious and error-prone process. It is not suited to more than six input variables, and is practical only for up to four variables, while product term sharing for multiple output functions is even harder to carry out. Moreover, this method does not lend itself to automation in the form of a computer program. However, since modern logic functions are generally not constrained to such a small number of variables, and since the cost as well as the risk of making errors is prohibitive for manual implementation of logic functions, the use of computers became indispensable.
The first alternative method to become popular was the tabular method developed by Willard Quine and Edward McCluskey. Starting with the truth table for a set of logic functions, a set of prime implicants is composed by combining the minterms for which the functions are active (the ON-cover) or for which the function value is irrelevant (the Don't-Care-cover or DC-cover). Finally, a systematic procedure is followed to find the smallest set of prime implicants with which the output functions can be realised. Although this Quine–McCluskey algorithm is very well suited to implementation in a computer program, the result is still far from efficient in terms of processing time and memory usage. Adding a variable to the function will roughly double both of them, because the truth table length increases exponentially with the number of variables. A similar problem occurs when increasing the number of output functions of a combinational function block. As a result, the Quine–McCluskey method is practical only for functions with a limited number of input variables and output functions. ESPRESSO algorithm A different approach to this issue is followed in the ESPRESSO algorithm, developed by Brayton et al. at the University of California, Berkeley. It is a resource- and performance-efficient algorithm aimed at solving the heuristic hazard-free two-level logic minimization problem. Rather than expanding a logic function into minterms, the program iteratively manipulates "cubes", representing the product terms in the ON-, DC- and OFF-covers. Although the minimization result is not guaranteed to be the global minimum, in practice it comes very close, and the solution is always free from redundancy. Compared to the other methods, this one is essentially more efficient, reducing memory usage and computation time by several orders of magnitude. Its name reflects the way of instantly making a cup of fresh coffee. There is hardly any restriction on the number of variables, output functions and product terms of a combinational function block; in general, tens of variables with tens of output functions are readily dealt with. The input for ESPRESSO is a function table of the desired functionality; the result is a minimized table, describing either the ON-cover or the OFF-cover of the function, depending on the selected options. By default, the product terms will be shared as much as possible by the several output functions, but the program can be instructed to handle each of the output functions separately. This allows for efficient implementation in two-level logic arrays such as a PLA (Programmable Logic Array) or a PAL (Programmable Array Logic).
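As an illustration, the 7-segment driver table shown earlier could be presented to ESPRESSO in the Berkeley PLA file format along the following lines. This is only a sketch: the signal names are invented here, option lines differ between versions, and the six unused input codes (1010 through 1111) could additionally be listed with don't-care outputs to give the minimizer more freedom.

# 7-segment decoder, ON-sets of the seven segment outputs
.i 4
.o 7
.ilb x3 x2 x1 x0
.ob a b c d e f g
.p 10
0000 1111110
0001 0110000
0010 1101101
0011 1111001
0100 0110011
0101 1011011
0110 1011111
0111 1110000
1000 1111111
1001 1111011
.e

Run through the program with a command such as espresso seg7.pla, the minimized cover is written to standard output in the same format.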
The ESPRESSO algorithm proved so successful that it has been incorporated as a standard logic function minimization step into virtually every contemporary logic synthesis tool. For implementing a function in multi-level logic, the minimization result is optimized by factorization and mapped onto the available basic logic cells in the target technology, whether this concerns a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). Software ESPRESSO The original ESPRESSO program is available as C source code from the University of California, Berkeley website. The last release was version 2.3, dated 1988. The ESPRESSO-AB and EQNTOTT (equation to truth table) program, an updated version of ESPRESSO for modern POSIX systems, is available in Debian Linux distribution (.deb) file format as well as C source code. The last release was version 9.0, dated 2008. Logic Friday Logic Friday is a free Windows program that provides a graphical interface to Espresso, as well as to misII, another module in the Berkeley Octtools package. With Logic Friday, users can enter a logic function as a truth table, equation or gate diagram, minimize the function, and then view the results in both of the other two representations. The last release was version 1.1.4, dated 2012. Minilog Minilog is a free Windows program that provides logic minimization using the Espresso algorithm. It is able to generate a two-level gate implementation for a combinational function block with up to 40 inputs and outputs, or a synchronous state machine with up to 256 states. It is part of the Publicad educational design package. ESPRESSO-IISOJS ESPRESSO-IISOJS is a JavaScript implementation of ESPRESSO-II for single-output functions. It employs unit propagation as an additional optimization technique for the various algorithms in ESPRESSO-II that are based on the unate recursive paradigm. Another addition is allowing control over when literals can be raised, which can be exploited to effectively minimize Kleene logic functions. References Further reading Electronic design automation software Electronics optimization Free simulation software Electronic circuit simulators
1423563
https://en.wikipedia.org/wiki/Comparison%20of%20Internet%20forum%20software
Comparison of Internet forum software
This article outlines the general features commonly found in various Internet forum software packages. It highlights major features that the manager of a forum might want and should expect to be commonly available in different forum software. These comparisons do not include remotely hosted services which use their own proprietary software, rather than offering a package for download which webmasters can host by themselves. General information Basic general information about the forums: creator/company, license/price, etc. Features Flat vs. threaded A flat forum is one where each message is added onto the end of the discussion, with no set relation to any prior messages (other than being on the same discussion topic, except in the case of off-topic posting). However, there is normally a feature to 'quote' another user's post, to allow referencing back to other posts. A threaded forum is one where users can specify that their message is a reply to an existing message. Threaded forums can display relationships between message topics and associated replies, such as by indenting replies and placing them below the post they reference. Threaded forums are most commonly used for discussions where individual messages tend to be short, such as on social news sites (e.g. Slashdot or reddit) or in commenting systems like Disqus. User-selectable themes Most forums provide an option for the forum owner to customize the look and feel. Some forums also allow the administrator to create multiple styles or themes, and allow the user to choose which one they wish to view. Themes may simply be a different set of colors and graphics, or they may involve a different layout, such as one optimized for small-screen devices. The comparison table shows whether a forum software allows forum administrators to customise the "template" of the forum (or specific sections thereof) without altering the released code. Support is considered partial where it is only possible to add CSS rules in this way. Unread message tracking Unread message tracking refers to the way forum software tracks and displays messages that have not yet been read by the current user. This can be one of the following: Session — when a user's session starts, this method relies on the user's "last visit time" to display all messages created since that date as unread. Everything posted before the "last visit time" is considered "read", regardless of whether the user has actually seen it. Until the user's session expires, this method properly tracks read and unread messages, starting with the messages that were marked unread when the session started. This method is broadly used, due to the simplicity and speed benefits of only storing and checking against a single database value. Full — forum software which records in the persistent database which messages have been read or unread by each user, regardless of user session expiration. Some forum software also allows the user to 'mark as unread', so that they can come back to a message later. Export, portability This column indicates whether users can export data from a forum installation and then import it into a new installation of the same software (cf. right to fork and data portability) or feed it to data conversion tools. Software portability is a key assessment criterion for the choice and procurement of software. Email/NNTP interface Whether the software can be used with standard email or NNTP clients.
Export, portability

This column judges the ability of users to export data from a forum installation and then import it into new installations of the same software (cf. right to fork and data portability), or to feed it to data conversion tools. Software portability is a key assessment criterion for the choice and procurement of software.

Email/NNTP interface

Whether the software can be used with standard email or NNTP clients. Adoption of standard protocols is key to interoperability.

Languages

Whether the internationalization and localization of the software are sufficient to both allow and actually provide grammatically correct support for the native language of the target users. For the purposes of this table, a language is counted when it is used by a number of installations/users equal to at least 1% of the top million web properties using that software, without major reported bugs in its support.

Single sign-on

Single sign-on is often required to enhance the web accessibility of the application. It can be accomplished with standards like OAuth and OpenID.

Post drafts

To prevent loss of content from browser crashes and the like, forum software may be able to automatically save unfinished posts as drafts, as some word processor and text editor software does. Drafts may be stored server-side (in the forum account) or client-side (in the browser's local storage). The former allows drafts to be resumed immediately on another device, whereas the latter allows saving drafts while the internet connection is down, and slightly reduces bandwidth consumption.

Other features

Duplicate thread prevention

Automatic recommendation systems based on content and users can help forum users find existing discussions similar to those they are browsing or on the topic they are searching for, or else reach the right users with their posts, avoiding "spamming" the forum with duplicate or off-topic posts. Many users do not bother to search a forum and directly create new threads to seek an answer to a question. On some forums, when a user types a new thread subject, the software automatically brings up similar threads on the side. This helps keep the number of redundant threads (and overall forum pollution) down, as users who neglect to search for a topic may find the answer to their question while creating the new thread.

Forum spam defenses

Most forums are at risk of continuous attack by forum spammers, largely promoting websites with no relevance to the forum's niche. Systems vary in how they are geared for defense, and checking what is offered is an important consideration before selection. A forum cannot succeed without an effective system of defense and an efficient set of tools for spam removal. CAPTCHAs are a common feature of most Internet forum software and are often used to prevent automated registrations. Banning or deleting memberships should come as standard, with the ability to blacklist a username, email address or IP address for variable time spans. Reference to an anti-forum-spam database can be built into the forum software, or offered as an add-on for specific databases.

User-friendly URLs

Human-friendly forum URLs do not have a query string and instead contain only the path of the topic. A user-unfriendly URL may contain cryptic parameters, numeric IDs, or file type extensions (e.g. .php) that do not matter to the user and could change if the forum were reimplemented using a different programming language. User-friendly URLs are easy to remember and to type, and may enhance search engine optimization (SEO).

user-unfriendly URL example: http://example.com/forum/index.php?t=rview&th=120029
user-friendly URL example: http://example.com/usability-issues/user-friendly-urls

In general this is accomplished via URL mapping. Historically, however, in many forum software packages human-friendly URLs were an afterthought implemented via URL rewriting: URLs often contain a numeric ID that identifies the thread, while the remainder of the URL can in reality be any string, so http://example.com/forum/12345/lets-use-friendly-urls and http://example.com/forum/12345/bogus-path actually point to the same thread, http://example.com/forum/12345/. These are also known as URL slugs.

The main non-controversial counter-argument to user-friendly URLs is that they are leaked in the HTTP referer header field when a user clicks on an external link from a post, which is undesirable for private (sub)forums, since a URL derived from the topic title could convey sensitive information. This issue can be resolved by rewriting external links to point to a redirection page that performs referer hiding.
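A minimal sketch of the slug mechanism just described, assuming URLs of the form /forum/<id>/<slug>; the names slugify and canonical_path are hypothetical, not taken from any particular package. The thread is identified by the numeric ID alone, which is why any trailing string resolves to the same topic.

import re

def slugify(title):
    # Drop apostrophes so "Let's" becomes "lets", then replace runs of
    # other non-alphanumeric characters with single hyphens.
    title = title.lower().replace("'", "")
    return re.sub(r"[^a-z0-9]+", "-", title).strip("-") or "topic"

def canonical_path(thread_id, title):
    return "/forum/%d/%s" % (thread_id, slugify(title))

# The numeric ID alone identifies the thread; the slug is cosmetic.
print(canonical_path(12345, "Let's use friendly URLs"))
# -> /forum/12345/lets-use-friendly-urls

A forum built this way might also redirect requests whose slug does not match the canonical one, so that search engines index a single URL per thread.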
Data storage

Information about what data storage system can be used.

See also

Comparison of civic technology platforms
Comparison of Q&A sites
List of Internet forums

References

Internet forum software
Forum software