40312341
https://en.wikipedia.org/wiki/Steven%20Stalinsky
Steven Stalinsky
Steven Stalinsky is an expert on the Middle East, terrorism and terrorist use of the Internet, and encryption technologies, and has served as Executive Director of the Middle East Media Research Institute (MEMRI) since 1999. Since 2006, his research has focused on detailing and developing strategies against cyber jihad, describing how terrorist groups such as Al-Qaeda, ISIS, and others use the Internet, social media, and encryption for propaganda, recruiting, and hacking. He was an early advocate of calling on the tech community to take stronger action to remove terrorist content from their platforms and to create industry standards to combat it. Research on terrorist use of social media Stalinsky has published extensive research and documentation of the use of Facebook, YouTube, Twitter, Tumblr and other social media by Al-Qaeda and ISIS. He has been interviewed about terrorist use of social media by Fox News, The Washington Post, The Telegraph, the South China Morning Post, The Washington Times, The Mercury News, The Hill, WIRED, and The Daily Telegraph. Vice's technology website, Motherboard, reported on MEMRI research co-authored by Stalinsky. Stalinsky was credited by Fast Company with publishing "one of the only studies to date" on how Jihadists use the social media service Instagram. In the article, Stalinsky noted that much of this content also appeared on other corners of the Internet and was shared via other forms of social media. Stalinsky was one of the first to write about terrorist use of the encrypted messaging app Telegram in his research report, 'Supporters of the Islamic State' – Anatomy Of A Private Jihadi Group On Encrypted App Telegram. He also debated the issue with the app's founder, Pavel Durov, via Twitter. Stalinsky has been interviewed numerous times about his research on terrorist use of Telegram, VK and encryption, including articles from The Wall Street Journal, The Washington Post, The Washington Times, Voice of America, The Hill, SCmagazine.com, CNN, NBC, The Jerusalem Post, The Los Angeles Times, Discovery, FedScoop, The Dallas Morning News, Homeland Security Today, Wired, CBS, Business Insider and others. Stalinsky's research on Al-Qaeda's online magazine Inspire was cited in a U.S. Department of Justice terrorism case. The U.S. government used translations and analysis quoted in Stalinsky's research as Exhibit 1 to answer a lawsuit by the father of Anwar Al-Awlaki, who petitioned President Barack Obama, Secretary of Defense Robert Gates and CIA Director Leon Panetta, seeking his son's removal from the U.S. government's "kill list." In 2013 and 2014, several media organizations used Stalinsky's research describing the indoctrination and exploitation of young children by Al-Qaeda and other Jihadist groups. A Voice of America article quoted Stalinsky: "There is a concerted effort by Al-Qaeda central and splinter groups – greater than ever – to concentrate on children. Al-Qaeda has realized that this is an effective way for the group to spread its ideology and grow." A Washington Post article included Stalinsky's research, quoting him, "This is the future threat... These are the children of Al-Qaeda." Fox News also reported on the issue and included his research. Research for the Middle East Media Research Institute Mr.
Stalinsky has authored over 100 (nonacademic) research reports while at the Middle East Media Research Institute, on issues ranging from reform in the Arab world to online activity by Al-Qaeda, ISIS, the Taliban and other terrorist organizations, as well as their use of encryption technology. Other research reports detailed terrorist use of U.S.-based libraries such as the Internet Archive, Arab and Iranian hacking groups and more. He was one of the first to write about Jihadism's use of social media, including YouTube, Twitter and Telegram, with a series of research reports on specific terrorist activity, such as Hezbollah's presence on Facebook, YouTube, Twitter and apps from Google Play and iTunes. He also reported on the thousands of YouTube videos – with over 3 million views at that time – featuring extremist Yemeni-American sheikh Anwar Al-Awlaki. Stalinsky led early efforts to persuade YouTube to add a feature to flag terrorist content, and one of his reports documented his 2010 meeting with Google officials on this matter. Another report detailed his years of effort to prompt Twitter to take action about Jihadis' use of their social networking service – efforts which culminated in a 2013 Congressional letter to the FBI urging them to take action. Stalinsky authored a report for the Middle East Media Research Institute (MEMRI) titled "Will President-Elect Trump Defeat Cyber Jihad?" The report describes how Islamic terrorists use social media and the Internet for recruitment, fundraising, planning and propaganda, and calls on the Trump administration to address the issue. It urges the administration to bring together technology experts and researchers in a Bletchley Park-like setting and to be forward-thinking on the issue. Chronicling Al-Qaeda leader Adam Gadahn Stalinsky first wrote about Gadahn on September 13, 2006, when the New York Sun published "A Jewish Musician's Son Joins Al Qaeda's Ranks," by Stalinsky. The op-ed provides details of the life of Adam Gadahn (born Pearlman), the American who left his home in California to join the ranks of Al-Qaeda. Gadahn was put on the FBI's Most Wanted list in 2004, reportedly received training at terrorist camps in Afghanistan and was sent to Baltimore on a suicide-bombing mission. The op-ed notes Gadahn's appearances in several Al-Qaeda media productions, including his formal introduction in a September 2, 2006 video by then-Al-Qaeda second-in-command Ayman Al-Zawahiri. After the United States Government announced it had killed Gadahn in a drone attack, Stalinsky wrote "Why Adam Gadahn's Killing Matters to Al Qaeda," which was published in Homeland Security Today on May 19, 2015. The op-ed discusses the significance of Adam Gadahn's death to Al-Qaeda, and his role as part of the organization's media outreach efforts to the Western world. Stalinsky notes that Gadahn was one of the few people remaining in contact with Al-Qaeda leader Osama bin Laden. Stalinsky also points out that, as an Al-Qaeda propagandist, Gadahn's story could resonate with susceptible populations in the United States and other Western countries, and expand the organization's effort to reach a broader audience. A longer version of the article "Why Adam Gadahn's Killing Matters to Al Qaeda" appeared as MEMRI Daily Brief 45 on the website of MEMRI on April 24, 2015.
On the one-year anniversary of Gadahn's death, Stalinsky wrote "Revisiting American Al-Qaeda Spokesman And Leader Adam Gadahn's Influence On The First Anniversary Of His Death." Stalinsky also authored a report published on September 8, 2016, "Al-Qaeda's U.S.-Born Leader Adam Gadahn and 9/11," which provides a very detailed account of Al-Qaeda's American spokesman Adam Gadahn. Stalinsky's book about Adam Gadahn, AMERICAN TRAITOR: The Rise and Fall of Al-Qaeda's U.S.-Born Leader Adam Gadahn, provides detailed background on Gadahn's life story, including his American upbringing, his conversion to Islam and subsequent radicalization, his move to Pakistan, the translation and video work he did for Al-Qaeda, and how he became accepted by Al-Qaeda's top echelons, including the architects of the September 11 attacks. Gadahn was the first American since WWII to be indicted for treason by the U.S. Government. In writing the book, Stalinsky had access to research from the Middle East Media Research Institute (MEMRI), including videos featuring Gadahn and a lengthy interview Gadahn made for publication after his death. Research on terrorist use of drones Stalinsky co-authored a major study for MEMRI on the use of drones by the Islamic State of Iraq and the Levant and other Jihadi organizations, which has been cited by many media outlets. The Washington Post subsequently interviewed Stalinsky for an article on how ISIS (Islamic State) uses unmanned aerial vehicles. The website TheStreet.com interviewed Stalinsky for an article about ISIS and drones. The website MeriTalk.com quoted Stalinsky for an article on ISIS and UAVs. A Discover article draws on a report authored by Stalinsky. See also MEMRI Hezbollah Islamic terrorism References External links Articles.washingtonpost.com Living people American nonprofit executives American non-fiction writers Year of birth missing (living people) Place of birth missing (living people)
30991801
https://en.wikipedia.org/wiki/Data%20Integrity%20Field
Data Integrity Field
Data Integrity Field (DIF) is an approach to protecting data integrity in computer data storage against data corruption. It was proposed in 2003 by the T10 subcommittee of the International Committee for Information Technology Standards. A similar approach for data integrity was added in 2016 to the NVMe 1.2.1 specification. Packet-based storage transport protocols have CRC protection on command and data payloads. Interconnect buses have parity protection. Memory systems have parity detection/correction schemes. I/O protocol controllers at the transport/interconnect boundaries have internal data path protection. Data availability in storage systems is frequently measured simply in terms of the reliability of the hardware components and the effects of redundant hardware. But the reliability of the software, its ability to detect errors, and its ability to correctly report or apply corrective actions to a failure have a significant bearing on the overall storage system availability. The data exchange usually takes place between the host CPU and the storage disk. There may be a storage data controller in between these two; the controller could be a RAID controller or a simple storage switch. DIF extended the disk sector from its traditional 512 bytes to 520 bytes by adding eight additional protection bytes. This extended sector is defined for Small Computer System Interface (SCSI) devices, which is in turn used in many enterprise storage technologies, such as Fibre Channel. Oracle Corporation included support for DIF in the Linux kernel. An evolution of this technology called Protection Information was introduced by 2012. One large vendor promoting the technology is EMC Corporation. References External links Linux Data Integrity, August 30, 2008, Oracle Corporation, by Martin K. Petersen (archived from the original on January 9, 2015) Linux Storage Topology and Advanced Features, November 24, 2009, by Martin K. Petersen Data Integrity Field at T10.org (link working as of February 15, 2019) Error detection and correction
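The eight protection bytes are commonly described as a two-byte guard tag (a CRC over the sector data), a two-byte application tag, and a four-byte reference tag (typically derived from the logical block address). The sketch below shows how a 512-byte sector could be extended to 520 bytes under those assumptions; the CRC-16 polynomial 0x8BB7 and the exact tag layout are stated here for illustration, not as a normative reading of the T10 standard.

```python
# Sketch: extend a 512-byte sector to a 520-byte DIF-style sector by appending
# 8 protection bytes (guard tag, application tag, reference tag).
import struct

def crc16_t10dif(data: bytes, poly: int = 0x8BB7) -> int:
    """Bitwise CRC-16 over the sector payload (polynomial is an assumption)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def protect_sector(data: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append the 8-byte protection information to a 512-byte sector."""
    assert len(data) == 512
    guard = crc16_t10dif(data)          # guard tag: CRC over the payload
    ref = lba & 0xFFFFFFFF              # reference tag: low 32 bits of the LBA
    return data + struct.pack(">HHI", guard, app_tag, ref)   # 512 + 8 = 520 bytes

sector = protect_sector(b"\x00" * 512, lba=1234)
print(len(sector))  # 520
```

A receiving controller or host adapter could then recompute the guard tag and check the reference tag against the expected logical block address to catch corruption or misdirected writes; this describes the general idea rather than any particular implementation.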
12207850
https://en.wikipedia.org/wiki/Comparison%20of%20software%20for%20molecular%20mechanics%20modeling
Comparison of software for molecular mechanics modeling
This is a list of computer programs that are predominantly used for molecular mechanics calculations. See also Car–Parrinello molecular dynamics Comparison of force field implementations Comparison of nucleic acid simulation software List of molecular graphics systems List of protein structure prediction software List of quantum chemistry and solid state physics software List of software for Monte Carlo molecular modeling List of software for nanostructures modeling Molecular design software Molecular dynamics Molecular modeling on GPUs Molecule editor Notes and references External links SINCRIS Linux4Chemistry Collaborative Computational Project World Index of Molecular Visualization Resources Short list of Molecular Modeling resources OpenScience Biological Magnetic Resonance Data Bank Materials modelling and computer simulation codes A few tips on molecular dynamics atomistic.software - atomistic simulation engines and their citation trends Computational chemistry software Computational chemistry Molecular mechanics modeling Molecular dynamics software Molecular modelling software Science software
181904
https://en.wikipedia.org/wiki/Bluefish%20%28software%29
Bluefish (software)
Bluefish is a free software advanced text editor with a variety of tools for programming and website development. It supports coding languages including HTML, XHTML, CSS, XML, PHP, C, C++, JavaScript, Java, Go, Vala, Ada, D, SQL, Perl, ColdFusion, JSP, Python, Ruby, and shell. It is available for many platforms, including Linux, macOS and Windows, and can be used via integration with GNOME or run as a stand-alone application. Designed as a compromise between plain text editors and full programming IDEs, Bluefish is lightweight, fast and easy to learn, while providing many IDE features. It has been translated into 17 languages. Features Bluefish's wizards can be used to assist in task completion. Its other features include syntax highlighting, auto-completion, code folding, auto-recovery, upload/download functionality, a code-aware spell-checker, a Unicode character browser, code navigation, and bookmarks. It has a multiple document interface that can quickly load codebases or websites, and it has many search-and-replace tools that can be used with scripts and regular expressions. It can store the current state of projects so they can be reopened in that state. Zen Coding/Emmet is supported for web development. Bluefish is extensible via plugins and scripts. Many scripts come preconfigured, including static code analysis, and syntax and markup checks for many different markup and programming languages. History Bluefish was started by Chris Mazuc and Olivier Sessink in 1997 to serve professional web developers on Linux desktop platforms. Its development has been continued by a changing group of professional web developers under project organizer Olivier Sessink. It was originally called Thtml editor, which was considered too cryptic; then Prosite, which was abandoned to avoid clashes with web-development companies already using that name. The name Bluefish was chosen after a logo (a child's drawing of a blue fish) was proposed on its mailing list. With version 1.0, the original logo was replaced with a new, more polished one. Source code and development Bluefish is written in C and uses the cross-platform GTK library for its GUI widgets. Markup and programming language support is defined in XML files. Bluefish has a plugin API in C, but it has been used mainly to separate non-maintained parts (such as the infobrowser plugin) from maintained parts. A few Python plugins exist as well, but they need a C plugin to interact with the main program. Bluefish also supports very loosely coupled plugins: external scripts that read standard input and return their results via standard output can be configured by the user in the preferences panel. It uses autoconf/automake to configure and set up its build environment. Both LLVM and GCC can be used to compile Bluefish. On Windows, MinGW is used to build the binaries. Reception A Softpedia review found the software powerful, feature-rich and easy to use. See also Comparison of HTML editors List of HTML editors List of PHP editors List of text editors References External links Bluefish: The Definitive Guide Interview with main developer Olivier Sessink Free integrated development environments HTML editors Free HTML editors Web development software Linux integrated development environments Linux text editors Software using the GPL license Text editors that use GTK MacOS text editors
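The loosely coupled plugin mechanism described above (external scripts wired up through the preferences panel) can be illustrated with a small stdin-to-stdout filter. The whitespace cleanup performed here is purely a hypothetical example, not a script shipped with Bluefish.

```python
#!/usr/bin/env python3
# Sketch of a "loosely coupled" external filter: the editor pipes the current
# document (or selection) to the script's standard input and replaces it with
# whatever the script writes to standard output.
import sys

text = sys.stdin.read()
lines = [line.rstrip() for line in text.splitlines()]   # strip trailing whitespace
sys.stdout.write("\n".join(lines) + "\n")
```

Because the contract is just "read stdin, write stdout", the same kind of filter could be written in any language the user prefers.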
455719
https://en.wikipedia.org/wiki/Mass%20storage
Mass storage
In computing, mass storage refers to the storage of large amounts of data in a persisting and machine-readable fashion. In general, the term is used to mean storage that is large in relation to contemporaneous hard disk drives, but it has also been used to mean storage that is large in relation to primary memory, as for example with floppy disks on personal computers. Devices and/or systems that have been described as mass storage include tape libraries, RAID systems, and a variety of computer drives such as hard disk drives, magnetic tape drives, magneto-optical disc drives, optical disc drives, memory cards, and solid-state drives. It also includes experimental forms like holographic memory. Mass storage includes devices with removable and non-removable media. It does not include random access memory (RAM). There are two broad classes of mass storage: local data in devices such as smartphones or computers, and enterprise servers and data centers for the cloud. For local storage, SSDs are on the way to replacing HDDs. In the mobile segment, from phones to notebooks, the majority of systems today are based on NAND flash. In enterprise and data-center storage, tiers have been established using a mix of SSDs and HDDs. Definition The notion of "large" amounts of data is of course highly dependent on the time frame and the market segment, as storage device capacity has increased by many orders of magnitude since the beginnings of computer technology in the late 1940s and continues to grow; however, in any time frame, common mass storage devices have tended to be much larger and at the same time much slower than common realizations of contemporaneous primary storage technology. Papers at the 1966 Fall Joint Computer Conference (FJCC) used the term mass storage for devices substantially larger than contemporaneous hard disk drives. Similarly, a 1972 analysis identified mass storage systems from Ampex (Terabit Memory) using video tape, Precision Industries (Unicon 690-212) using lasers and International Video (IVC-1000) using video tape, and stated that "In the literature, the most common definition of mass storage capacity is a trillion bits." The first IEEE conference on mass storage was held in 1974 and at that time identified mass storage as "capacity on the order of 10^12 bits" (roughly 125 gigabytes). In the mid-1970s IBM used the term in the name of the IBM 3850 Mass Storage System, which provided virtual disks backed by helical-scan magnetic tape cartridges, slower than disk drives but with a capacity larger than was affordable with disks. The term mass storage was used in the PC marketplace for devices, such as floppy disk drives, far smaller than devices that were not considered mass storage in the mainframe marketplace. Mass storage devices are characterized by: Sustainable transfer speed Seek time Cost Capacity Storage media Magnetic disks are the predominant storage media in personal computers. Optical discs, however, are almost exclusively used in the large-scale distribution of retail software, music and movies because of the cost and manufacturing efficiency of the molding process used to produce DVD and compact discs and the nearly universal presence of reader drives in personal computers and consumer appliances.
Flash memory (in particular, NAND flash) has an established and growing niche as a replacement for magnetic hard disks in high performance enterprise computing installations due to its robustness stemming from its lack of moving parts, and its inherently much lower latency when compared to conventional magnetic hard drive solutions. Flash memory has also long been popular as removable storage such as USB sticks, where it has become the de facto standard, because it scales better cost-wise at lower capacities and because of its durability. It has also made its way onto laptops in the form of SSDs, for reasons similar to those in enterprise computing: namely, markedly higher resistance to physical impact (again due to the lack of moving parts), a performance increase over conventional magnetic hard disks, and markedly reduced weight and power consumption. Flash has also made its way onto cell phones. The design of computer architectures and operating systems is often dictated by the mass storage and bus technology of their time. Usage Mass storage devices used in desktop and most server computers typically have their data organized in a file system. The choice of file system is often important in maximizing the performance of the device: general purpose file systems (such as NTFS and HFS) tend to do poorly on slow-seeking optical storage such as compact discs. Some relational databases can also be deployed on mass storage devices without an intermediate file system or storage manager. Oracle and MySQL, for example, can store table data directly on raw block devices. On removable media, archive formats (such as tar archives on magnetic tape, which pack file data end-to-end) are sometimes used instead of file systems because they are more portable and simpler to stream. On embedded computers, it is common to memory map the contents of a mass storage device (usually ROM or flash memory) so that its contents can be traversed as in-memory data structures or executed directly by programs. See also Data storage for a general overview of storage methods Computer data storage for storage methods specific to the computing field Disk storage for both magnetic and optical recording of disks Magnetic tape data storage Computer storage density List of device bandwidths Solid-state drive RAM disk RAID Notes References Computer storage devices
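The memory-mapping technique mentioned for embedded systems can be sketched in a few lines. The image file below is a placeholder created on the spot so the example is self-contained; on a real system one would open an existing device or firmware image (often requiring elevated privileges for raw block devices).

```python
# Sketch: memory-map a storage image so its contents can be traversed like an
# in-memory byte array, as described above.
import mmap

# Create a small placeholder image; a real system would map an existing device
# or ROM image instead of generating one.
with open("storage.img", "wb") as f:
    f.write(bytes(range(256)))

with open("storage.img", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        print(m[:16].hex())   # inspect the first 16 bytes without copying the file
```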
7235078
https://en.wikipedia.org/wiki/Heuristic%20analysis
Heuristic analysis
Heuristic analysis is a method employed by many computer antivirus programs designed to detect previously unknown computer viruses, as well as new variants of viruses already in the "wild". Heuristic analysis is an expert-based analysis that determines the susceptibility of a system to a particular threat or risk using various decision rules or weighting methods. Multi-criteria analysis (MCA) is one such weighting method. This approach differs from statistical analysis, which bases itself on the available data and statistics. Operation Most antivirus programs that utilize heuristic analysis perform this function by executing the programming commands of a questionable program or script within a specialized virtual machine, thereby allowing the anti-virus program to internally simulate what would happen if the suspicious file were to be executed while keeping the suspicious code isolated from the real-world machine. It then analyzes the commands as they are performed, monitoring for common viral activities such as replication, file overwrites, and attempts to hide the existence of the suspicious file. If one or more virus-like actions are detected, the suspicious file is flagged as a potential virus, and the user alerted. Another common method of heuristic analysis is for the anti-virus program to decompile the suspicious program, then analyze the machine code contained within. The decompiled code of the suspicious file is compared to the code of known viruses and virus-like routines. If a certain percentage of the code matches the code of known viruses or virus-like routines, the file is flagged, and the user alerted. Effectiveness Heuristic analysis is capable of detecting many previously unknown viruses and new variants of current viruses. However, heuristic analysis operates on the basis of experience (by comparing the suspicious file to the code and functions of known viruses). This means it is likely to miss new viruses that contain previously unknown methods of operation not found in any known viruses. Hence, its accuracy is limited and it can produce a significant number of false positives. As new viruses are discovered by human researchers, information about them is added to the heuristic analysis engine, thereby providing the engine the means to detect new viruses. References External links Retrospective/ProActive antivirus test from AV-Comparatives.org Antivirus software
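The weighted, multi-criteria scoring idea described above can be sketched as follows. The behaviours, weights, and threshold are illustrative assumptions, not values from any real antivirus engine.

```python
# Sketch of weighted heuristic scoring: each observed behaviour contributes a
# weight, and the file is flagged once the accumulated score crosses a threshold.
SUSPICIOUS_WEIGHTS = {
    "self_replication": 5,
    "overwrites_files": 4,
    "hides_own_file": 3,
    "hooks_keyboard": 2,
}
FLAG_THRESHOLD = 6   # illustrative cut-off

def heuristic_score(observed_behaviours):
    """Sum the weights of the behaviours seen during emulation."""
    return sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in observed_behaviours)

def is_suspicious(observed_behaviours):
    return heuristic_score(observed_behaviours) >= FLAG_THRESHOLD

print(is_suspicious({"self_replication", "hides_own_file"}))  # True (score 8)
print(is_suspicious({"hooks_keyboard"}))                      # False (score 2)
```

Tuning the weights and threshold is what trades detection rate against false positives, which is the limitation discussed in the Effectiveness section above.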
12155645
https://en.wikipedia.org/wiki/Separation%20of%20mechanism%20and%20policy
Separation of mechanism and policy
The separation of mechanism and policy is a design principle in computer science. It states that mechanisms (those parts of a system implementation that control the authorization of operations and the allocation of resources) should not dictate (or overly restrict) the policies according to which decisions are made about which operations to authorize, and which resources to allocate. While most commonly discussed in the context of security mechanisms (authentication and authorization), separation of mechanism and policy is applicable to a range of resource allocation problems (e.g. CPU scheduling, memory allocation, quality of service) as well as the design of software abstractions. Per Brinch Hansen introduced the concept of separation of policy and mechanism in operating systems in the RC 4000 multiprogramming system. Artsy and Livny, in a 1987 paper, discussed an approach for an operating system design having an "extreme separation of mechanism and policy". In a 2000 article, Chervenak et al. described the principles of mechanism neutrality and policy neutrality. Rationale and implications The separation of mechanism and policy is the fundamental approach of a microkernel that distinguishes it from a monolithic one. In a microkernel, the majority of operating system services are provided by user-level server processes (Raphael Finkel, Michael L. Scott, Y. Artsy and H. Chang, "Experience with Charlotte: simplicity and function in a distributed operating system", IEEE Trans. Software Engng 15:676-685, 1989; extended abstract presented at the IEEE Workshop on Design Principles for Experimental Distributed Systems, Purdue University, 1986; www.cs.rochester.edu/u/scott/papers/1989_IEEETSE_Charlotte.pdf). It is important for an operating system to have the flexibility of providing adequate mechanisms to support the broadest possible spectrum of real-world security policies. It is almost impossible to envision all of the different ways in which a system might be used by different types of users over the life of the product. This means that any hard-coded policies are likely to be inadequate or inappropriate for some (or perhaps even most) potential users. Decoupling the mechanism implementations from the policy specifications makes it possible for different applications to use the same mechanism implementations with different policies. This means that those mechanisms are likely to better meet the needs of a wider range of users, for a longer period of time. If it is possible to enable new policies without changing the implementing mechanisms, the costs and risks of such policy changes can be greatly reduced. In the first instance, this could be accomplished merely by segregating mechanisms and their policies into distinct modules: by replacing the module which dictates a policy (e.g. CPU scheduling policy) without changing the module which executes this policy (e.g. the scheduling mechanism), we can change the behaviour of the system. Further, in cases where a wide or variable range of policies are anticipated depending on applications' needs, it makes sense to create some non-code means for specifying policies, i.e. policies are not hardcoded into executable code but can be specified as an independent description. For instance, file protection policies (e.g. Unix's user/group/other read/write/execute) might be parametrized. Alternatively, an implementing mechanism could be designed to include an interpreter for a new policy specification language.
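A minimal sketch of this decoupling, using a scheduling example: the dispatch mechanism below is fixed, while the policy is a separate, replaceable function. Both policies are illustrative, not drawn from any particular operating system.

```python
# Mechanism: dispatch whichever task the supplied policy selects.
def run_next(ready_tasks, policy):
    task = policy(ready_tasks)
    ready_tasks.remove(task)
    return task["name"]

# Two interchangeable policies for the same mechanism.
fifo_policy     = lambda tasks: tasks[0]
priority_policy = lambda tasks: max(tasks, key=lambda t: t["priority"])

tasks = [{"name": "backup", "priority": 1}, {"name": "editor", "priority": 5}]
print(run_next(list(tasks), fifo_policy))      # backup
print(run_next(list(tasks), priority_policy))  # editor
```

Swapping the policy changes the system's behaviour without touching the mechanism, which is the property that the deferred-binding techniques discussed next aim to preserve even after the system has shipped.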
In both cases, the systems are usually accompanied by a deferred binding mechanism (e.g. late binding of configuration options via configuration files, or runtime programmability via APIs) that permits policy specifications to be incorporated into the system or replaced by another after it has been delivered to the customer. An everyday example of mechanism/policy separation is the use of card keys to gain access to locked doors. The mechanisms (magnetic card readers, remote controlled locks, connections to a security server) do not impose any limitations on entrance policy (which people should be allowed to enter which doors, at which times). These decisions are made by a centralized security server, which (in turn) probably makes its decisions by consulting a database of room access rules. Specific authorization decisions can be changed by updating a room access database. If the rule schema of that database proved too limiting, the entire security server could be replaced while leaving the fundamental mechanisms (readers, locks, and connections) unchanged. Contrast this with issuing physical keys: if you want to change who can open a door, you have to issue new keys and change the lock. This intertwines the unlocking mechanisms with the access policies. For a hotel, this is significantly less effective than using key cards. See also Separation of protection and security Separation of concerns Unix philosophy X Window System Notes References Chervenak et al., "The Data Grid", Journal of Network and Computer Applications, Volume 23, Issue 3, July 2000, Pages 187-200 Artsy, Yeshayahu, and Livny, Miron, An Approach to the Design of Fully Open Computing Systems (University of Wisconsin / Madison, March 1987), Computer Sciences Technical Report #689. External links Raphael Finkel's "An Operating System Vade Mecum" Mechanism and policy for HTC Dichotomies Operating system technology Programming principles
3757156
https://en.wikipedia.org/wiki/PowerPlant
PowerPlant
PowerPlant is an object-oriented GUI toolkit, application framework and set of class libraries for the Classic Mac OS, created by Metrowerks. The framework was fairly popular during the late (OS versions 8 and 9) Classic Mac OS era, and was primarily used with CodeWarrior. It was designed to work with a GUI editor called Constructor, which was primarily a resource editor specializing in UI elements. Constructor used several custom resource types: 'PPob' ("PowerPlant object", a general view description), 'CTYP' (custom widgets), and 'Mcmd' (used for dispatching menu-related events). PowerPlant was later ported to also support Mac OS X development with a single code base. After Metrowerks was acquired by Motorola, then spun out as part of Freescale Semiconductor, PowerPlant and the rest of the CodeWarrior desktop development tools were discontinued. During its heyday from the mid-1990s until the early 2000s, PowerPlant was the most popular framework available for Mac programmers, replacing both the THINK Class Library and MacApp as the premier object-oriented toolkit for the Mac OS; however, the transition to Mac OS X was rather difficult for many PowerPlant programmers. In 1997, there was no plan to port PowerPlant to the Yellow Box API found on Rhapsody, a radically different API that would become Cocoa, the official Mac OS X API. Instead, Metrowerks' plan was to port PowerPlant using CodeWarrior Latitude, a Mac-to-UNIX porting library it had recently acquired. In 2000, as Apple revised its transition plans, PowerPlant was ported to Carbon, with the Aqua user interface on Mac OS X, offering a solution for developers wanting to support the new operating system. A new version, PowerPlant X, was introduced in 2004 as a native Carbon framework using Carbon Events, but it never became as popular on Mac OS X as PowerPlant had been on Classic Mac OS. In February 2006, the PowerPlant class libraries were released as open source under the BSD license, hosted on SourceForge. Although the code could theoretically be recompiled for x86-64 Macs, it is Carbon-dependent and therefore can only be used in 32-bit mode, which precludes its use for software running on macOS Catalina or later, as 32-bit application support was dropped by the system. References External links Widget toolkits
1527520
https://en.wikipedia.org/wiki/Surreal%20Software
Surreal Software
Surreal Software was a video game developer based in Kirkland, Washington, United States, and a subsidiary of Warner Bros. Interactive Entertainment, known for The Lord of the Rings: The Fellowship of the Ring and for the The Suffering and Drakan series. Surreal Software employed over 130 designers, artists, and programmers. Surreal was acquired by Warner Bros. Games during the bankruptcy of Midway Games in July 2009. After a significant layoff in January 2011, the remaining employees were integrated into WBG's Kirkland offices, along with developers Monolith and Snowblind. The studio last worked on This Is Vegas, a title which was scheduled to be released on Xbox 360, PlayStation 3 and PC. The first screenshots, video and game information for This Is Vegas were unveiled the week of February 4, 2008, at IGN. History Surreal Software was founded in 1995 as an independent video game development studio by Alan Patmore, Stuart Denman, Nick Radovich and Mike Nichols. Patmore, Nichols and Radovich attended Eastside Catholic High School in Bellevue, Washington, together. They found Stuart Denman, a University of Washington graduate, through an online message board. The group began operating in 1995 in an office in Seattle's Queen Anne neighborhood. Previously, Radovich sold real estate, Patmore worked at a wireless company, Nichols was working at local game company Boss Studios, and Denman had just interned at Microsoft on the Excel team. Their first contract was with Bothell-based children's game developer Humongous, which found Denman's website and called to recruit programmers; Surreal instead offered to do contract work. Surreal developed the Riot Engine for its games in 1996. First receiving critical acclaim with the 1999 release of Drakan: Order of the Flame, Surreal Software continued its success with Drakan: The Ancients' Gates in early 2002, both games selling in excess of 250,000 units. Having grown to two development teams, Surreal released The Lord of the Rings: The Fellowship of the Ring later that same year, selling over 1.8 million units. In March 2004, Surreal Software released The Suffering, an original action-horror game set in a secluded island prison, with monster designs by Stan Winston. Gamers and critics alike enjoyed this bold new contribution to the horror genre, and in 2005 The Suffering: Ties That Bind followed. In April 2004, Midway Games acquired Surreal Software as an in-house game studio; it was the only studio that kept its original name following its acquisition by Midway. In 2006, the Surreal Software staff moved from Fremont to their new waterfront studio on Elliott Avenue next to the Olympic Sculpture Park. In 2009, Surreal Software was among the Midway Games assets purchased by Warner Bros. Interactive Entertainment. In 2010, the company was merged into the nearby studio Monolith Productions. Founders All of the founders had left the company prior to its merging with Monolith.
Stuart Denman – CTO Alan Patmore – CEO and Creative Director Nick Radovich – CFO Mike Nichols – Art Director List of games Canceled Gunslinger The Lord of the Rings: The Treason of Isengard This Is Vegas References External links Official Surreal website Official WB Games website Career page GDC 08 Interview - This Is Vegas Drakan interview Screenshots and video clips from The Suffering: Ties That Bind GameSpot interview Stuart Denman's Game Development Blog Defunct video game companies of the United States Video game development companies Software companies based in Washington (state) Defunct companies based in Washington (state) Companies based in Kirkland, Washington Video game companies established in 1995 Video game companies disestablished in 2010 Video game companies of the United States Midway Games Warner Bros. Interactive Entertainment
23130866
https://en.wikipedia.org/wiki/Google%20Quick%20Search%20Box
Google Quick Search Box
Google Quick Search Box (GQSB) is an application launcher and desktop search tool developed by Google for Mac OS X computers. It allows users to search files, URLs, and contacts on their computer, as well as perform actions on the results. History and status GQSB was first released as a developer preview on January 12, 2009. It is still in beta, and a new version is released approximately monthly. The releases follow the sequence of chemical elements from the periodic table. The first public release was named Scandium and the current release is Cobalt. Like other Google products such as the Chrome browser, QSB is open-source software. However, just as with Chrome, Google distributes official builds with extra functionality. In the case of QSB, this includes plugin validation, auto-update, and Google-branded icons. It later became a fully open-source product, called simply Quick Search Box. In Mac OS X Snow Leopard, QSB has replaced Google Desktop. Comparisons to other products QSB is similar to another Google product, Google Desktop. However, there are several key differences between the two products: Operating system compatibility: While Google Desktop is cross-platform, QSB is at present Mac-only software. Google currently has an app that allows users to search the web using the iPhone. Search methodology: Google Desktop maintains its own index of files for searching. It also indexes Gmail messages. QSB uses macOS's built-in indexing technology, Spotlight. Because of this, QSB is less resource-intensive than Google Desktop. However, there are drawbacks. QSB does not support indexing of Gmail messages (because Spotlight doesn't), and some aspects do not function if Spotlight is disabled. Search philosophy: Google Desktop offers a search-only paradigm. On the other hand, QSB allows actions to be defined, which can be applied to search results. For example, after locating a file in QSB, it is possible to select among "open," "get info," "move to trash" and other actions. In this respect, it is similar to another macOS software tool, Quicksilver. The developer of Quicksilver, Nicholas Jitkoff, is employed by Google and is one of the lead developers of QSB. Extensibility: Both QSB and Google Desktop offer plugin APIs. However, in QSB it is possible to add both search result plugins and action plugins (integrating with the actions described immediately above). Google indicates that there is more leeway to expand QSB. Features In addition to file search, QSB is distributed with a suite of plugins that allow additional functionality. These include: Bookmarks from common browsers (Firefox, Camino, Safari) Definitions from the operating system dictionary Results of simple calculations Integration with Google Documents and Picasa Criticisms Users have noted that its functionality is reduced compared to Google Desktop, especially in the areas of in-document text searching, Gmail message searching and web history searching. References External links https://code.google.com/p/qsb-mac/ Desktop search engines Quick Search Box Utilities for macOS Free software programmed in Objective-C MacOS-only free software
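Since QSB delegates indexing to Spotlight, the same index can be queried directly from the command line with macOS's mdfind tool. The sketch below only illustrates that underlying search path; it is not part of QSB's own plugin API, and the query string is arbitrary.

```python
# Sketch: query the Spotlight index that QSB relies on, via the macOS
# `mdfind` command-line tool (macOS only).
import subprocess

def spotlight_search(query: str, limit: int = 5):
    result = subprocess.run(["mdfind", query], capture_output=True, text=True, check=True)
    return result.stdout.splitlines()[:limit]

print(spotlight_search("quarterly report"))   # first few matching file paths
```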
37153
https://en.wikipedia.org/wiki/Supercomputer
Supercomputer
A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there are supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS to tens of teraFLOPS. Since November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, India, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis. Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were added, with one to four processors being typical. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors have been the norm. The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, with China becoming increasingly active in the field. As of June 2020, the fastest supercomputer on the TOP500 supercomputer list is Fugaku, in Japan, with a LINPACK benchmark score of 415.5 PFLOPS, followed by Summit, at 148.8 PFLOPS, about 2.8 times slower than Fugaku. The US has four of the top 10; China and Italy have two each, and Switzerland has one. In June 2018, the combined performance of all supercomputers on the TOP500 list broke the 1 exaFLOPS mark. History In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology. Also among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives.
The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis. The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time. Atlas was a joint venture between Ferranti and the University of Manchester and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second. The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run more quickly, and the overheating problem was solved by introducing refrigeration to the supercomputer design. Thus, the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market; one hundred computers were sold at $8 million each. Cray left CDC in 1972 to form his own company, Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history. The Cray-2 was released in 1985. It had eight central processing units (CPUs) and liquid cooling, with the electronics coolant Fluorinert pumped through the supercomputer architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the gigaflop barrier. Massively parallel designs The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept, the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered a speed of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate more quickly than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort. But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"
But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second. In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics. Fujitsu's VPP500 from 1992 is unusual since, to achieve higher speeds, its processors used GaAs, a material normally reserved for microwave applications due to its toxicity. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor. The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface. Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using commodity CPUs as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics processing units to the mix. Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically used whenever a computer is available. In another approach, many processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system. As the price, performance and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them.
However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application to it. However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs. High-performance computers have an expected life cycle of about three years before requiring an upgrade. The Gyoukou supercomputer is unique in that it uses both a massively parallel design and liquid immersion cooling. Special purpose supercomputers A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess, Gravity Pipe for astrophysics, MDGRAPE-3 for protein structure prediction and molecular dynamics, and Deep Crack for breaking the DES cipher. Energy usage and heat management Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers. The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures. A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity. The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year. Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue. The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure. However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company. In the Blue Gene system, IBM deliberately used low power processors to deal with heat density. The IBM Power 775, released in 2011, has closely packed elements that require water cooling. The IBM Aquasar system uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well. The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, Roadrunner by IBM operated at 3.76 MFLOPS/W. 
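The power-cost figure quoted above ($400 an hour, or about $3.5 million a year, for a 4 MW machine at $0.10/kWh) follows from simple arithmetic; a short worked example is below, with the electricity price taken as the assumption stated in the text.

```python
# Worked example of the power-cost arithmetic cited above.
power_kw = 4_000            # 4 MW expressed in kilowatts
price_per_kwh = 0.10        # assumed electricity price, $/kWh (as in the text)

cost_per_hour = power_kw * price_per_kwh     # 4000 kWh each hour * $0.10 = $400
cost_per_year = cost_per_hour * 24 * 365     # about $3.5 million

print(f"${cost_per_hour:,.0f}/hour, ${cost_per_year:,.0f}/year")
# -> $400/hour, $3,504,000/year
```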
In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W, and in June 2011 the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W. Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat, the ability of the cooling systems to remove waste heat is a limiting factor. Many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine; designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited: the thermal design power of the supercomputer as a whole (the amount that the power and cooling infrastructure can handle) is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware. Software and system management Operating systems Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux. Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux derivative on server and I/O nodes. While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present. Although most modern supercomputers use Linux-based operating systems, each manufacturer has its own specific Linux derivative, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design. Software tools and message passing The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source software such as Beowulf. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL. Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.
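As a concrete illustration of the message-passing style described above, here is a minimal MPI sketch using the mpi4py Python bindings (assumed to be installed alongside an MPI implementation). It is only a toy point-to-point exchange, not a tuned supercomputer code.

```python
# Minimal point-to-point message passing with MPI via mpi4py.
# Run with, e.g.:  mpiexec -n 2 python mpi_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's ID within the communicator
size = comm.Get_size()          # total number of processes

if rank == 0:
    # Rank 0 sends a small piece of work to rank 1 and waits for the result.
    comm.send({"values": [1, 2, 3]}, dest=1, tag=11)
    result = comm.recv(source=1, tag=22)
    print(f"rank 0 received partial sum {result} from rank 1 (world size {size})")
elif rank == 1:
    work = comm.recv(source=0, tag=11)
    comm.send(sum(work["values"]), dest=0, tag=22)
```

Real codes apply the same send/receive (or collective) primitives across thousands of ranks, which is why minimizing time spent waiting on data from other nodes dominates optimization effort.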
Distributed supercomputing Opportunistic approaches Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing-scale performance. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations. The fastest grid computing system is the distributed computing project Folding@home (F@h). F@h has reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems. The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of distributed computing projects. BOINC has recorded a processing power of over 166 petaFLOPS through over 762 thousand active computers (hosts) on the network. The Great Internet Mersenne Prime Search's (GIMPS) distributed Mersenne prime search has achieved about 0.313 PFLOPS through over 1.3 million computers. The Internet PrimeNet Server has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997. Quasi-opportunistic approaches Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked, geographically dispersed computers performs computing tasks that demand huge processing power. Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning. High-performance computing clouds Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud, such as software as a service, platform as a service, and infrastructure as a service. HPC users may benefit from the cloud in different ways, such as scalability and resources being on-demand, fast, and inexpensive. On the other hand, moving HPC applications to the cloud has a set of challenges too. Good examples of such challenges are virtualization overhead in the cloud, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility. In 2016, Penguin Computing, Parallel Works, R-HPC, Amazon Web Services, Univa, Silicon Graphics International, Rescale, Sabalcore, and Gomput started to offer HPC cloud computing. The Penguin On Demand (POD) cloud is a bare-metal compute model to execute code, but each user is given a virtualized login node. POD computing nodes are connected via non-virtualized 10 Gbit/s Ethernet or QDR InfiniBand networks.
User connectivity to the POD data center ranges from 50 Mbit/s to 1 Gbit/s. Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues that virtualization of compute nodes is not suitable for HPC. Penguin Computing has also criticized HPC clouds for allocating computing nodes to customers that are far apart, causing latency that impairs performance for some HPC applications. Performance measurement Capability versus capacity Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve a few somewhat large problems or many small problems. Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem. Performance metrics In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), and not in terms of MIPS (million instructions per second), as is the case with general-purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). Petascale supercomputers can process one quadrillion (10^15, or 1,000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS). No single number can reflect the overall performance of a computer system, yet the goal of the LINPACK benchmark is to approximate how fast the computer solves numerical problems, and it is widely used in the industry. The FLOPS measurement is either quoted based on the theoretical floating-point performance of a processor (derived from the manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list. The LINPACK benchmark typically performs LU decomposition of a large matrix. The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which may, for example, require more memory bandwidth, better integer computing performance, or a high-performance I/O system to achieve high levels of performance. The TOP500 list Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time. For the computers which have appeared at the top of the TOP500 list, the "Peak speed" is given as the "Rmax" rating. In 2018, Lenovo became the world's largest provider of TOP500 supercomputers, with 117 units produced.
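As a worked illustration of the Rpeak and Rmax figures described above, the short C sketch below derives a theoretical peak from processor specifications and compares it with a measured LINPACK result; every number in it (node count, sockets, cores, clock rate, floating-point operations per cycle, and the assumed 70% LINPACK efficiency) is a made-up example rather than the specification of any real TOP500 machine.

/* Rpeak versus Rmax, with invented figures for illustration only. */
#include <stdio.h>

int main(void)
{
    double nodes            = 4000.0;   /* hypothetical node count       */
    double sockets_per_node = 2.0;
    double cores_per_socket = 32.0;
    double clock_hz         = 2.5e9;    /* 2.5 GHz, assumed              */
    double flops_per_cycle  = 16.0;     /* e.g. wide SIMD units, assumed */

    /* Theoretical peak: the product of the hardware parameters above. */
    double rpeak = nodes * sockets_per_node * cores_per_socket
                 * clock_hz * flops_per_cycle;

    /* Assume the LINPACK run reaches 70% of peak, a plausible but invented value. */
    double rmax = 0.70 * rpeak;

    printf("Rpeak: %6.2f PFLOPS\n", rpeak / 1e15);
    printf("Rmax : %6.2f PFLOPS (%.0f%% of peak)\n",
           rmax / 1e15, 100.0 * rmax / rpeak);
    return 0;
}

Printing roughly 10.24 PFLOPS for Rpeak and 7.17 PFLOPS for Rmax, the sketch only restates the definitions: Rpeak follows from the manufacturer's specifications, whereas Rmax has to be measured by actually running the LINPACK benchmark on the installed machine.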
Applications The applications of supercomputers have shifted across successive decades. The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain. Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate. In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project. The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile. In early 2020, during the COVID-19 pandemic, supercomputers were used to run simulations searching for compounds that could potentially stop the spread of the virus. These computers ran for tens of hours using many CPUs working in parallel to model the different processes involved. Development and trends In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1 exaFLOP (10^18, or one quintillion, FLOPS) supercomputer. Erik P. DeBenedictis of Sandia National Laboratories has theorized that a zettaFLOPS (10^21, or one sextillion, FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately. Such systems might be built around 2030. Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; in particular, integro-differential equations describing physical transport processes: the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; specializing to Monte Carlo workloads, the many layers could be identical, simplifying the design and manufacturing process. The cost of operating high-performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top 10 supercomputer required on the order of 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts. A 2010 study commissioned by DARPA identified power consumption as the most pervasive challenge in achieving exascale computing. At the time, a year of energy consumption at one megawatt cost about 1 million dollars. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-core central processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible. CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications. The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure. National supercomputing centers first emerged in the US, followed by Germany and Japan.
The European Union launched the Partnership for Advanced Computing in Europe (PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across the European Union in porting, scaling and optimizing supercomputing applications. Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers. Funding supercomputer hardware also became increasingly difficult. In the mid 1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros. In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding. In fiction Many science fiction writers have depicted supercomputers in their works, both before and after the historical construction of such computers. Much of such fiction deals with the relations of humans with the computers they build and with the possibility of conflict eventually developing between them. Examples of supercomputers in fiction include HAL-9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict, Vulcan's Hammer, Colossus, WOPR, and Deep Thought. See also ACM/IEEE Supercomputing Conference ACM SIGHPC High-performance technical computing Jungle computing Nvidia Tesla Personal Supercomputer Parallel computing Supercomputing in China Supercomputing in Europe Supercomputing in India Supercomputing in Japan Testing high-performance computing applications Ultra Network Technologies Quantum computing Notes and references External links McDonnell, Marshall T. (2013) Supercomputer Design: An Initial Effort to Capture the Environmental, Economic, and Societal Impacts. Chemical and Biomolecular Engineering Publications and Other Works. American inventions Cluster computing Concurrent computing Distributed computing architecture Parallel computing
50691191
https://en.wikipedia.org/wiki/Enfish%2C%20LLC%20v.%20Microsoft%20Corp.
Enfish, LLC v. Microsoft Corp.
Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), is a 2016 decision of the United States Court of Appeals for the Federal Circuit in which the court, for the second time since the United States Supreme Court decision in Alice Corp. v. CLS Bank, upheld the patent-eligibility of software patent claims. The Federal Circuit reversed the district court's summary judgment ruling that all claims were patent-ineligible abstract ideas under Alice. Instead, the claims were directed to a specific improvement to the way computers operate, embodied in the claimed "self-referential table" for a database, which the relevant prior art did not contain. Background Enfish, LLC and Microsoft Corp. develop and sell software database products. Enfish received U.S. Patents 6,151,604 ('604 patent) and 6,163,775 ('775 patent) in late 2000, which both claim a logical model for a computer database. A logical model is a system for a computer database that explains how the various elements of information in the database are related to one another. Unlike conventional logical models, Enfish's logical model includes all data entities in a single table, with column definitions provided by rows in that same table. The patents describe this as the "self-referential" property of the database. In a standard, conventional relational database, each entity (i.e., each type of thing) that is modeled is provided in a separate table. For instance, a relational model for a corporate file repository might include the following tables: document table, person table, and company table. The document table might contain information about stored documents, the person table might contain information about authors of the documents, and the company table might contain information about the companies that employ the persons. In contrast, Enfish's patents describe a table structure that allows the information that would normally appear in several different tables to be stored in a single table. The columns are defined by rows in the same table. Enfish's patents assert that the self-referential arrangement has several advantages: faster look-ups, more efficient storage of data other than structured text, no requirement to model each thing in the database as a separate table, and thus the ability to be "configured on-the-fly." Representative claim 17 of the '604 patent recites: A data storage and retrieval system for a computer memory, comprising: means for configuring said memory according to a logical table, said logical table including: a plurality of logical rows, each said logical row including an object identification number (OID) to identify each said logical row, each said logical row corresponding to a record of information; a plurality of logical columns intersecting said plurality of logical rows to define a plurality of logical cells, each said logical column including an OID to identify each said logical column; and means for indexing data stored in said table. Ruling of district court The district court (Pfaelzer, J.) held that the fact that the patents claim a "logical table" demonstrated abstractness, since "[t]he term 'logical table' refers to a logical data structure, as opposed to a physical data structure." Thus the court's claim construction order had stated that a logical table has "a data structure that is logical as opposed to physical, and therefore does not need to be stored contiguously in memory." Therefore: In essence, the claims capture the concept of organizing information using tabular formats.
As such, the claims preempt a basic way of organizing information, without regard to the physical data structure. There can be little argument that a patent on this concept, without more, would greatly impede progress. Given these observations, the Court determines that the claims are addressed to the abstract purpose of storing, organizing, and retrieving memory in a logical table. This abstract purpose does not become tangible because it is necessarily limited to the technological environment of computers. . . . When a claim recites a computer generically, the Court should ignore this element in defining the claim's purpose. The court then proceeded to the second step of the Alice analysis, which is to determine whether "the claims contain additional limitations that amount to an inventive concept." The court concluded: "The claims do not. Instead, the claims recite conventional elements. These elements, when viewed individually or in a combination, do not sufficiently cabin the claims' scope." Accordingly, the court granted summary judgment invalidating the patents. Ruling of Federal Circuit The Federal Circuit (Hughes, J.) interpreted the first step of the Alice analysis as asking "whether the focus of the claims is on the specific asserted improvement in computer capabilities (i.e., the self-referential table for a computer database) or, instead, on a process that qualifies as an 'abstract idea' for which computers are invoked merely as a tool." But claim 17, for example, is focused on "an improvement to computer functionality itself, not on economic or other tasks for which a computer is used in its ordinary capacity." Accordingly, "we find that the claims at issue in this appeal are not directed to an abstract idea within the meaning of Alice. Rather, they are directed to a specific improvement to the way computers operate, embodied in the self-referential table." Therefore, the court did not need to proceed to step two of the Alice analysis. The Federal Circuit rejected the conclusion of district court Judge Pfaelzer that the claims were abstract, and rejected the argument that the claims are directed to "the concepts of organizing data into a logical table with identified columns and rows where one or more rows are used to store an index or information defining columns." Instead, the court insisted, "describing the claims at such a high level of abstraction and untethered from the language of the claims all but ensures that the exceptions to § 101 swallow the rule." The Federal Circuit said that "the district court oversimplified the self-referential component of the claims and downplayed the invention's benefits." The court explained that its "conclusion that the claims are directed to an improvement of an existing technology is bolstered by the specification's teachings that the claimed invention achieves other benefits over conventional databases, such as increased flexibility, faster search times, and smaller memory requirements." While the claims at issue in other cases such as Alice merely added "conventional computer components to well-known business practices," Enfish's claims "are directed to a specific improvement to computer functionality." Thus: In sum, the self-referential table recited in the claims on appeal is a specific type of data structure designed to improve the way a computer stores and retrieves data in memory.
The specification's disparagement of conventional data structures, combined with language describing the "present invention" as including the features that make up a self-referential table, confirm that our characterization of the "invention" for purposes of the § 101 analysis has not been deceived by the "draftsman's art." . . . Rather, the claims are directed to a specific implementation of a solution to a problem in the software arts. Accordingly, we find the claims at issue are not directed to an abstract idea. That ended the § 101 analysis: Because the claims are not directed to an abstract idea under step one of the Alice analysis, we do not need to proceed to step two of that analysis. . . . [W]e think it is clear for the reasons stated that the claims are not directed to an abstract idea, and so we stop at step one. We conclude that the claims are patent-eligible. Subsequent developments In TLI Communications LLC v. AV Automotive, L.L.C., five days later, a different panel, including Judge Hughes who authored the Enfish decision and then TLI, invalidated software claims for failure to meet the Alice test. In TLI the court held that a patent on a method and system for taking, transmitting, and organizing digital images was patent–ineligible because it "claims no more than the abstract idea of classifying and storing digital images in an organized manner." Several district courts have reacted to Enfish already, in cases in which they had granted summary judgment motions on grounds of lack of patent eligibility under Alice. In Mobile Telecommunications Technologies v. Blackberry Corp. in the Northern District of Texas, the court requested supplemental briefs on the Enfish decision. In Activision Publishing Inc. v. xTV Networks, Ltd. in the Central District of California, the court requested technology tutorials on the effect of Enfish. Commentary ● In Patent Docs, blogger Michael Borella comments on the Enfish case. He emphasizes the Federal Circuit panel's statement that "describing the claims at such a high level of abstraction and untethered from the language of the claims all but ensures that the exceptions to § 101 swallow the rule." He also emphasizes the court's nod to "the importance of software and the potential for innovation therewith", pointing to the opinion's statement: Much of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes . . . [w]e do not see in Bilski or Alice, or our cases, an exclusion to patenting this large field of technological progress. Borella sees the case as telling drafters of patents (and perhaps patent claims) to describe 'how an invention improves over the prior art, especially if it improves the operation of a computer." He cautions, however, that "for purposes of avoiding estoppel, one should be careful when pointing out the deficiencies of what came before." Finally, he hails the opinion as downplaying the need for recitation of hardware in claims, in order to salvage patent eligibility: Additionally, the Court made it clear that whether such an improvement resides in hardware or software is not material. Since Bilski, there has been a knee-jerk reaction to add a hardware component to at least one element of each independent claim. Perhaps that is no longer necessary when the innovation is in software. ● In a second blog posting in Patent Docs, Borella compares Enfish with TLI. 
He disagrees with those who find the two opinions inconsistent: Already, some are contending that Enfish and TLI cannot be reconciled with one another. This is not true. Enfish stands for the notion that an improvement to computing technology, whether software or hardware, is not in and of itself abstract. TLI, on the other hand, reaffirms that merely using generic computer technology to carry out a procedure does not add patentable weight to a claimed invention. In one case, the claims recite the invention of new technology, and in the other case, the claims recite the use of old technology. A clear line has been drawn. Borella reluctantly concludes that many "new and useful . . . inventions are at risk in a post-Alice world," but advises: Nonetheless, if we accept that we have to live in the world of Alice at least for now, the distinction between Enfish and TLI is critical to understand, as it provides a roadmap to patent-eligibility for a great many software inventions. ● Michael Mireles, in The IP Kat, counts judicial noses on the patent eligibility of computer-implemented inventions. He tabulates them: Even though Judge Hughes wrote both Enfish and TLI, the composition of the panels is quite different. The Enfish panel included Judges Moore and Taranto. The TLI panel included Judges Dyk and Schall. The DDR Holdings v. Hotels.com decision finding a computer-implemented invention patent eligible was authored by Judge Chen and joined by Judge Wallach. Judge Mayer dissented. There are now five Federal Circuit judges who appear to lean toward favoring patentability of computer-implemented inventions: Hughes, Moore, Taranto, Chen and Wallach. If Enfish is heard en banc, it may be a close decision. Importantly, Enfish provides important guidance for step one analysis under Alice and a general attitude supporting patent eligibility for computer-implemented inventions. ● Steve Marshall found the Federal Circuit's efforts at harmonization a failure and saw the two opinions as addressing similar technologies but treating them disparately: Despite the attempted harmonizing discussion of Enfish in TLI, the latter exposes several inconsistencies between the opinions as well as potential flaws in the reasoning of Enfish. As an initial matter, the Federal Circuit's descriptions of the claimed technologies in each of the opinions share similarities in areas that impacted the legal analysis. Each involved a database implementation on a commodity computer. Also, the benefits of each purport to include increased search speed and dynamic configuration of data files. Additionally, the disclosure of each was largely functional with little to no emphasis on new physical components. Yet, he argues, "The court's treatment of these apparent factual similarities could not have been more different." In Enfish, "the performance benefit is attributable to the algorithm," and the court found that "the self-referential model indeed provided an 'improvement in computer capabilities' " that was patent eligible even though the underlying computer received no improvement in its physical operation. "In contrast, the TLI court lambasted the claimed image database using classification data for failing to improve the recited telephone unit or server." Furthermore, both patents claimed similar benefits such as faster search times, but the Federal Circuit gave Enfish's device credit for this and denied TLI's device such credit. 
Enfish's patent described an old computer but the TLI court criticized the TLI patent's disclosure for failing to provide technical details about physical features or describing "a new server," and instead focusing on "purely functional terms." Marshall asserts, "[H]ad the Federal Circuit applied the same analysis from TLI in the Enfish case, the Enfish patents should not have been found subject matter eligible." He concludes that these cases "fail the public" in performing the needed "notice function of software patent claims," for "[b]etween the endpoints of firmware that makes a machine functional and software that does little more than use a computer as a calculator lay applications that, based on Enfish and TLI, may or may not be patent eligible." ● Gene Quinn in the IPWatchdog applauds the Enfish decision as restoring the legitimacy of software patents: The Federal Circuit also explicitly put a nail in the coffin of the ridiculous argument that software shouldn't be patent eligible if it could run on a general-purpose computer. The Federal Circuit explained: "We are not persuaded that the invention's ability to run on a general-purpose computer dooms the claims." Some jurists have long claimed that if software can run on a general-purpose computer it cannot be patented, which is utterly asinine given that software is most useful when it can run regardless of the platform selected. This statement, as correct as it is profound, will no doubt lead those in the anti-patent community to fly into an apoplectic fit. He predicts, however, that this case is not over yet: If I had to guess I'd say I expect Microsoft with file a petition for en banc rehearing, and ultimately will probably file a petition for certiorari to the United States Supreme Court. In the meantime, however, this case will bring cheer to the heart of those who have been long frustrated by what had seemingly become a de facto rule that software was not patent eligible in the United States. ● Brian Mudge and Christopher Gresalfi examine the impact of Enfish on Covered Business Method proceedings within the PTO in the IPR Blog. They analyze the different results in two post-Enfish decisions of the PTO—Informatica Corp. v. Protegrity Corp. and Apple, Inc. v. Mirror World Techs., LLC. In the Informatica case, the PTO considered claims to a system and method for protecting data against unauthorized access. An O-DB data base stored data element values and an IAM-DB database contained a data protection catalogue that stored so-called protection attributes for data element types associated with data element values. The claimed method granted access to a requested data element value only if rules associated with a data element type associated with the relevant data element value were satisfied. The PTO found the claimed invention was directed to the abstract idea of "rule based data access." The patentee Protegrity argued that the DDR Holdings case supported patent eligibility because the invention provided a solution rooted in method technology because it protected data in a first database by rules stored in a second data base. The PTO said that Enfish did not help Protegrity because the patent claims were not directed to a specific improvement in the way computers operate; instead, the databases and access rules performed their normal functions and they achieved the usual, expected results—storing rules in a separate database merely changes the location of the rules, not the expected operation of the rules or the database. 
Under Alice step 2, there was no inventive concept because everything operated in a conventional manner. Therefore, the patent claimed a patent-ineligible abstract idea. In the Mirror World case, the patent claimed a method and apparatus for organizing "data units" (i.e., documents) into "streams" and "substreams." This was said to organize, locate, summarize, and monitor incoming data. Apple argued that this just meant that data was being organized, say, in chronological order, and that this was just an abstract idea. Apple said this was not a specifically computer-related problem, and that paper documents could be organized the same way; in any event, the operation could be performed in a conventional manner on a conventional computer. In response, Mirror World argued that streams and substreams were computer concepts not found in the pre-computer world of paper documents. This argument impressed the PTO. The PTO considered that the streams and substreams were manipulated by a computer electronically in a way that did not duplicate prior-art paper shuffling, and that this improved computer functionality as in Enfish. Therefore, the claims were patent eligible. The authors assert that both decisions turned on the PTO's determination "whether the patent claimed generic computer functionality carrying out conventional activity, or a specific technical solution to a technical problem occurring only in the realm of computer processes." In Informatica the PTO found that the patent just claimed well-known computer elements carrying out routine steps. In Mirror World the PTO concluded that the claimed invention was directed to solving problems specifically arising in computer technology, which the specification emphasized, apparently in a way that the specification in the Informatica case did not. References External links Software patent case law United States patent case law United States Court of Appeals for the Federal Circuit cases 2016 in United States case law Microsoft litigation
22531984
https://en.wikipedia.org/wiki/Michael%20R.%20Lyu
Michael R. Lyu
Michael R. Lyu, Ph.D., is a software engineer. He is a professor at the Chinese University of Hong Kong in Shatin, Hong Kong. Lyu is well known to the software engineering community as the editor of two classic book volumes in software reliability engineering: Software Fault Tolerance and the Handbook of Software Reliability Engineering. Both books have also been translated into Chinese and published in China. References Living people Chinese software engineers Chinese University of Hong Kong faculty Chinese computer scientists Year of birth missing (living people)
1973261
https://en.wikipedia.org/wiki/Cullinet
Cullinet
Cullinet was a software company whose products included the database management system IDMS and the integrated software package Goldengate. In 1989, the company was bought by Computer Associates. Cullinet was headquartered at 400 Blue Hill Drive in Westwood, Massachusetts. History Early years The company was started by John Cullinane and Larry English in 1968 as Cullinane Corporation. Their idea was to sell pre-packaged software to mainframe users, which was at that time a new concept in an era when enterprises only used internally developed applications or the software that came bundled with the hardware. Rather than write its own products, Cullinane approached the IT (information technology) departments (at that time called data processing departments) of major enterprises, particularly banks, to identify internally developed applications that he felt had potential to be productized and licensed to others. However, it proved difficult to sell these applications because most were not generalized, supportable systems. As a result, the company decided to develop a source code management system, called PLUS, that competed with Pansophic's (PanDA) and UCC's products (the UCC-1 tape management system, etc.). The first version of PLUS (which stood for Program Library Update System) required the use of magnetic tape devices, and was not competitive with the other, disk-based products. Although the company eventually responded with a disk-based version, called PLUS-DA (DA stood for Direct Access, a common name for disks at the time), it did not become successful in this market. The first breakthrough product was a report writer named Culprit, developed in-house by Gil Curtis and Anna Marie Thron, who had built the PHI payroll system. The product competed with Mark IV from Informatics, Inc., but was perceived as a late entry in the report writer category. The company struggled with financial stability until it branded a variation of Culprit as EDP Auditor, which was nothing more than a second name for the same product with a collection of predefined reports but, more importantly, special services aimed at the new discipline of EDP (electronic data processing) auditing, including the first user group for EDP auditors and special support to give auditors independence from data processing, which was very important to them. What was remarkable was that many corporations licensed essentially identical products. This led to serendipitous prosperity for Cullinane. As EDP auditors developed knowledge about business systems and computers, they could invariably produce reports faster than slower-moving internal IT departments. As a result, MIS (management information systems) departments would feel compelled to buy the Culprit version for their own use — to compete. 1970s As the company prospered in the early 1970s, it was approached by a consultant to BFGoodrich, Naomi O. Seligman, to consider taking over development of a Honeywell database management system called Integrated Data Store (IDS) that had been modified to operate on IBM and IBM-compatible (RCA) mainframes. IDS was originally developed by General Electric, and Bill Curtis had supposedly gotten the rights to convert the system to run on IBM equipment. The decision was made in early 1973 — primarily by John Cullinane, Jim Baker and Tom Muerer — to bet the company on the effort. Several executives joined the effort over the next three years, including Andrew Filipowski, Robert Goldman, Jon Nackerud, Ron McKinney, William Casey, Bob Davis, Bill Linn, and Ray Nawara.
IDMS was to be a great bet for the company, as it became the leader among many capable and popular products of the mainframe era. It competed with Cincom's Total, Software AG's ADABAS, Applied Data Research's DATACOM/DB, Computer Corporation of America's Model 204, MRI's (later Intel's) System 2000, and IBM's Information Management System (IMS) and DL/I. In 1976, the source code was sold to International Computers Limited (ICL), whose developers ported the software to run on their 2900 Series mainframes, and subsequently also on the older 1900 machines. ICL continued development of the software independently of Cullinane, selling the originally ported product under the name IDMS and an enhanced version as IDMSX. In this form, it was used by many large UK and international users — examples being the Pay-As-You-Earn system operated by Inland Revenue and a system for Barclays Bank in South Africa. Many of these systems were still running in 2010 on Fujitsu equipment. John Cullinane mentored a series of future entrepreneurs and software industry executives. One of the early executives was Andrew 'Flip' Filipowski, who later founded Platinum Technology, Inc. Another was Robert Goldman, who became the CEO of several public software companies, including AICorp. Jon Nackerud was a co-founder of Relational Technology, Inc., formed to commercialize the Ingres database management system. Prior to becoming a public company in 1978, the company's name was changed to Cullinane Database Systems, Inc. The company changed its name again to Cullinet Software in 1983, partly because John Cullinane wanted to distance his name from the personal connection to the business when he turned the company over to Bob Goldman, and also in a nod to the importance of computer networking (as evidenced by the company's simultaneous acquisition of Computer Pictures, whose microcomputer-based desktop system linked to IDMS data). Joe McNay, a board member, was particularly important regarding the company's IPO, the first ever in the software products industry. Greylock purchased some shares from John Cullinane in 1977, less than a year before the company was to go public. It was to be the early foundation on which Greylock's software technology investment prowess rested. It was Greylock's first investment in a software company. Cullinane's public offering was of note as it was the first successful offering of a pure software products company ever and the first software company Hambrecht & Quist ever took public. Cullinet was also the first software company to have a billion-dollar valuation, and the first to do a Super Bowl advertisement. Cullinane Database Systems, Inc., went public in 1978. On April 27, 1982, the company became the first computer software firm to be listed on the New York Stock Exchange and, later, the first to become a component stock of the S&P 500 Index. However, two quarters after the company went public, IBM introduced its 4300 series. Its salesmen told all mutual clients that IDMS did not run on the 4300 series and that all IBM software of the future would be built with IMS/DL1. This caused a major problem, as every IDMS customer went ballistic and every prospect went on hold. The company had only three months to solve this marketing and technical problem, and remarkably, it did. Technically, it required only the modification of one instruction to get IDMS running on a 4300. The solution to the company's revenue problem turned out to be its new Integrated Data Dictionary.
By moving very fast, the company used it to put IBM on the defensive and made its numbers, no small accomplishment. It then went from winning one out of five competitions to winning four out of five, and this fueled its growth. Beginning in 1979, in an attempt to promote less dependence on database sales alone, Cullinane fully integrated financial and manufacturing applications with IDMS and decision support systems, another first. The company acquired financial applications from McCormack & Dodge ("M&D"), a financial software company (acquired by Dun & Bradstreet later in 1983), and completely rewrote them using IDMS. It also acquired an MRP system from Rath & Strong and completely rewrote it using IDMS. Thus, Cullinet had a suite of integrated financial and manufacturing systems (called CIMS, the Cullinet Integrated Manufacturing System), the first on-line, database-driven applications, and was a major competitor in what is now called ERP. The company had become a software powerhouse. Eventually, it acquired a small Boston-based company called Computer Pictures, whose graphics-focused decision support system TrendSpotter had already been integrated with IDMS and was very successful. This team developed Goldengate, a Lotus Symphony-like PC product. Goldengate was a part of Cullinet's flawed ICMS (Information Center Management System). The promise of ICMS was the ability to move data between the mainframe and the PC desktop. Apple Computer was supposed to do the same for the Apple Lisa, but never delivered. ICMS was unveiled in 1983 as part of a splashy 20+ city closed-circuit TV broadcast that focused on IDMS/R and fueled the market for Cullinet for the next two years, but it was obvious that it was getting harder to maintain its unbroken string of quarters with sales and earnings growth in excess of 50%. The company should have developed PC-based IDMS development tools instead. Ironically, it had the technology under development which was later to become the foundation of PowerBuilder at Powersoft. In fairness, many failures mark the landscape in that space and era, including the Ovation product, introduced with great fanfare by Ovation Technologies in a race with Lotus's Symphony suite to create the early office suites later dominated by Microsoft Corp. Goldengate was built pre-Windows, which was expensive for Cullinet because of all the permutations and combinations of PC hardware and memory configurations. 1980s In 1983 John Cullinane, after 25 years in the software business, handed over the helm of Cullinet to Bob Goldman. Eventually the company ran into trouble, and Cullinane brought in a recent acquaintance, David Chapman, as CEO of the company. At the time, Cullinet had $50 million in cash reserves. David Chapman, a veteran IBM and Data General executive, started an aggressive campaign to acquire technology from other companies. The reason for bringing in Chapman was that the company had gotten hung up on the open architecture and relational issues. In other words, a company with an unparalleled record of outpositioning competition every two years, for sixteen years, including IBM, allowed itself to get outpositioned by IBM and others, with the help of E.F. Codd and C.J. Date. In 1986-87, Chapman attempted to move the company to the increasingly powerful minicomputers such as Digital Equipment Corporation's VAX line of computers. In the process, Cullinet acquired some very questionable VAX companies, but one had an outstanding relational DBMS.
By then it was too late — the company's $50 million of cash had been spent. In 1988, John Cullinane returned to Cullinet, fired Chapman and tried to salvage the company. By repositioning the company's product line with a new product called Enterprise Generator, he solved the open architecture problem, and the company was able to return to profitability by the fourth quarter, which made it possible to negotiate a deal with Charles Wang, head of Computer Associates. In 1989, Wang bought the company for $330 million in stock. It was a good deal for investors, which was reflected in the fact that shares of CA increased in value at least tenfold during the 1990s. It was a good deal for John Cullinane, too. Much later, CA Technologies (formerly CA, Inc. and Computer Associates International, Inc.) still marketed and supported the CA IDMS relational database system for IBM z/OS, z/VSE and z/VM, Fujitsu Siemens BS2000/OSD, Linux (CA IDMS Server), UNIX (CA IDMS Server) and Windows (CA IDMS Server). Products IDMS A CODASYL network database management system first developed at B.F. Goodrich. John Cullinane acquired the rights to market IDMS in the early 1970s. IDMS legacy systems are still being run today. Only a few customers have migrated to IDMS/R. IDMS/R This was an evolution of IDMS in approximately 1984 involving the addition of relational features. IDMS/SQL This was a completely separate database engine developed in California by Dr. Kapali Eswaran, who was originally from IBM's System R project. The company had also developed a 4GL for use with the database engine. The components were all named after planets. This product was designed to run on the Digital VAX system. Eswaran's company Esvel was acquired by Cullinet in July 1987 and its main product re-launched as IDMS/R. The 4GL was dropped in favour of one developed by Cancor, a Canadian company based in Mississauga, Ontario, which was acquired in January 1987. IDMS-DC A teleprocessing system similar to IBM's CICS system. When it was first released, it was reported that IBM challenged Cullinane to prove that the code had not violated copyright. This suspicion arose because many internal CICS codes begin with the initials "RH". Many IDMS-DC modules also begin with "RH", after its two authors, Nick Rini and Don Heitzmann, both employees of Cullinane. ADS/Online IDMS-DC helped spawn a fourth-generation (4GL) programming system called ADS/Online (Application Development System). The original name of the product was "AIDS". ADS/Online was a COBOL-like language and was successful because it competed against CICS, which tended to be used mainly by COBOL programmers. ADS/O was later ported to run directly in CICS and was adopted by nearly 1,500 companies. ADS/Batch A port of ADS/Online to the batch mainframe environment. It was not well received by Cullinet's customers. Culprit An RPG-like reporting tool. It was also marketed as a tool for use by auditors under the name EDP Auditor. Online Query (OLQ) A powerful online reporting tool. Online English An online reporting tool that used the "Intellect" natural language AI engine from Artificial Intelligence Corporation (AICorp). IDD (Integrated Data Dictionary) A renowned integrated data dictionary. PLUS An early, tape-based, source code management system. References External links Oral history interview with John Cullinane. Charles Babbage Institute, University of Minnesota. 
Discusses the firm's development and marketing of a number of new software products, including Culprit, the Library Update System, EDP Auditor, IDMS, and IDMS-DC. CA IDMS current main site Defunct software companies Software companies based in Massachusetts CA Technologies American companies established in 1968 Software companies established in 1968 Software companies disestablished in 1989 Software companies of the United States
46915582
https://en.wikipedia.org/wiki/PH7Builder
PH7Builder
pH7Builder (formerly known as pH7CMS and pH7 Social Dating CMS) is open-source social dating software that allows the creation of online communities and social dating services. pH7Builder is written in PHP 5.6, is object-oriented and uses the MVC (Model-View-Controller) pattern. The software is based on its in-house pH7Framework and is designed with the KISS principle in mind. For better flexibility, the software uses the PDO (PHP Data Objects) abstraction, which allows a choice of database. The development principle is DRY (Don't Repeat Yourself), aimed at reducing repetition of information of all kinds and avoiding duplicate code. It also aims to be fast, light on resources, powerful and secure. pH7CMS is distributed in two distinct packages: one free, with fewer features, no update/upgrade script and a license for personal sites only, and another sold for commercial sites, including premium features and update/upgrade scripts. Improvement history In pH7CMS 1.0.10, the template syntax was completely rewritten to be easier for Web designers to understand. pH7CMS 1.1 introduced a new password hashing algorithm and from then on used the password hashing API introduced in PHP 5.5. The version also included many bug fixes and some new features, and removed the Donation plugin from the Page module. pH7CMS 1.1.2 provided a major improvement to the Payment module, many bug fixes and better database language integration. pH7CMS 1.1.8 was the last version of the 1.1 branch. From version 1.2, the software has a fully responsive design. Version 1.2 is also more focused on dating features than the 1.0 and 1.1 branches. Since pH7CMS 1.2, the company has provided not only dating software but also a full social/dating business solution, with support from the initial "dating idea startup" stage through to a "profitable and popular online dating business". The service is mainly provided by e-Dating Marketing. With version 1.2.1, pH7CMS became the first dating software to offer Bitcoin as a payment gateway. Bitcoin is appreciated on dating websites because it allows people to make payments anonymously. pH7CMS 1.2.3 added a new module called "api", allowing pH7CMS to be used as a RESTful Web app; since that version, every pH7CMS installation has had a unique API key in its config.ini file. Integration with external software, sites or mobile apps (such as iOS and Android) is possible with minimal modification and maximum security. Better geolocation recognition was also implemented. pH7CMS 1.2.5 brought many bug fixes and improvements, including better display on small devices with the responsive theme. pH7CMS 1.2.7 was released on 24 December, just in time for Christmas. It included many improvements, such as better banner positions to increase the click-through rate, a benchmark visible when pH7CMS is in development mode, better search experiences with the new SISE (Smart Intuitive Search Engine) and better translations. Finally, the release included a much better CSV user importer and full compatibility with PHP 7+. Version 1.2.8 brought several improvements and bug fixes as usual, and added the possibility to enable or disable system modules and features. pH7CMS 1.4 integrated two-factor authentication (2FA) working with TOTP mobile apps. pH7CMS 2.0 added a nudity detector for easier moderation of uploaded photos. Starting from version 16.3.0, the software is distributed under the MIT license instead of GPLv3. 
System modules pH7CMS includes 31 native modules: Admin Panel Affiliate Blog Chat Chatroulette Comment Connect (Facebook, Twitter and Google Connect) Contact Error (allows the customization of error pages, e.g., 403, 404, 500) Field (Profile Fields) Forum Game Hot or Not (Random Profile Photo Rating) IM (Instant Messenger) Invite (invite friends by sending an invitation email) Lost-Password (requesting a new password for the User, Admin and Affiliate modules) Love Calculator Mail Newsletter Note Page Payment Picture Report (report an abusive user or content) User Video Webcam XML (RSS & Sitemap generator) HelloWorld (example module for mod developers) Template engines The pH7CMS core uses its in-house pH7Tpl engine and the installer uses Smarty. In addition, pH7CMS also includes PH7Xsl, an XSLT PHP template engine. Installation With almost every version, installation of the software has been improved and made easier. pH7CMS includes a Web setup wizard and is also available on Softaculous. Recognition Recommended Social Networking Software by BestHostingSearch References External links pH7CMS.com Official pH7CMS website pH7CMS on GitHub Official GitHub repository pH7CMS on SourceForge Free version available on SourceForge Social networking services Online dating applications Blog software Photo software
41901695
https://en.wikipedia.org/wiki/Jaggaer
Jaggaer
JAGGAER, formerly SciQuest, is a provider of cloud-based business automation technology for business spend management, headquartered in Morrisville, North Carolina, US. It has offices in Chicago, IL, US; Malvern, PA, US; Newtown Square, PA, US; Philadelphia, PA, US; Pittsburgh, PA, US; Vestal, NY, US; Abu Dhabi, UAE; Dubai, UAE; London, UK; Madrid, Spain; Mexico City, Mexico; Milan, Italy; Munich, Germany; Paris, France; Rawalpindi, Pakistan; Rome, Italy; Belgrade, Serbia; Shanghai, China; Singapore; Sydney, Australia; and Vienna, Austria. The company's tagline is Procurement Simplified. SciQuest conducted an IPO in 1999 following its establishment in 1995 as a B2B eCommerce exchange. In 2001, SciQuest transitioned from a B2B exchange company into eProcurement software and supplier enablement platforms. SciQuest was taken private in 2004 to continue its move into eProcurement, inventory management and accounts payable automation. SciQuest completed an IPO in September 2010, raising approximately $57 million. SciQuest was taken private in June 2016 as part of an acquisition by Accel-KKR, a private equity firm headquartered in Menlo Park, CA. Robert Bonavito became CEO of SciQuest in September 2016, bringing more than 30 years of leadership in the procurement and sourcing software space. Vic Chynoweth joined the company as CFO in May 2018. In Q1 of 2017, SciQuest underwent a rebranding, emerging as "JAGGAER" with a more directed focus on a complete, integrated source-to-pay suite, coupled with Advanced Sourcing and Chemical Inventory Management. Along with the name change, the company expanded its market focus to manufacturing, healthcare, consumer packaged goods, retail, education, life sciences, logistics and the public sector. JAGGAER acquired the European direct materials procurement specialist Pool4Tool in June 2017, giving it end-to-end direct as well as indirect materials procurement coverage. In Q4 of 2017, JAGGAER acquired spend management company BravoSolution, and entered into a joint venture with UAE-based Tejari. Gartner, Inc. named JAGGAER (Advantage) highest for Ability to Execute in its 2018 "Magic Quadrant for Strategic Sourcing Application Suites." JAGGAER was named as a Leader in Gartner, Inc.'s "Magic Quadrant for Procure-to-Pay Suites" in early 2018 and again in 2019. In February 2019 JAGGAER launched JAGGAER ONE, which unifies its full product suite on a single platform. In 2019 the UK-based private equity firm Cinven acquired a majority holding in the company. Jim Bureau was subsequently named JAGGAER's Chief Executive Officer. Bureau joined JAGGAER in 2018 and had been responsible for the company's customer success, sales and commercial operations globally. Prior to JAGGAER, Bureau served in senior leadership positions within several software organizations, including Verint Systems/KANA Software, Shared Health, Pegasystems and Oracle. Product Categories The JAGGAER ONE platform supports the following products: Spend Analytics Category Management Supplier Management Sourcing Contracts eProcurement Invoicing Inventory Management Supply Chain Collaboration Quality Management Acquisitions SciQuest acquired the following companies over the years: AECsoft - January 2011. Provider of supplier management and sourcing technology. Upside Software, Inc. - August 2012. Provider of contract lifecycle management (CLM) solutions. Spend Radar, LLC - October 2012. Provider of spend analysis software. 
CombineNet - September 2013. Provider of advanced sourcing software. JAGGAER has acquired the following companies over the years: POOL4TOOL - June 2017. Provider of direct sourcing and supply chain management software. BravoSolution - December 2017. Provider of global platform spend management solutions. References Application software Research Triangle Software companies based in North Carolina 1995 establishments in North Carolina Software companies established in 1995 Software companies of the United States Companies formerly listed on the Nasdaq 2010 initial public offerings 2016 mergers and acquisitions
38628314
https://en.wikipedia.org/wiki/Hardware%20stress%20test
Hardware stress test
A stress test (sometimes called a torture test) of hardware is a form of deliberately intense and thorough testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Reasons can include: to determine breaking points and safe usage limits; to confirm that the intended specifications are being met; to determine modes of failure (how exactly a system may fail), and to test stable operation of a part or system outside standard usage. Reliability engineers often test items under expected stress or even under accelerated stress in order to determine the operating life of the item or to determine modes of failure. The term stress test as it relates to hardware (including electronics, physical devices, nuclear power plants, etc.) is likely to have different refined meanings in specific contexts. One example is in materials, see Fatigue (material). Hardware stress test Stress testing, in general, should put computer hardware under exaggerated levels of stress in order to ensure stability when used in a normal environment. These can include extremes of workload, type of task, memory use, thermal load (heat), clock speed, or voltages. Memory and CPU are two components that are commonly stress tested in this way. There is considerable overlap between stress testing software and benchmarking software, since both seek to assess and measure maximum performance. Of the two, stress testing software aims to test stability by trying to force a system to fail; benchmarking aims to measure and assess the maximum performance possible at a given task or function. When modifying the operating parameters of a CPU, such as temperature, humidity, overclocking, underclocking, overvolting, and undervolting, it may be necessary to verify if the new parameters (usually CPU core voltage and frequency) are suitable for heavy CPU loads. This is done by running a CPU-intensive program for extended periods of time, to test whether the computer hangs or crashes. CPU stress testing is also referred to as torture testing. Software that is suitable for torture testing should typically run instructions that utilise the entire chip rather than only a few of its units. Stress testing a CPU over the course of 24 hours at 100% load is, in most cases, sufficient to determine that the CPU will function correctly in normal usage scenarios such as in a desktop computer, where CPU usage typically fluctuates at low levels (50% and under). Hardware stress testing and stability are subjective and may vary according to how the system will be used. A stress test for a system running 24/7 or that will perform error sensitive tasks such as distributed computing or "folding" projects may differ from one that needs to be able to run a single game with a reasonable amount of reliability. For example, a comprehensive guide on overclocking Sandy Bridge found that: Even though in the past IntelBurnTest was just as good, it seems that something in the SB uArch [Sandy Bridge microarchitecture] is more heavily stressed with Prime95 ... IBT really does pull more power [make greater thermal demands]. But ... Prime95 failed first every time, and it failed when IBT would pass. So same as Sandy Bridge, Prime95 is a better stability tester for Sandy Bridge-E than IBT/LinX. 
Stability is subjective; some might call stability enough to run their game, other like folders [folding projects] might need something that is just as stable as it was at stock, and ... would need to run Prime95 for at least 12 hours to a day or two to deem that stable ... There are [bench testers] who really don’t care for stability like that and will just say if it can [complete] a benchmark it is stable enough. No one is wrong and no one is right. Stability is subjective. [But] 24/7 stability is not subjective. An engineer at ASUS advised in a 2012 article on overclocking an Intel X79 system, that it is important to choose testing software carefully in order to obtain useful results: Unvalidated stress tests are not advised (such as Prime95 or LinX or other comparable applications). For high grade CPU/IMC and System Bus testing Aida64 is recommended along with general applications usage like PC Mark 7. Aida has an advantage as it is stability test has been designed for the Sandy Bridge E architecture and test specific functions like AES, AVX and other instruction sets that prime and like synthetics do not touch. As such not only does it load the CPU 100% but will also test other parts of CPU not used under applications like Prime 95. Other applications to consider are SiSoft 2012 or Passmark BurnIn. Be advised validation has not been completed using Prime 95 version 26 and LinX (10.3.7.012) and OCCT 4.1.0 beta 1 but once we have internally tested to ensure at least limited support and operation. See also Black box testing Burn-in Destructive testing Highly Accelerated Life Test Load and performance test tools Load testing Stress test for other uses (disambiguation) Stress testing (software) References Hardware testing Environmental testing
51141210
https://en.wikipedia.org/wiki/2016%20Democratic%20National%20Committee%20email%20leak
2016 Democratic National Committee email leak
The 2016 Democratic National Committee email leak is a collection of Democratic National Committee (DNC) emails stolen by one or more hackers operating under the pseudonym "Guccifer 2.0", who are alleged to be Russian intelligence agency hackers, according to indictments brought by the Mueller investigation. These emails were subsequently leaked by DCLeaks in June and July 2016 and by WikiLeaks on July 22, 2016, just before the 2016 Democratic National Convention. This collection included 19,252 emails and 8,034 attachments from the DNC, the governing body of the United States' Democratic Party. The leak includes emails from seven key DNC staff members, dating from January 2015 to May 2016. On November 6, 2016, WikiLeaks released a second batch of DNC emails, adding 8,263 emails to its collection. The leaks resulted in allegations of bias against Bernie Sanders' presidential campaign, in apparent contradiction with the DNC leadership's publicly stated neutrality, as several DNC operatives openly derided Sanders' campaign and discussed ways to advance Hillary Clinton's nomination. Later revelations included controversial DNC–Clinton agreements dated before the primary, regarding financial arrangements and control over policy and hiring decisions. The revelations prompted the resignation of DNC chair Debbie Wasserman Schultz before the 2016 Democratic National Convention. The DNC issued a formal apology to Bernie Sanders and his supporters "for the inexcusable remarks made over email" that did not reflect the DNC's "steadfast commitment to neutrality during the nominating process." After the convention, DNC CEO Amy Dacey, CFO Brad Marshall, and Communications Director Luis Miranda also resigned in the wake of the controversy. On December 9, 2016, the CIA told U.S. legislators that the U.S. Intelligence Community concluded Russia conducted operations during the 2016 U.S. election to prevent Hillary Clinton from winning the presidency. Multiple U.S. intelligence agencies concluded that people with direct ties to the Kremlin gave WikiLeaks hacked emails from the Democratic National Committee. WikiLeaks did not reveal its source. Later, Julian Assange, founder of WikiLeaks, claimed that the source of the emails was not Russia. On July 13, 2018, Special Counsel Robert Mueller indicted 12 Russian military intelligence agents of the group known as Fancy Bear, alleged to be responsible for the attack and to have operated the Guccifer 2.0 persona that claimed responsibility for it. Contents of leak The emails leaked by WikiLeaks, in two phases (the first on July 22, 2016 and the second on November 6, 2016), revealed information about the DNC's interactions with the media, Hillary Clinton's and Bernie Sanders' campaigns, and financial contributions. It also included personal information about the donors of the Democratic Party, including credit card and Social Security numbers, which could facilitate identity theft. Earlier, in late June 2016, Guccifer 2.0 instructed reporters to visit the DCLeaks website for emails stolen from Democrats. With the WikiLeaks disclosure of additional stolen emails beginning on July 22, 2016, more than 150,000 emails stolen either from personal Gmail addresses or from the DNC and related to the Hillary Clinton 2016 presidential campaign were published on the DCLeaks and WikiLeaks websites. On August 12, 2016, DCLeaks released information about more than 200 Democratic lawmakers, including their personal cellphone numbers.
The numerous prank calls that Hillary Clinton received after this disclosure, along with the loss of her campaign's email security, severely disrupted her campaign; on October 7, 2016, the campaign changed its contact information, calling each of her contacts one at a time. Media The emails include DNC staff's "off-the-record" correspondence with media personalities, including reporters at CNN, Politico, The Wall Street Journal, and The Washington Post. Bernie Sanders' campaign In the emails, DNC staffers derided the Sanders campaign. The Washington Post reported: "Many of the most damaging emails suggest the committee was actively trying to undermine Bernie Sanders's presidential campaign." In a May 2016 email chain, the DNC chief financial officer (CFO), Brad Marshall, told the DNC chief executive officer, Amy Dacey, that they should have someone from the media ask Sanders whether he was an atheist prior to the West Virginia primary. On May 21, 2016, DNC National Press Secretary Mark Paustenbach sent an email to DNC Spokesman Luis Miranda mentioning a controversy that ensued in December 2015, when the National Data Director of the Sanders campaign and three subordinate staffers accessed the Clinton campaign's voter information on the NGP VAN database. (The party accused Sanders' campaign of impropriety and briefly limited its access to the database. The Sanders campaign filed suit for breach of contract against the DNC, but dropped the suit on April 29, 2016.) Paustenbach suggested that the incident could be used to promote a "narrative for a story, which is that Bernie never had his act together, that his campaign was a mess." The DNC rejected this suggestion. The Washington Post wrote: "Paustenbach's suggestion, in that way, could be read as a defense of the committee rather than pushing negative information about Sanders. But this is still the committee pushing negative information about one of its candidates." Debbie Wasserman Schultz's emails Following the Nevada Democratic convention, Debbie Wasserman Schultz wrote about Jeff Weaver, manager of Bernie Sanders' campaign: "Damn liar. Particularly scummy that he barely acknowledges the violent and threatening behavior that occurred". In another email, Wasserman Schultz said of Bernie Sanders, "He isn't going to be president." Other emails showed her stating that Sanders did not understand the Democratic Party. In May 2016, MSNBC's Mika Brzezinski accused the DNC of bias against the Sanders campaign and called on Wasserman Schultz to step down. Wasserman Schultz was upset at the negative media coverage of her actions, and she emailed the political director of NBC News, Chuck Todd, saying that such coverage of her "must stop". Describing the coverage as the "LAST straw", she ordered the DNC's communications director to call MSNBC president Phil Griffin to demand an apology from Brzezinski. Financial and donor information According to The New York Times, the cache included "thousands of emails exchanged by Democratic officials and party fund-raisers, revealing in rarely seen detail the elaborate, ingratiating and often bluntly transactional exchanges necessary to harvest hundreds of millions of dollars from the party's wealthy donor class. The emails capture a world where seating charts are arranged with dollar totals in mind, where a White House celebration of gay pride is a thinly disguised occasion for rewarding wealthy donors and where physical proximity to the president is the most precious of currencies."
As is common in national politics, large party donors "were the subject of entire dossiers, as fund-raisers tried to gauge their interests, annoyances and passions." In a series of email exchanges in April and May 2016, DNC fundraising staff discussed and compiled a list of people (mainly donors) who might be appointed to federal boards and commissions. Center for Responsive Politics senior fellow Bob Biersack noted that this is a longstanding practice in the United States: "Big donors have always risen to the top of lists for appointment to plum ambassadorships and other boards and commissions around the federal landscape." The White House denied that financial support for the party was connected to board appointments, saying: "Being a donor does not get you a role in this administration, nor does it preclude you from getting one. We've said this for many years now and there's nothing in the emails that have been released that contradicts that." France In 2011, France, under President Nicolas Sarkozy, led calls for international intervention in the Libyan Civil War, voted in favor of United Nations Security Council Resolution 1973 and, subsequently, dispatched the French Air Force into direct military action in Libya in support of the National Transitional Council. At the time, France said the move was to protect Libyan civilians. But in a private email from Sidney Blumenthal to Hillary Clinton – revealed as part of the 2016 Democratic National Committee email leak – Blumenthal claimed France was more concerned with Libya's large gold reserves, which might pose a threat to the value of the Central African Franc, thereby weakening French influence in Africa, and that Sarkozy was interested in increased access to Libyan oil. Former French diplomat Patrick Haimzadeh said that Blumenthal's analysis, while it reflected a popular theory on conspiracy websites, was "not credible" because "the timeline just doesn't add up": Sarkozy's decision to intervene preceded knowledge of Gaddafi's plans. French investigative journalist Fabrice Arfi dismissed Blumenthal's claim as "far-fetched," while also acknowledging that even U.S. intelligence did not find France's publicly stated motivations for the Libya intervention to be entirely credible. Perpetrators Cybersecurity analysis A self-styled hacker going by the moniker "Guccifer 2.0" claimed to be the source of the leaks; WikiLeaks did not reveal its source. Cybersecurity experts and firms, including CrowdStrike, Fidelis Cybersecurity, Mandiant, SecureWorks, and ThreatConnect, and the editor for Ars Technica, stated the leak was part of a series of cyberattacks on the DNC committed by two Russian intelligence groups. U.S. intelligence agencies also stated (with "high confidence") that the Russian government was behind the theft of emails and documents from the DNC, according to reports in The New York Times and The Washington Post. WikiLeaks founder Julian Assange initially stuck to WikiLeaks' policy of neither confirming nor denying sources, but in January 2017 said that their "source is not the Russian government and it is not a state party", and the Russian government said it had no involvement. FBI Director James Comey testified that the FBI requested, but did not receive, physical access to the DNC servers. According to Comey, the FBI did obtain copies of the servers and all the information on them, as well as access to forensics from CrowdStrike, a third-party cybersecurity company that reviewed the DNC servers.
Comey said that access through CrowdStrike was an "appropriate substitute" and called the firm a "highly respected private company." United States intelligence conclusions On October 7, 2016, the United States Department of Homeland Security and the Office of the Director of National Intelligence stated that the US intelligence community was "confident" that the Russian government directed the breaches and the release of the obtained or allegedly obtained material in an attempt to "... interfere with the US election process." The U.S. Intelligence Community devoted resources to debating why Putin chose the summer of 2016 to escalate active measures influencing U.S. politics. Director of National Intelligence James R. Clapper said that after the 2011–13 Russian protests, Putin's confidence in his viability as a politician was damaged, and Putin responded with the propaganda operation. Former CIA officer Patrick Skinner explained that the goal was to spread uncertainty. U.S. Congressman Adam Schiff, Ranking Member of the House Permanent Select Committee on Intelligence, commented on Putin's aims, and said U.S. intelligence agencies were concerned with Russian propaganda. Speaking about disinformation that appeared in Hungary, Slovakia, the Czech Republic, and Poland, Schiff said there was an increase of the same behavior in the U.S. Schiff concluded that Russian propaganda operations would continue against the U.S. after the election. On December 9, 2016, the CIA told U.S. legislators the U.S. Intelligence Community concluded Russia conducted operations during the 2016 U.S. election to assist Donald Trump in winning the presidency. Multiple U.S. intelligence agencies concluded people with direct ties to the Kremlin gave WikiLeaks hacked emails from the DNC and additional sources such as John Podesta, campaign chairman for Hillary Clinton. These intelligence organizations additionally concluded that Russia attempted to hack the Republican National Committee (RNC) as well as the DNC, but was prevented by security defenses on the RNC network. In December 2016, the CIA said the foreign intelligence agents were Russian operatives previously known to the U.S. CIA officials told U.S. Senators it was "quite clear" Russia's intentions were to help Trump. Trump released a statement on December 9 disregarding the CIA's conclusions. In June 2017, former Secretary of Homeland Security Jeh Johnson, who was appointed by and served under President Barack Obama, testified before a House select committee that his department offered its assistance to the DNC during the campaign to determine what happened to their server, but said his efforts were "rebuffed" because the Department of Homeland Security was offering to provide assistance months after the FBI had already done so. Throughout late 2017 and into early 2018, numerous individuals gave testimony to the House Permanent Select Committee on Intelligence (HPSCI), which was charged with carrying out an investigation into the series of cyberattacks. Steele dossier allegations The Steele dossier, written in late 2016, contains several allegations related to the hacking and leaking of the emails. The individuals named have denied the allegations. Dossier sources alleged: That Russia was responsible for the DNC email hacks and the recent appearance of the stolen DNC e-mails on WikiLeaks, and that the reason for using WikiLeaks was "plausible deniability". (Report 95) That "the operation had been conducted with the full knowledge and support of TRUMP and senior members of his campaign team."
(Report 95) That after the emails were leaked to WikiLeaks, it was decided to not leak more, but to engage in misinformation: "Rather the tactics would be to spread rumours and misinformation about the content of what already had been leaked and make up new content." (Report 101) That Trump's foreign policy adviser Carter Page had "conceived and promoted" the idea of "leaking the DNC e-mails to WikiLeaks during the Democratic Convention" "to swing supporters of Bernie SANDERS away from Hillary CLINTON and across to TRUMP." (Reports 95, 102) That the hacking of the DNC servers was performed by Romanian hackers ultimately controlled by Putin and paid by both Trump and Putin. (Report 166) That Trump's personal attorney, Michael Cohen, had a secret meeting with Kremlin officials in Prague in August 2016, where he arranged "deniable cash payments" to the hackers and sought "to cover up all traces of the hacking operation", as well as "cover up ties between Trump and Russia, including Manafort's involvement in Ukraine". (Reports 135, 166) Trump has repeatedly denied the allegations, labeling the dossier as "discredited", "debunked", "fictitious", and "fake news". Paul Manafort has "denied taking part in any collusion with the Russian state, but registered himself as a foreign agent retroactively after it was revealed his firm received more than $17m working as a lobbyist for a pro-Russian Ukrainian party." Cohen has also denied the allegations against him. Page originally denied meeting any Russian officials, but his later testimony, acknowledging that he had met with senior Russian officials at Rosneft, has been interpreted as appearing to corroborate portions of the dossier. In his February 2019 testimony before Congress, Cohen implicated Trump, writing that Trump had knowledge that Roger Stone was communicating with Wikileaks about releasing emails stolen from the DNC in 2016. Reactions On July 18, 2016, Dmitry Peskov, press secretary for Russian president Vladimir Putin, stated that the Russian government had no involvement in the DNC hacking incident. Peskov called it "paranoid" and "absurd", saying: "We are again seeing these maniacal attempts to exploit the Russian theme in the US election campaign." That position was later reiterated by the Russian Embassy in Washington, DC, which called the allegation "entirely unrealistic". Then Republican nominee Donald Trump said on Twitter: "Leaked e-mails of DNC show plans to destroy Bernie Sanders. Mock his heritage and much more. On-line from Wikileakes , really vicious. RIGGED." The leak fueled tensions going into the 2016 Democratic National Convention: although DNC operatives initially denied accusations of bias, Sanders operatives and multiple media commentators cited the leaks as clear evidence that the DNC had been favoring Clinton and undermining Sanders. Several media commentators have disputed the significance of the emails, arguing that the DNC's internal preference for Clinton was not historically unusual and was unlikely to have swayed the final outcome of the primary; whereas many of Sanders' supporters viewed the revelations as symptomatic of an entrenched, unethical political establishment. On July 24, 2016, Sanders urged Wasserman Schultz to resign following the leak and stated that he was "disappointed" by the leak, but that he was "not shocked." Jeff Weaver, Bernie Sanders' campaign manager, called for greater accountability in the DNC, calling Wasserman Schultz "a figure of disunity" within the Democratic Party. 
Later the same day, Wasserman Schultz resigned from her position as DNC chair, effective as of the end of the nominating convention. After Wasserman Schultz resigned, Sanders said that she had "made the right decision for the future of the Democratic Party." On the following day, the DNC apologized to Bernie Sanders and his supporters, stating, "On behalf of everyone at the DNC, we want to offer a deep and sincere apology to Senator Sanders, his supporters, and the entire Democratic Party for the inexcusable remarks made over email," and that the emails did not reflect the DNC's "steadfast commitment to neutrality during the nominating process." On July 24, 2016, in an interview with NPR, former DNC chair and then-Governor of Virginia Terry McAuliffe said that the chair's job should be "to remain neutral": "I sat in that chair in 2004 trying to navigate all the different candidates we had. But if you had people in there who were trashing one of the candidates, I can tell you this, if I were still chairman they wouldn't be working there. I mean, that is just totally unacceptable behavior." On July 25, 2016, Anthony Zurcher, North America reporter for the BBC, commented that "the revelation that those in the heart of the Democratic establishment sought to undermine the anti-establishment Sanders is roughly on a par with [Casablanca character] police Capt Renault's professed shock that gambling was taking place in the Casablanca club he was raiding, as a waiter hands him his winnings." On July 25, 2016, Republican National Committee chairman Reince Priebus said that "Today's events show really what an uphill climb the Democrats are facing this week in unifying their party. Starting out the week by losing your party chairman over longstanding bitterness between factions is no way to keep something together." After the emails were released, the Australian diplomat Alexander Downer informed the U.S. government that, in May 2016 at a London wine bar, Trump campaign staffer George Papadopoulos had told him that the Russian government had a large trove of Hillary Clinton emails that could potentially damage her presidential campaign. The FBI started a counterintelligence investigation into possible Russian interference in the 2016 U.S. presidential election. On October 14, 2016, NBC News reported that multiple sources said Barack Obama had ordered the CIA to present him with options for a retaliatory cyber attack against the Russian Federation for allegedly interfering in the US presidential election. Sources said that this was not the first time the CIA had presented such options to a president, but that on all previous occasions the decision was made not to carry out the proposed attacks. Media coverage and public perception On July 27, 2016, The New York Times reported that Julian Assange, in an interview on British ITV on June 12, 2016, had "made it clear that he hoped to harm Hillary Clinton's chances of winning the presidency", and that in a later interview on the program Democracy Now! on July 25, 2016, the first day of the Democratic National Convention, he acknowledged that "he had timed their release to coincide with the Democratic convention." In an interview with CNN, Assange would neither confirm nor deny who WikiLeaks' sources were; he claimed that his website might release "a lot more material" relevant to the US electoral campaign.
Following the publication of the stolen emails, NSA whistleblower Edward Snowden criticized WikiLeaks for its wholesale leakage of data, writing that "their hostility to even modest curation is a mistake." The Washington Post contrasted the difference between WikiLeaks' practices and Snowden's disclosure of information about NSA: while Snowden worked with journalists to vet documents (withholding some where it would endanger national security), WikiLeaks' "more radical" approach involves the dumping of "massive, searchable caches online with few—if any—apparent efforts to remove sensitive personal information." On July 25, 2016, Anne Applebaum, columnist for The Washington Post, wrote that: ... with the exception of a few people on Twitter and a handful of print journalists, most of those covering this story, especially on television, are not interested in the nature of the hackers, and they are not asking why the Russians apparently chose to pass the emails on to WikiLeaks at this particular moment, on the eve of the Democratic National Convention. They are focusing instead on the content of what were meant to be private emails ... She went on to describe in detail other Russian destabilization campaigns in Eastern European countries. On July 25, 2016, Thomas Rid, Professor in Security Studies at King's College, London, and non-resident fellow at the School for Advanced International Studies, Johns Hopkins University, in Washington, DC, summed up the evidence pointing to Russia being behind the hacking of the DNC files and the "Guccifer-branded leaking operation". He concludes that these actions successfully blunted the "DNC's ability to use its opposition research in surprise against Trump..." He further writes that data exfiltration from political organizations is done by many countries and is considered to be a legitimate form of intelligence work. "But digitally exfiltrating and then publishing possibly manipulated documents disguised as freewheeling hacktivism is crossing a big red line and setting a dangerous precedent: an authoritarian country directly yet covertly trying to sabotage an American election." Russian security expert and investigative journalist Andrei Soldatov said "It is almost impossible to know for sure whether or not Russia is behind a hack of the DNC's servers". According to him, one of the reasons Russia would try to sway the US presidential election is that the Russian government considers Clinton "a hater of Russia": "There is this mentality in Russia of being besieged; that it is always under attack from the United States ... They are trying to interfere in our internal affairs so why not try to do the same thing to them?" Civil DNC lawsuit On April 20, 2018, the Democratic National Committee filed a civil lawsuit in federal court in New York, accusing the Russian government, the Trump campaign, Wikileaks, and others of conspiracy to alter the course of the 2016 presidential election and asking for monetary damages and a declaration admitting guilt. A hearing on the defendants' motions to dismiss was scheduled for May 17, 2018. In July 2019, the suit was dismissed with prejudice. In his judgement, federal judge John Koeltl said that although he believed the Russian government was involved in the hacking, US federal law generally prohibited suits against foreign governments. The judge said the other defendants, "did not participate in any wrongdoing in obtaining the materials in the first place" and were therefore within the law in publishing the information. 
He also said that the DNC's argument was "entirely divorced from the facts" and that even if the Russians had directly provided the hacked documents to the Trump team, it would not be criminal for the campaign to publish those documents, as long as it did not contribute to the hacking itself. Koeltl denied the defendants' motion for sanctions, but dismissed the suit with prejudice, meaning it had a substantive legal defect and could not be refiled. See also Democratic National Committee cyber attacks The Plot to Hack America Podesta emails Russian involvement in the 2016 United States presidential election References 2016 in American politics 2016 scandals Controversies of the 2016 United States presidential election Data breaches Email leak Email hacking Hillary Clinton controversies Information published by WikiLeaks July 2016 events in the United States Russian interference in the 2016 United States elections
227399
https://en.wikipedia.org/wiki/Amiga%20Unix
Amiga Unix
Amiga Unix (informally known as Amix) is a discontinued full port of the AT&T Unix System V Release 4 operating system, developed by Commodore-Amiga, Inc. in 1990 for the Amiga computer family as an alternative to AmigaOS, which shipped by default. Overview Bundled with the Amiga 3000UX, Commodore's Unix was one of the first ports of SVR4 to the 68k architecture. The Amiga A3000UX model even got the attention of Sun Microsystems, though ultimately nothing came of it. Unlike Apple's A/UX, Amiga Unix contained no compatibility layer to allow AmigaOS applications to run under Unix. With few native applications available to take advantage of the Amiga's significant multimedia capabilities, it failed to find a niche in the highly competitive Unix workstation market of the early 1990s. The A3000UX's price tag of $4,998 was also not very attractive compared to other Unix workstations at the time, such as the NeXTstation ($5,000 for a base system, with a full API and many times the number of applications available), the SGI Indigo (starting at $8,000), or the Personal DECstation 5000 Model 25 (starting at $5,000). Sun, HP, and IBM had similarly priced systems. The A3000UX's 68030 was noticeably underpowered compared to most of its RISC-based competitors. Unlike typical commercial Unix distributions of the time, Amiga Unix included the source code to the vendor-specific enhancements and platform-dependent device drivers (essentially any part that wasn't owned by AT&T), allowing interested users to study or enhance those parts of the system. However, this source code was subject to the same license terms as the binary part of the system; it was not free software. Amiga Unix also incorporated and depended upon many open source components, such as the GNU C Compiler and X Window System, and included their source code. Like many other proprietary Unix variants with small market shares, Amiga Unix vanished into the mists of computer history when its vendor, Commodore, went out of business. Today, Unix-like operating systems such as Minix, NetBSD, and Linux are available for the Amiga platform. See also Atari TT030, Unix workstation from Atari References External links Manual: Commodore, Amiga Unix, System V Release 4, Learning Amiga Unix (11/1990) The Very Unofficial Commodore Amiga Unix (AMIX) Wiki Video of AMIX running under FS-UAE Amiga Discontinued operating systems UNIX System V
11917935
https://en.wikipedia.org/wiki/PlayStation%20Portable%20system%20software
PlayStation Portable system software
The PlayStation Portable system software is the official firmware for the PlayStation Portable. It uses the XrossMediaBar (XMB) as its user interface, similar to the PlayStation 3 console. Updates add new functionality as well as security patches to prevent unsigned code from being executed on the system. Updates can be obtained in four ways: Direct download to the PSP over Wi-Fi. This can be performed by choosing [Settings], [System Update] from the XMB. Download to a PC, then transfer to the PSP via a USB cable or Memory Stick. Included on the UMD of some games. These games may not run with earlier firmware than the version on their UMD. See also List of PlayStation Portable system software compatibilities. Download from a PS3 to a PSP system via USB cable. (Japanese and American versions only) While system software updates can be used with consoles from any region, Sony recommends only downloading system software updates released for the region corresponding to the system's place of purchase. System software updates have added various features including a web browser, Adobe Flash Player 6 support, additional codecs for images, audio, and video, PlayStation 3 connectivity, as well as patches against several security exploits, vulnerabilities, and execution of homebrew programs. The battery must be at least 50% charged or else the system will prevent the update from installing. If the power supply is lost while writing to the system software, the console will no longer be able to operate unless the system is booted in service mode or sent to Sony for repair if still under warranty. The current version of the software, 6.61, was made available on January 15, 2015. It is a minor update released more than three years after the release of the previous version 6.60 in 2011. Technology Graphical shell The PlayStation Portable uses the XrossMediaBar (XMB) as its graphical user interface, which is also used in the PlayStation 3 (PS3) console, a variety of Sony BRAVIA HDTVs, Blu-ray disc players and many more Sony products. XMB displays icons horizontally across the screen that can be seen as categories. Users can navigate through them using the left and right buttons of the D-pad, which move the icons forward or back across the screen, highlighting just one at a time, as opposed to using any kind of pointer to select an option. When a category is selected, more specific options become available to select, spread vertically above and below the selected icon. Users may navigate among these options by using the up and down buttons of the D-pad. The basic features offered by XMB implementations vary based on device and software version. On the PSP console, the XMB had top-level icons for Photos, Music, Videos, Games, Networking (which allows the use of the web browser), Settings and Extras. Also, XMB offers a degree of multitasking. With the PSP, using the Home button while playing music would allow users to browse photos without stopping the music. While XMB proved to be a successful user interface for Sony products such as the PSP and PS3, the next-generation Sony video game consoles such as the PlayStation 4 and the PlayStation Vita no longer use this user interface. For example, the XMB is replaced by the LiveArea interface on the PS Vita. Web browser The PlayStation Portable comes with a web browser for browsing the Internet. The web browser is a version of the NetFront browser made by Access Co. Ltd. and was released for free with the 2.00 system software update.
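As a rough illustration of the XMB navigation model described above, with categories arranged on a horizontal axis and each category's options arranged vertically, here is a small conceptual sketch. The class, method, and category names are invented for this example and the lists are abbreviated; none of it corresponds to a real Sony API.

```python
# Conceptual model of XrossMediaBar-style navigation: left/right moves
# between categories, up/down moves within the selected category's options.
# Invented names; not a real Sony interface.


class XMBMenu:
    def __init__(self, categories: dict):
        self.names = list(categories)   # horizontal axis (category order)
        self.options = categories       # vertical axis per category
        self.cat = 0                    # index of the highlighted category
        self.opt = 0                    # index of the highlighted option

    def move_horizontal(self, step: int) -> None:
        """Left/right on the D-pad: switch category and reset the option cursor."""
        self.cat = (self.cat + step) % len(self.names)
        self.opt = 0

    def move_vertical(self, step: int) -> None:
        """Up/down on the D-pad: move within the current category's options."""
        opts = self.options[self.names[self.cat]]
        self.opt = (self.opt + step) % len(opts)

    def selection(self) -> tuple:
        name = self.names[self.cat]
        return name, self.options[name][self.opt]


if __name__ == "__main__":
    menu = XMBMenu({
        "Photo": ["Memory Stick"],
        "Music": ["Memory Stick"],
        "Game": ["Memory Stick", "UMD"],
        "Network": ["Internet Browser"],
        "Settings": ["System Update", "Security"],
    })
    menu.move_horizontal(+2)   # right twice: highlight "Game"
    menu.move_vertical(+1)     # down once: highlight "UMD"
    print(menu.selection())    # ('Game', 'UMD')
```

The one design point the sketch tries to capture is that the highlight is driven by two independent cursors rather than a free pointer, which is what lets the interface keep a single icon highlighted as the row of icons scrolls past it.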
The browser supports most common web technologies, such as HTTP cookies, forms, and CSS, as well as basic JavaScript capabilities. The version 2.50 upgrade added Unicode (UTF-8) character encoding and Auto-Select as options in the browser's encoding menu, and also introduced the saving of input history for online forms. Version 2.70 of the PSP's system software introduced basic Flash capabilities to the browser. However, the player runs Flash version 6, five iterations behind the then-current desktop version 11, making some websites difficult to view. There are three different rendering modes: "Normal", "Just-Fit", and "Smart-Fit". "Normal" will display the page with no changes, "Just-Fit" will attempt to shrink some elements to make the whole page fit on the screen while preserving the layout, and "Smart-Fit" will display content in the order it appears in the HTML with no size adjustments; instead, it will drop an element below the preceding element if it starts to go off the screen. The browser also has basic tabbed browsing capabilities, with a maximum of three tabs. When a website tries to open a link in a new window, the browser opens it in a new tab. Parents can limit content by enabling Browser Start Up Control, which blocks all access to the web browser, and by creating a 4-digit PIN under [Settings] in [Security]. Additionally, the browser can be configured to run under a proxy server and can be protected by the security PIN to enable the use of web filtering or monitoring software through a network. TrendMicro for PSP was later added as a feature that can be enabled via a subscription to filter or monitor content on the PSP. The PSP browser is slow compared to modern browsers and often runs out of memory due to limitations put in place by Sony. Homebrew alternatives to the browser have been released that utilize all 32/64 MB of the PSP's RAM, which allows the browser to load pages faster and have more memory for larger pages. Opera Mini can also be used on the PSP through PSPKVM, a homebrew application which is a Sun Java Virtual Machine. It was claimed to provide much faster loading times than the default browser, as well as better web page compatibility. Other features Like many other video game consoles, the PlayStation Portable is capable of photo, audio, and video playback in a variety of formats. However, unlike Sony's home consoles such as the PlayStation 3 and the PlayStation 4, it is not possible to play Blu-ray or DVD movies on the PlayStation Portable directly since it lacks a standard Blu-ray or DVD drive. While it does have a UMD drive and there exist UMD movies, the UMD format never saw implementation on any device other than the PlayStation Portable and, as a result, the market is very limited compared to those for other optical media formats. No more movies have been released on UMD since 2011, with Harry Potter and the Deathly Hallows – Part 2 being one of the final releases on the format. The PlayStation Portable also supports a feature known as Remote Play, which allows the PSP to access many features of a PlayStation 3 console from a remote location using the PS3's WLAN capabilities, a home network, or the Internet. However, unlike the later Remote Play feature between the PlayStation Vita and the PlayStation 4, the Remote Play capabilities between the PSP and the PS3 are much more limited.
Although most of the PS3's capabilities related to its main user interface are accessible with Remote Play, playback of DVDs, Blu-ray Discs, PlayStation 2 games, most PlayStation 3 games, and copy-protected files stored on the PS3's hard drive are not supported. Remote Play of actual PS3 games on the PSP is supported by only a select few titles. Furthermore, the PSP-2000, PSP-3000, and PSP-N1000 can use the Skype VoIP service starting with system software version 3.90. The service allows Skype calls to be made over Wi-Fi and, on the PSP Go, over the Bluetooth Modem feature. It is not possible to use the VoIP service on the original PSP-1000 console due to hardware limitations. Other services also existed for the PSP, such as Room for PlayStation Portable, similar to the PlayStation 3's online community-based service known as PlayStation Home. SHOUTcast Radio can also be listened to via a built-in app on most PSPs. Custom firmware Homebrew development was very popular during the time of the PlayStation Portable. Besides the official firmware (OFW) made by Sony, custom firmware (also written as Custom Firmware, or simply CFW) is also commonly seen on PlayStation Portable handheld consoles. Custom firmware allows the running of unsigned code such as homebrew applications, UMD .ISO files, emulators for other consoles, and PS1 games when the disc images are converted into an EBOOT file. This is in stark contrast to the official system software, where only code that has been signed by Sony can run. Notable custom firmware versions include the M33 Custom Firmware by Dark_AleX as well as those made by others such as the Custom Firmware 5.50GEN series, Minimum Edition (ME/LME) CFW and the PRO CFW. Both legally and illegally obtained content can be played on custom firmware, assuming that it is at the latest version (currently 6.61). During the early days of the PSP hacking scene, it was discovered that firmware 1.00 allowed unsigned code to run. While this firmware only existed on PSP-1000 models from Japan, many users imported these models to run and develop homebrew. An exploit was later discovered in firmware 1.50 that also allowed unsigned code to run. This opened up North American PSP-1000 systems for homebrew. Firmware 1.50 acted as the standard firmware for homebrew until the creation of eLoaders (which use various exploits to launch a homebrew "menu"), savegame exploits in games such as Grand Theft Auto: Liberty City Stories and Lumines: Puzzle Fusion, and eventually Dark_AleX's custom firmware releases, all of which allowed PSPs shipped after the 1.51 update's release to run homebrew. Sony had put significant effort into blocking custom firmware and other third-party devices and content from the PSP, but its efforts were in vain. In July 2007, Dark_AleX officially stopped his work on the PSP, citing perceived problems with Sony as one of the reasons for his departure, but other custom firmware versions continue to be developed or updated. In 2015, a homebrew tool known as Infinity was developed that allows users to permanently install CFW such as LME or PRO on all PSP models. This tool requires firmware 6.60 or 6.61.
See also Media Go XrossMediaBar LocationFree Player PlayStation Network List of PlayStation Portable system software compatibilities Other gaming platforms from Sony: PlayStation 4 system software PlayStation 3 system software PlayStation Vita system software Other gaming platforms from the next generation: Nintendo 3DS system software Wii U system software Xbox One system software Nintendo Switch system software Other gaming platforms from this generation: Nintendo DSi system software Wii system software Xbox 360 system software References Software Game console operating systems Mobile operating systems Proprietary operating systems
4818722
https://en.wikipedia.org/wiki/Certification%20Commission%20for%20Healthcare%20Information%20Technology
Certification Commission for Healthcare Information Technology
The Certification Commission for Health Information Technology (CCHIT) was an independent, 501(c)3 nonprofit organization with the public mission of accelerating adoption of robust, interoperable health information technology in the United States. The Commission certified electronic health record technology (EHR) from 2006 until 2014. It was approved by the Office of the National Coordinator for Health Information Technology (ONC) of the U.S. Department of Health and Human Services (HHS) as an Authorized Testing and Certification Body (ONC-ATCB). The CCHIT Certified program was an independently developed certification that included a rigorous inspection of an EHR's integrated functionality, interoperability and security using criteria developed by CCHIT's broadly representative, expert work groups. These products could also be certified in the ONC-ATCB certification program. History CCHIT was founded in 2004 with support from three leading industry associations in healthcare information management and technology: the American Health Information Management Association (AHIMA), the Healthcare Information and Management Systems Society (HIMSS) and the National Alliance for Health Information Technology (the Alliance). In September 2005, CCHIT was awarded a 3-year contract by the U.S. Department of Health and Human Services (HHS) to develop and evaluate the certification criteria and inspection process for EHRs and the networks through which they interoperate. In October 2006, HHS officially designated CCHIT as a Recognized Certification Body (RCB). In July 2010, HHS published new rules for recognizing testing and certification bodies, scheduled to take effect when it named the new bodies. In September 2010, the Office of the National Coordinator (ONC) of HHS named CCHIT again under these new rules, making CCHIT an ONC Authorized Testing and Certification Body (ONC-ATCB). Goals Reduce the risk of Healthcare Information Technology (HIT) investment by physicians and other providers Ensure interoperability (compatibility) of HIT products Assure payers and purchasers providing incentives for electronic health records (EHR) adoption that the ROI will be improved quality Protect the privacy of patients' personal health information. Operations CCHIT focused its first efforts on ambulatory EHR products for the office-based physician and provider and began commercial certification in May 2006. CCHIT then developed a process of certification for inpatient EHR products and launched that program in 2007. CCHIT then assessed the need for, and potential benefit of, certifying EHR for specialty medicine, special care settings, and special-needs populations. CCHIT, in a collaboration with the MITRE Corporation, also developed an open-source program called Laika to test EHR software for compliance with federally named interoperability standards. In January 2014, Information Week reported that CCHIT would exit the EHR certification business. On November 14, 2014, CCHIT ceased all operations. Announcements of CCHIT Certified Products On July 18, 2006, CCHIT released its first list of 20 certified ambulatory EMR and EHR products. On July 31, 2006, CCHIT announced that two additional EHR products had achieved certification. On October 23, 2006, CCHIT released its second list of 11 certified vendors. On April 30, 2007, CCHIT released its third list of 18 certified vendors.
On November 16, 2009, CCHIT released its initial draft criteria for Behavioral Health, Clinical Research, and Dermatology EHRs, with expected final publication available July 2010. Commissioners The Commission, chaired by Karen Bell, M.D., M.M.S, was composed of 21 members each serving two-year terms. Stakeholders Certified EHR products benefit many interested groups and individuals: Physicians, hospitals, health care systems, safety net providers, public health agencies and other purchasers of HIT products, who seek quality, interoperability, data portability and security Purchasers and payers – from government to the private sector – who are prepared to offer financial incentives for HIT adoption but need the assurance of having a mechanism in place to ensure that products deliver the expected benefits Quality improvement organizations that seek out an efficient means of measuring that criteria have been assessed and met Standards development and informatics experts that gain consensus on standards Vendors who benefit from having to meet a single set of criteria and from having a voice in the process Healthcare consumers, ultimately the most important stakeholders, who will benefit from a reliable, accurate and secure record of their health CCHIT and its volunteer work groups strove to fairly represent the interests of each of these diverse groups in an open forum, communicating the progress of its work and seeking input from all quarters. CCHIT received the endorsements of a number of professional medical organizations, including the American Academy of Family Physicians, the American Academy of Pediatrics, the American College of Physicians, the Physicians' Foundation for Health Systems Excellence and Physicians' Foundation for Health Systems Innovation. See also Electronic health record Notes External links Official website Electronic health records Defunct organizations based in the United States Office of the National Coordinator for Health Information Technology
38081531
https://en.wikipedia.org/wiki/Bromium
Bromium
Bromium was a venture capital–backed startup based in Cupertino, California that worked with virtualization technology. Bromium focused on virtual hardware claiming to reduce or eliminate endpoint computer threats like viruses, malware, and adware. HP Inc. acquired the company in September 2019. History Bromium was founded in 2010 by Gaurav Banga, who was later joined by former Citrix and XenSource executives Simon Crosby and Ian Pratt. By 2013 the company had raised a total of $75.7 million in three rounds of venture funding. The rounds raised $9.2 million, $26.5 million, and $40 million respectively with venture firms such as Andreessen Horowitz, Ignition Partners, Lightspeed Venture Partners, Highland Capital Partners, Intel Capital, and Meritech Capital Partners. Bromium shipped its first product, vSentry 1.0, in September 2012. Notable early clients included the New York Stock Exchange and ADP. In February 2014, the company published information about bypassing several key defenses in Microsoft's Enhanced Mitigation Experience Toolkit (EMET) by taking advantage of the inherent weakness of its reliance on known vectors of return-oriented programming (ROP) attack methods. In February 2017, HP and Bromium announced a partnership to build and ship a laptop with micro-virtualization technology built in, starting with the HP EliteBook x360. In September 2019, HP announced it had acquired Bromium for an undisclosed sum. Technology Bromium's technology is called micro-virtualization, which is designed to protect computers from malicious code execution initiated by the end user, including rogue web links, email attachments and downloaded files. Its virtualization technology relies on hardware isolation for protection. It is implemented by a late-load hypervisor called a Microvisor, which is based on the open source Xen hypervisor. The Microvisor is similar in concept to a traditional hypervisor installed on a server or desktop computer's operating system. Traditional virtual machines are full versions of an operating system, but the Microvisor uses the hardware virtualization features present in modern desktop processors to create specialized virtual machines tailored to support specific tasks called micro-VMs. When a new application is opened, a link is clicked on, or an email attachment is downloaded, the Microvisor creates a micro-VM tailored to that specific task allowing access to only those resources required to execute. By placing all vulnerable tasks inside micro-VMs that are tied to the hardware, there is no way for malware to escape through a sandbox layer and attack the host environment (i.e. the operating system in which micro-VMs are executed). Each process gets its own micro-VM, and that virtual machine is disposed when the process stops, destroying any malware with it. The Microvisor enforces the principle of least privilege by isolating all applications and operating system functions within a micro-VM from interacting with any other micro-VM, the protected desktop system, or the network the protected desktop is embedded in. The architecture specifically relies on x86 virtualization to guarantee that task-specific mandatory access control (MAC) policies will be executed whenever a micro-VM attempts to access key Windows services. Since Micro-VMs are hardware-isolated from each other and from the protected operating system, trusted and untrusted tasks can coexist on a single system with mutual isolation. 
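The per-task lifecycle described above, in which an isolated environment is created when an untrusted task starts, is given only the resources that task needs, and is thrown away when the task ends, can be sketched conceptually as follows. This toy example uses an ordinary OS process and a throwaway directory purely to model that flow; it provides none of the hardware-enforced VT-x/EPT isolation the Microvisor relies on, and every name in it is invented for illustration.

```python
# Conceptual sketch of the micro-VM lifecycle: one disposable, least-privilege
# environment per untrusted task, destroyed when the task completes. An
# ordinary subprocess and scratch directory stand in for a real micro-VM;
# this offers NONE of the hardware-enforced isolation described above.
import shutil
import subprocess
import tempfile


def run_in_disposable_env(command: list, allowed_paths: list) -> int:
    """Run one untrusted task in a throwaway directory, then discard it."""
    scratch = tempfile.mkdtemp(prefix="micro_task_")  # per-task environment
    try:
        # Copy in only the resources this particular task is allowed to see,
        # loosely modelling a task-specific mandatory access control policy.
        for path in allowed_paths:
            shutil.copy(path, scratch)
        result = subprocess.run(command, cwd=scratch, timeout=60)
        return result.returncode
    finally:
        # Dispose of the environment: anything the task wrote (including any
        # malware it may have dropped) is destroyed along with it.
        shutil.rmtree(scratch, ignore_errors=True)


if __name__ == "__main__":
    # Hypothetical task standing in for "render one downloaded attachment".
    rc = run_in_disposable_env(
        ["python3", "-c", "print('processing untrusted input')"],
        allowed_paths=[],
    )
    print("task exited with code", rc)
```

The point of the sketch is the lifecycle (create per task, restrict, dispose), not the isolation mechanism; in Bromium's design the isolation comes from the processor's virtualization extensions rather than from the operating system.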
The Microvisor’s attack surface is extremely narrow, making exploits prohibitively expensive to execute. A report from NSS Labs detailed penetration testing of the Bromium architecture, in which it achieved a perfect score, defeating all malware samples and expert human attempts at penetration. Products vSentry 1.0 was available for Windows 7. vSentry required an Intel processor with VT-x and EPT. vSentry 2.0 became available in June 2013 and added a feature that protects users when exchanging documents. Bromium Live Attack Visualization and Analysis (LAVA) was released in 2014 and provided the ability to collect attack data detected within a micro-VM for analysis, and supported Structured Threat Information eXpression (STIX), at the time an emerging XML standard for threat information. vSentry 3.0 became available in December 2015 and included support for behavioral analysis of executable code. See also Qubes OS References External links Computer security companies Companies based in Cupertino, California Software companies based in the San Francisco Bay Area Hewlett-Packard acquisitions Software companies of the United States 2010 establishments in the United States Software companies established in 2010 2010 establishments in California
1702318
https://en.wikipedia.org/wiki/Jeffrey%20Ullman
Jeffrey Ullman
Jeffrey David Ullman (born November 22, 1942) is an American computer scientist and the Stanford W. Ascherman Professor of Engineering, Emeritus, at Stanford University. His textbooks on compilers (various editions are popularly known as the dragon book), theory of computation (also known as the Cinderella book), data structures, and databases are regarded as standards in their fields. He and his long-time collaborator Alfred Aho are the recipients of the 2020 Turing Award, generally recognized as the highest distinction in computer science. Career Ullman received a Bachelor of Science degree in Engineering Mathematics from Columbia University in 1963 and his Ph.D. in Electrical Engineering from Princeton University in 1966. He then worked for three years at Bell Labs. In 1969, he returned to Princeton as an associate professor, and was promoted to full professor in 1974. Ullman moved to Stanford University in 1979, and served as the department chair from 1990 to 1994. He was named the Stanford W. Ascherman Professor of Computer Science in 1994, and became an emeritus professor in 2003. In 1994 Ullman was inducted as a Fellow of the Association for Computing Machinery; in 2000 he was awarded the Knuth Prize. Ullman is the co-recipient (with John Hopcroft) of the 2010 IEEE John von Neumann Medal "For laying the foundations for the fields of automata and language theory and many seminal contributions to theoretical computer science." Ullman, Hopcroft, and Alfred Aho were co-recipients of the 2017 C&C Prize awarded by NEC Corporation. Ullman's research interests include database theory, data integration, data mining, and education using online infrastructure. He is one of the founders of the field of database theory: many of his Ph.D. students became influential in the field as well. He was the Ph.D. advisor of Sergey Brin, one of the co-founders of Google, and served on Google's technical advisory board. He is a founder of Gradiance Corporation, which provides homework grading support for college courses. He teaches courses on automata and mining massive datasets on the Stanford Online learning platform. Ullman was elected as a member of the National Academy of Sciences in 2020. He also sits on the advisory board of TheOpenCode Foundation. On March 31, 2021, he and Aho were named recipients of the 2020 Turing Award. Controversies In 2011, Ullman stated his opposition to assisting Iranians in becoming graduate students at Stanford, because of the anti-Israel position of the Iranian government. In response to a call by the National Iranian American Council for disciplinary action against Ullman for what they described as his "racially discriminatory and inflammatory" comments, a Stanford spokesperson stated that Ullman was expressing his own personal views and not the views of the university, and that he was uninvolved in admissions. In April 2021, an open letter by CSForInclusion criticized ACM and the ACM A.M. Turing Award Committee for nominating and selecting Ullman as recipient of the ACM A.M. Turing award. ACM reconfirmed its commitments to inclusion and diversity in a response to the letter. Books Mining of Massive Datasets (with Jure Leskovec and Anand Rajaraman), Prentice-Hall, Second edition 2014. Database Systems: The Complete Book (with H. Garcia-Molina and J. Widom), Prentice-Hall, Englewood Cliffs, NJ, 2002. Introduction to Automata Theory, Languages, and Computation (with J. E. Hopcroft and R. Motwani), Addison-Wesley, Reading MA, 1969, 1979, 2000.
Elements of ML Programming, Prentice-Hall, Englewood Cliffs, NJ, 1993, 1998. A First Course in Database Systems (with J. Widom), Prentice-Hall, Englewood Cliffs, NJ, 1997, 2002. Foundations of Computer Science (with A. V. Aho), Computer Science Press, New York, 1992. C edition, 1995. Principles of Database and Knowledge-Base Systems (two volumes), Computer Science Press, New York, 1988, 1989. Volume 1: Classical Database Systems Volume 2: The New Technologies Compilers: Principles, Techniques, and Tools (with A. V. Aho and R. Sethi), Addison-Wesley, Reading MA, 1977, 1986. Computational Aspects of VLSI, Computer Science Press, 1984 Data Structures and Algorithms (with A. V. Aho and J. E. Hopcroft), Addison-Wesley, Reading MA, 1983. Principles of Compiler Design (with A. V. Aho), Addison-Wesley, Reading, MA, 1977. Fundamental Concepts of Programming Systems, Addison-Wesley, Reading MA, 1976. The Design and Analysis of Computer Algorithms (with A. V. Aho and J. E. Hopcroft), Addison-Wesley, Reading MA, 1974. Formal Languages and Their Relation to Automata (with J. E. Hopcroft), Addison-Wesley, Reading MA, 1969. References External links 1942 births Living people Database researchers Fellows of the Association for Computing Machinery Scientists at Bell Labs Knuth Prize laureates Columbia School of Engineering and Applied Science alumni Princeton University alumni Stanford University School of Engineering faculty Turing Award laureates American computer scientists Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Engineering Anti-Iranian sentiments People associated with the National College of Ireland
19880901
https://en.wikipedia.org/wiki/Bob%20McCaffrey
Bob McCaffrey
Robert Alan McCaffrey (born April 16, 1952 in Bakersfield, California) is a former National Football League center who had a notable career as a student-athlete on the University of Southern California (USC) Trojans football team. After playing at Garces Memorial High School in Bakersfield, California, McCaffrey played football at the University of Southern California, where he lettered for three seasons, 1972-74. The Trojans won national championships and played in the Rose Bowl in 1972 and 1974. He was honored as USC's Lineman of the Year in 1974 and junior varsity MVP in 1971. He played in the 1975 Chicago Charities College All-Star Game, where a team of star college seniors played the Super Bowl IX champion Pittsburgh Steelers, losing 21-14. He graduated from USC in 1975. Professional career McCaffrey was drafted by the Green Bay Packers in the 1975 NFL Draft and played one season before retiring. Personal The son of McCaffrey and his wife Karen, Brent McCaffrey, played football for USC as a left tackle, lettering for three seasons (1998–2000). After football, McCaffrey joined his father-in-law, John Bonadelle, and became a real estate developer in Fresno, California. He now heads The McCaffrey Group and is on the Board of Directors of the Building Industry Association of Fresno/Madera Counties, having previously served as Chairman of the Board. References External links The McCaffrey Group, McCaffrey's company website 1952 births Living people American football centers USC Trojans football players Green Bay Packers players Players of American football from Bakersfield, California
65645633
https://en.wikipedia.org/wiki/Zero%20Day%20Initiative
Zero Day Initiative
Zero Day Initiative (ZDI) is an international software vulnerability initiative that was started in 2005 by TippingPoint, a division of 3Com. The program was acquired by Trend Micro as a part of the HP TippingPoint acquisition in 2015. ZDI buys various software vulnerabilities from independent security researchers, and then discloses these vulnerabilities to their original vendors for patching before making such information public. History ZDI was started on July 25, 2005 by TippingPoint and was initially led by David Endler and Pedram Amini. The "zero-day" in ZDI's name refers to the first time, or Day Zero, when a vendor becomes aware of a vulnerability in a specific piece of software. The program was launched to give cash rewards to software vulnerability researchers and hackers if they could demonstrate exploits in a variety of software. Due to a lack of incentives, as well as safety and confidentiality concerns, researchers and hackers are often deterred from approaching vendors when they find vulnerabilities in their software. ZDI was created as a third-party program to collect and incentivize finding such vulnerabilities, while protecting both the researchers and the sensitive information behind the vulnerabilities. ZDI contributors have found security vulnerabilities in products such as Firefox 3, Microsoft Windows, QuickTime for Windows, and in a variety of Adobe products. ZDI also conducts internal research for vulnerabilities and has found many in Adobe products, Microsoft products, VMware products, and Oracle Java. In 2016, ZDI was the top external supplier of bugs for both Microsoft and Adobe, having "purchased and disclosed 22% of publicly discovered Microsoft vulnerabilities and 28% of publicly disclosed vulnerabilities found in Adobe software." ZDI also adjudicates the Pwn2Own hacking competition, which occurs three times a year and at which teams of hackers can take home cash prizes and the software and hardware devices they have successfully exploited. Buying exploits There has been criticism of the sale of software exploits, as well as of the entities that buy such vulnerabilities. Although the practice is legal, its ethics are frequently questioned. Most critics are concerned about what can happen to software exploits once they are sold. Hackers and researchers who find flaws in software can sell those vulnerabilities to government agencies, to third-party companies, on the black market, or to the software vendors themselves. The fair market value and black market value of software exploits differ greatly (often by tens of thousands of dollars), as do the implications of purchasing software vulnerabilities. This combination of concerns has led to the rise of third-party programs such as ZDI and others as places for security researchers to report and sell vulnerabilities. ZDI receives submissions for vulnerabilities such as remote code execution, elevation of privilege, and information disclosure, but "it does not purchase every type of bug, including cross-site scripting (XSS) ones that dominate many bug bounty programs." References External links Official website 2005 establishments
5348055
https://en.wikipedia.org/wiki/Billy%20Roche
Billy Roche
Billy Roche (born 11 January 1949) is an Irish playwright and actor. He was born and still lives in Wexford, and most of his writings are based there. Originally a singer with The Roach Band, he turned to writing in the 1980s. He has written a number of plays, including The Wexford Trilogy. He also wrote the screenplay for Trojan Eddie and has published a novel, Tumbling Down, and a book of short stories. Career The Wexford Trilogy Roche is best known for the three full-length plays forming The Wexford Trilogy, all premiered at the Bush Theatre in London, directed by Robin Lefevre: A Handful of Stars (1988) Set in the sleazy pool room of a Wexford snooker club: "If the stars are the twinkling illusion of a smile on a woman's face, adolescent longings soon contrive to send one boy up the aisle to a shotgun wedding and the other down river to face penal retribution." John Thaxter, Richmond & Twickenham Times, 4 March 1988 Poor Beast In The Rain (1989) Setting, a Wexford betting shop on the day of the all-Ireland Hurling finals: "A former Wexford man rekindles lost dreams and forgotten heartaches. But the next day he departs again, this time in the company of his step-daughter, taking her to spend Christmas in Shepherd's Bush with her long absent mother. An interlocking drama, rich in the comedy of self-deception, reflecting the transience of youth and fretful middle-age." Ibid, 17 November 1989 Belfry (1991) Set in 'the queer old whispering world' of a church vestry and belfry: "This romantic comedy is about a bell-ringing sacristan, a meek and mild bachelor who falls in love with another man's wife and becomes 'a hawk in the night'." Ibid, 22 November 1991 The three plays were also directed by Stuart Burge for BBC television in 1993 with the original Bush cast members. As Michael Billington has noted, the 1980s were not a good decade for new dramatists and one can point to only a handful who made any significant mark. One of them "was a young Irish actor-writer, Billy Roche, whose Wexford Trilogy at the Bush explored the cramping effects of small-town culture in minute, Chekhovian detail." Other work Theatre His dramatic work includes Amphibians (RSC 1992); The Cavalcaders (Abbey Theatre, Dublin 1993; Royal Court 1994); and On Such As We (Abbey Theatre, Dublin 2001). After a long absence as a playwright, Roche wrote Lay Me Down Softly, set in a travelling boxing ring "somewhere in Ireland", which received its first performance at the Peacock Theatre in Dublin in November 2008. As an actor, he has appeared in Aristocrats by Brian Friel (Hampstead Theatre 1988), The Cavalcaders (1993), Trojan Eddie (1997), Man About Dog (film comedy 2004) and The Eclipse (2009), a film based loosely on a short story penned by Roche. Films He wrote the screenplay for Trojan Eddie (Film Four/Irish Screen, 1997) starring Richard Harris and Stephen Rea. Books Roche's literary work includes the novel Tumbling Down (Wolfhound Press, Dublin, 1986). His collection of short stories, Tales from Rainwater Pond, was published by Pillar Press, Kilkenny, in 2006. He updated and re-released Tumbling Down in a collectors' edition, published by Tassel Press, in May 2008. Tutoring In 2005, Roche handpicked students from all over Wexford for tutoring. Together they produced the first 'Novus' magazine, which went on sale a number of days after the group disbanded. 
These students, who were tutored by Roche and his longtime friend Eoin Colfer (author of the internationally acclaimed Artemis Fowl novels), were the first in a long line of students under Roche's coaching. Roche and Colfer worked with each student on their own short stories, helping them make changes that better suited the stories. Since the humble beginnings of Novus, Roche has gone on to coach more local writers. This young group of writers associated with Roche has produced two books of work, Inked (2007) and Inked 2 (2008), perhaps the best of what has come from Roche's tutoring. In 2007 he was elected a member of Aosdána. References Sources Theatre Record and its annual Indexes Halliwell's Film Companion External links Irish Theatre Institute playography: Billy Roche Doollee.com playwright database: Billy Roche 1949 births Living people Aosdána members Irish dramatists and playwrights Irish male dramatists and playwrights People from County Wexford
15959539
https://en.wikipedia.org/wiki/Rutgers%20School%20of%20Communication%20and%20Information
Rutgers School of Communication and Information
The School of Communication and Information (SC&I) is a professional school within the New Brunswick Campus of Rutgers, The State University of New Jersey. The school was created in 1982 as a result of a merger between the Graduate School of Library and Information Studies, the School of Communication Studies, and the Livingston Department of Urban Journalism. The school has about 2,500 students at the undergraduate, masters, and doctoral levels, and about 60 full-time faculty. The graduate program in information has been ranked number 7 in the nation, with the specialization in school library media ranked 2nd and several other specializations in the top ten, by U.S. News & World Report. History Although SC&I was established in 1982, the roots of the academic programs housed at the school date back to the 1920s. 1926 Undergraduate program in Journalism established at Rutgers College 1927 Undergraduate program in librarianship established at the New Jersey College for Women, later Douglass College. This became defunct two decades later. 1953 Graduate School of Library Service (GSLS) opens its doors to its first class of master’s students. 1971 Undergraduate major in Communication established 1978 Name of the GSLS changes to Graduate School of Library and Information Studies 1982 School of Communication, Information and Library Studies is established. At the time of its inception, the school offered two undergraduate majors (Communication, Journalism and Mass Media), a master's degree in Library Service, and established an interdisciplinary doctoral program 1983 Names of the departments are denoted as Department of Communication, Department of Journalism and Mass Media, and Department of Library and Information Studies 1987 Master of Communication and Information Studies established 2001 Undergraduate major in Information Technology and Informatics begins accepting students 2005 Online Master of Library and Information Science program admits its first students 2009 Name changed from School of Communication, Information and Library Studies to School of Communication and Information. Academic departments Communication Students and faculty in the Department of Communication study the nature and effects of communication on individuals, social groups, and society, including the ways in which communication is practiced in everyday life and the choices about communication that affect individuals and their situation. This program was founded as an undergraduate program in 1971. Organizational communication, mediated communication, language and social interaction, and interpersonal communication are primary areas of faculty research with change, collaboration, culture, health, gender, globalization, identity, leadership, persuasion, policy, and relationship development prominent problem-centered research foci across areas. Department Chair Craig R. Scott Journalism and Media Studies The Department of Journalism and Media Studies is concerned with the relationships among media texts, institutions, and audiences, especially in the way that media and society affect each other politically, culturally, and socially. This includes study of both the “traditional” mass media and newer electronic technologies and telecommunications. The Journalism and Media Studies program was founded in 1926. 
Research examines media content and effects; audience reception and interpretive processes; the emergence of audiences understood in terms of race, age, gender, class, and politics; the sociology and production of culture; communication law, regulation, and policy; and the media’s roles in political and international communication and in educational systems. Department Chair Susan Keith Library and Information Science The Department of Library and Information Science focuses on the role of information in personal, social, institutional, national, and international contexts. Information-seeking activity, information retrieval systems, and information structures are core research interests. These research interests involve considerations of design, management, and evaluation of information systems and services, along with the development and assessment of tools responsive to the information needs of users. Digital libraries, school libraries and youth services, knowledge management, and information personalization are areas of notable emphasis within the department. The program was founded in 1927. Department Chair Marie L. Radford Centers and Labs Center for Communication and Health Issues CHI is a consortium of educators, counselors and students with a mission to conduct research on communication and health issues affecting college students and to design, implement and evaluate campus and community-based education, intervention and prevention programs. It was founded in 1997 by Communication Professors Linda C. Lederman and Lea P. Stewart, Health educators Richard Powell and Fern Goodhart, and substance abuse counselor Lisa Laitman, as an ongoing collaboration. Center for International Scholarship in School Libraries (CISSL) CISSL is dedicated to research, scholarship, education and consultancy for school library professionals. It focuses on how learning in an information-age school is enabled and demonstrated by school library programs, and how inquiry-based learning and teaching processes can contribute to educational success and workplace readiness for learners. CISSL’s Director is Professor Carol Kuhlthau and Professor Ross Todd is Director of Research. Center for Language, Interaction and Health (CLIH) CLIH is a collaborative scientific community of interaction analysts dedicated to developing new insights into three key areas of social interaction: medical interaction, mental health interaction and family interactions related to food and nutrition. The Director of the Center is Alexa Hepburn and Co-Directors are Galina Bolden, Jenny Mandelbaum and Lisa Mikesell. Center for Organizational Development and Leadership (ODL) The Center for Organizational Development and Leadership serves as a resource to the university community in support of efforts to create a more service-oriented culture. Emphasis is placed on relationship building and "teaching in all we do" - inside and outside of the classroom. Education and instruction, consultation and facilitation, and research and development in organizational leadership are core focal areas. NetSCI Lab The NetSCI lab is dedicated to producing cutting-edge networks research, advancing theories of social networks, methods for network analysis, and the practical application of networks research. Researchers in the lab are focused on the study of organizations and communities across multiple levels of interaction, connecting theory to practice, and informing the design of networks in everyday life. 
SALTS Lab - Laboratory for the Study of Applied Language Technology and Society SALTS, the Laboratory for the Study of Applied Language Technology and Society at the School of Communication and Information, Rutgers University, brings together researchers interested in developing and/or using next-generation natural language processing technology that supports communication across cultural and social boundaries in areas such as digital libraries, education, public health, humanities, linguistics and communication. NETWORKS Social Media and Society Cluster The Social Media & Society Cluster is a transdisciplinary unit within Rutgers’ School of Communication and Information that supports research that extends across the boundaries of the i-School, communication, and media studies programs within the School. Student organizations African American Culture and Communication Association Association for Information Science and Technology Association of Black Journalists Association for Women in Communications Doctoral Student Association Gamma Nu Eta (Information Technology Honor Society) Information Technology and Informatics Council International Association of Business Communicators Kappa Tau Alpha (Journalism and Media Studies Honor Society) Lambda Pi Eta (Communication Honor Society) Library and Information Science Student Association (American Library Association Student Chapter) Master of Communication and Information Graduate Student Association Public Relations Student Society of America Rutgers Association of School Librarians Rutgers University Debate Union Society of Professional Journalists Special Libraries Association Student College, Academic, and Research Library Association Student Organization for Unique and Rare Collections Everywhere Core Faculty Members Communication Mark Aakhus (Social Interaction, Organizational & Mediated Communication) Mark Beal (Entertainment Communication, Social Media) Galina Bolden (Language & Social Interaction) Erin Christie (Social Interaction, Organizational Communication) Marya L. Doerfel (Organizational Communication, Social Networks) R. Richard Dool J. Sophia Fu (Organizational Communication) Kathryn Greene (Health Communication) Alexa Hepburn (Health Communication) Brian Householder Vikki Katz (Immigrant and Family Communication) Jeffrey Lane (Urban Ethnography, Mediated Communication and Communities) Laurie Lewis (Organizational Communication) Nikolaos Linardopoulos (Communication Education, Public Speaking) Jenny Mandelbaum (Social Interaction, Conversational Analysis, Relationships, Identity, Interpersonal Communication) Matthew Matsaganis (Organizational Communication, Community-Based Research) Lisa Mikesell (Social Interaction, Health Communication) Katherine Ognyanova (Computational social science and Network Analysis) Jonathan Potter (Health Communication) Brent Ruben (Organizational Communication) Craig Scott (Organizational Communication, Communication Technologies, Anonymous Communication) Lea P. 
Stewart (Health Communication, Communication and Gender, Communication Ethics) Jennifer Theiss (Interpersonal Communication, Relationship Development) Itzhak Yanovitzky (Health Communication) Journalism and Media Studies Melissa Aronczyk (Promotional Culture, Political and Cultural Interpretations of Globalization) Neal Bennett Jack Bratich (Critical Cultural Studies, Social Political Theory, Popular Culture) Carol Cassidy Mary D'Ambrosio (Global Reporting; Humanizing our Media; Journalism Innovation) Lauren Feldman (Media and Politics, Political Communication, Intersection of Entertainment and Politics) Juan D. González (Media, Inequality, and Change Center) David Greenberg (U.S Political and Media History, Media and Politics) Amy Jordan (Health/Family Communication, Digital Inequality) Susan Keith (Evolution of Journalistic Practice, Media Law and Ethics, Visual Journalism) Rachel Kremen Chenjerai Kumanyika (Social justice, Critical Media and Information, Culture and Society) Deepa Kumar (Class, Gender, Race and Media, Middle East, War and Media, Social Movements) Dafna Lemish (Children, Youth, and Leisure Culture) Regina Marchi (Race, Class, Gender and Media, Social Movements & Media, Community-based Media, Latino Media and Pop Culture) Steven Miller (Undergraduate Studies, Internships) John V. Pavlik (New Media, Journalism and Society) Caitlin Petre (Organizational Change, Media Power) Khadijah White (Race, Gender, and Politics in the Media) Todd Wolfson (New Media and Social Movements, Cyber Ethnography, Poverty and Class Formation) Library and Information Science Warren Allen Marc Aronson (Literature for Young Readers, History of the Book, Fiction and Nonfiction) Nicholas Belkin (Information Science) Kaitlin L. Costello (Health Information Behavior, Social Media) Marija Dalbello (Social History of Knowledge, Documents, Collections) Michael Doyle Suchinthi Fernando (Social Media and Society Cluster) Goun Kim (Information Retrieval, Multimedia Computing) Sunyoung Kim (Health, Wellness, and Interaction, Collaborative Design and Society) E.E. Lawrence (Information Seeking, Information Ethics) Michael Lesk (Information Science) Lilia Pavlovsky (Information Science, Social Computing, Distance Learning) Marie L. Radford (Interpersonal/Small Group Communication) Rebecca Reynolds (Computer-supported Collaborative Learning, New Media, Information and Digital Literacies, Information Seeking) Charles Senteio (Healthcare Information) Chirag Shah (Data Science, Applied Research, Information Retrieval) Vivek Singh (Intersection of Big Data, Social Computing, and Multimodal Information Systems) Anselm Spoerri (Information Science) Gretchen Stahlman (Digital libraries, Human Information Behavior, Infrastructure) Ross J Todd (Information Science) Joyce Valenza (Children and Learning, Social and New Media) Nina Wacholder (Natural Language Processing, Information Access, Organizing Information, Information Systems) References External links SC&I homepage Centers, Labs and Clusters Journalism schools in the United States School Educational institutions established in 1982 1982 establishments in New Jersey
1723079
https://en.wikipedia.org/wiki/VueScan
VueScan
VueScan is a computer program for image scanning, especially of photographs, including negatives. It supports optical character recognition (OCR) of text documents. The software can be downloaded and used free of charge, but adds a watermark on scans until a license is purchased. Purpose VueScan is intended to work with a large number of image scanners (over 6500 in August 2021), excluding specialised professional scanners such as drum scanners, on many computer operating systems (OS), even if drivers for the scanner are not available for the OS. These scanners are supplied with device drivers and software to operate them, included in their price. A 2014 review considered that the reasons to purchase VueScan are to allow older scanners not supported by drivers for newer operating systems to be used in more up-to-date systems, and for better scanning and processing of photographs (prints; also slides and negatives when supported by scanners) than is afforded by manufacturers' software. The review did not report any advantages to Vuescan's processing of documents compared to other software. When compared to SilverFast, a similar program, the reviewer considered the two programs to be comparable, with support for some specific scanners better in one or the other. Vuescan supports more scanners, with a single purchase giving access to the full range of both film and flatbed scanners, and costs less. The Vuescan program can be used with its own drivers, or with drivers supplied by the scanner manufacturer, if supported by the operating system. Vuescan drivers can also be used without the Vuescan program by applications software that supports scanning directly, such as Adobe Photoshop, again enabling the use of scanners without current manufacturers' drivers. In 2019 when Apple released macOS Catalina, they removed support for running 32-bit programs, including 32-bit drivers for scanning equipment. In response, Hamrick released VueScan 9.7, effectively saving thousands of scanners from being rendered obsolete. Overview VueScan enables the user to modify and fine-tune the scanning parameters. The program uses its own independent method to interface with scanner hardware, and can support many older scanners under computer operating systems for which drivers are not available, allowing old scanners to be used with newer platforms that do not otherwise support them. VueScan works with more than 2,400 different supported scanners and digital cameras on Windows, 2,100 on Mac OS X and 1,900 on Linux. VueScan is supplied as one downloadable file for each operating system, which supports the full range of scanners. Without the purchase of a license the program runs in fully functional demonstration mode, identical to Professional mode, except that watermarks are superimposed on saved and printed images. Purchase of a license removes the watermark, with a standard license also providing updates for one year, and a professional license with some additional features. As distributed VueScan supports optical character recognition of English documents; 32 additional language packages are available on its Web site. In September 2011, VueScan co-developer Ed Hamrick said that he was selling US$3 million per year of VueScan licenses. 
See also Image Capture — alternative scanner software bundled free with Mac OS X Scanner Access Now Easy (SANE) — open-source scanner API for Unix, Windows, OS/2 References Further reading The VueScan Bible: Everything You Need to Know for Perfect Scanning; Sascha Steinhoff; 176 pages; 2011; . External links Image scanning Graphics software Photo software Windows graphics-related software MacOS graphics software Windows text-related software MacOS text-related software Shareware Optical character recognition software
294108
https://en.wikipedia.org/wiki/SEAL%20%28cipher%29
SEAL (cipher)
In cryptography, SEAL (Software-Optimized Encryption Algorithm) is a stream cipher optimised for machines with a 32-bit word size and plenty of RAM, with a reported performance of around 4 cycles per byte. SEAL is actually a pseudorandom function family, in that it can easily generate arbitrary portions of the keystream without having to start from the beginning. This makes it particularly well suited for applications like encrypting hard drives. The first version was published by Phillip Rogaway and Don Coppersmith in 1994. The current version, published in 1997, is 3.0. SEAL is covered by two patents in the United States, both of which are assigned to IBM. References "Software-efficient pseudorandom function and the use thereof for encryption" "Computer readable device implementing a software-efficient pseudorandom function encryption" Stream ciphers
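The random-access property described above, that any portion of the keystream can be generated directly from its position, is what makes a length-increasing pseudorandom function attractive as a seekable stream cipher. The following Python sketch is not SEAL itself (SEAL's tables and round function are specified in the Rogaway–Coppersmith papers); it only illustrates the seek-anywhere idea, using HMAC-SHA256 as a stand-in keyed pseudorandom function, with all names and parameters chosen here purely for illustration.

```python
import hmac
import hashlib

BLOCK = 32  # bytes of keystream per PRF call (SHA-256 digest size)

def keystream_block(key: bytes, index: int) -> bytes:
    """Derive keystream block `index` directly from the key (random access)."""
    return hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()

def xor_at(key: bytes, offset: int, data: bytes) -> bytes:
    """Encrypt/decrypt `data` as if it began at byte `offset` of the stream."""
    out = bytearray()
    for i, byte in enumerate(data):
        pos = offset + i
        block = keystream_block(key, pos // BLOCK)
        out.append(byte ^ block[pos % BLOCK])
    return bytes(out)

key = b"0123456789abcdef0123456789abcdef"
sector = xor_at(key, 4096 * 7, b"disk sector payload")          # encrypt sector 7 directly
assert xor_at(key, 4096 * 7, sector) == b"disk sector payload"  # XOR again to decrypt
```

Because each keystream block depends only on the key and the block index, sector 7 of a drive can be processed without first generating the keystream for sectors 0 through 6, which is the suitability for hard-drive encryption noted above.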
66749526
https://en.wikipedia.org/wiki/Emsisoft
Emsisoft
Emsisoft Ltd. (est. 2003) is a New Zealand-based anti-virus software company that operates as a fully distributed (remote) company. It is notable for producing decryption tools that help victims of ransomware attacks restore their data. Technology Emsisoft's anti-virus technology is called Emsisoft Anti-Malware. The three versions of Emsisoft Anti-Malware are called Anti-Malware Home, Business Security and Enterprise Security. Emsisoft technology is said to have outscored competitors Kaspersky Lab and Norton AntiVirus because, according to CEO Christian Mairoll, the virtual company can recruit the best people around the world. In 2016 Emsisoft discovered Ransom32, the first JavaScript ransomware. History Emsisoft was founded by Christian Mairoll as a virtual anti-virus company first based in Austria. Mairoll moved to rural New Zealand, where he manages Emsisoft. Emsisoft has no offices because its employees work remotely around the world while Mairoll manages them from his location in New Zealand. Controversy In early 2021 Emsisoft suffered a system breach. The breach was caused by a configuration error that exposed a database containing log records generated by Emsisoft products and services to unauthorized third parties. After detecting the attack, Emsisoft disconnected the compromised system, implemented additional security mechanisms, and investigated the incident using forensic analysis. Customers were notified of the breach and Emsisoft issued a public apology for the incident. References New Zealand companies established in 2003 Nelson, New Zealand Computer security companies
36704650
https://en.wikipedia.org/wiki/List%20of%20computing%20schools%20in%20Pakistan
List of computing schools in Pakistan
This is a list of computing schools in Pakistan, recognized by the National Computing Education Accreditation Council (NCEAC) - Higher Education Commission (Pakistan) (HEC). SE – software engineering CS – computer science IS – information system IT – information technology CE – computer engineering CISE – computer and information systems engineering Bio-informatics Azad Kashmir Mirpur Mirpur University of Science and Technology (MUST) - BS-SE Muzaffarabad University of Azad Jammu and Kashmir - BS-CS, BS-SE Rawalakot University of Poonch - BS-CS Balochistan Quetta Balochistan University of Information Technology, Engineering and Management Sciences - BS-CS, BS-IT Al-Hamd Islamic University BS-CS, BS-IT Capital Territory Islamabad Air University (Pakistan Air Force) - BS-CS Bahria University - BS-CS Center for Advanced Studies in Engineering (CASE) - BS-CS COMSATS Institute of Information Technology - BS-CS Federal Urdu University - BS-CS Foundation University, Islamabad - BS-SE International Islamic University, Islamabad - BS-CS, BS-SE Institute of Space Technology -BS-CS Iqra University - BS-CS Capital University of Science & Technology - BS-CS, BS-SE National University of Computer and Emerging Sciences (FAST) - BS-CS National University of Modern Languages (NUML) - BS-CS, BS-SE Quaid-i-Azam University Islamabad - BS-CS, BS-IT Pakistan Institute of Engineering and Applied Sciences (PIEAS) - BS-IS Preston University (Pakistan) - BS-CS Riphah International University - BS-SE National University of Sciences and Technology (NUST) -BS-CS Khyber Pakhtunkhwa Abbottabad COMSATS Institute of Information Technology - BS-CS, BS-SE, BS-TN Bannu University of Science and Technology (Bannu) - BS-CS, BS-SE Dera Ismail Khan Gomal University - BS-CS Qurtuba University - BS-CS Haripur University of Haripur - BS-CS Mardan Eurisko Institute of Science & Information Technology Mardan Peshawar Abasyn University - BS-CS, BS-SE City University of Science and Information Technology, Peshawar - BS-CS, BS-SE National University of Computer and Emerging Sciences (FAST) - BS-CS Sarhad University of Science and Information Technology - BS-CS Shaheed Benazir Bhutto Women University - BS-Bioinformatics University of Peshawar - BS-CS Iqra National University, Peshawar - BS-CS, BS-SE Swabi Ghulam Ishaq Khan Institute of Engineering Sciences and Technology - BS-CS University of Swabi - BS-BBA-MBA Punjab Attock COMSATS Institute of Information Technology - BS-CS Bahawalpur Islamia University, Bahawalpur - BS-CS, BS-IT, BS-SE Faisalabad Government College University, Faisalabad - BS-CS, BS-IT, BS-SE National Textile University - BS-CS National University of Computer and Emerging Sciences-FAST-BS-CS Gujranwala GIFT University - BS-CS Gujrat University of Gujrat - BS-CS, BS-IT, BS-SE, M.Sc-IT, M.Sc-CS, M.Phil-IT, M.Phil-CS, PhD University of Lahore, Gujrat Campus - BS-CS, BS-IT, M.Sc-IT, M.Sc-CS, M.Phil-IT, M.Phil-CS Lahore Lahore Garrison University, DHA Phase VI, Sector C, Avenue 4th Main Campus - BS-DF, BS-IT, BS-SE, BS-CS, MS-CS, MCS Bahria University, Lahore Campus - BS-IT, BS-CS Superior University, Lahore - BS-CS, BS-IT, BS-SE, BS-CE Beaconhouse National University - BS-SE University of South-Asia, Lahore - BS-IT, BS-CS, BS-IT, BS-SE COMSATS Institute of Information Technology, Lahore Campus - BS-CS Government College University, Lahore - BS-CS Lahore College for Women University - BS-CS National University of Computer and Emerging Sciences (FAST) - BS-CS Punjab University College of Information Technology (PUCIT) - 
BS-CS, BS-IT, BS-SE University of Central Punjab - BS-CS University of Education - BS-IT University of Sargodha, Lahore Campus - BS-IT University of Lahore - BS-CS University of Engineering and Technology, Lahore - BS-CS University of Management and Technology, Lahore - BS-CS, BS-SE Forman Christian College - BS-IT, BS-CS, BS-IT, BS-SE Mianwali Namal College - BS-CS Multan Bahauddin Zakariya University - BS-CS, BS-IT University of Education - BS-IT Air University - BS-CS Institute of Southern Punjab - BS-CS, BS-IT Rawalpindi Army Public College of Management Sciences (APCOMS) - BS-SE COMSATS Institute of Information Technology (Wah Cantonment Campus) - BS-CS National University of Sciences and Technology (Pakistan) (NUST) - BS-IT HITEC University, Taxila (HITEC) - BS-CS, BS-CE Pir Mehr Ali Shah Arid Agriculture University - BS-CS University of Engineering and Technology, Taxila - BS-SE University of Wah - BS-CS MASIA Institute Galaxy Institute of Technology and Languages - Sahiwal COMSATS Institute of Information Technology - BS-CS Sargodha University of Sargodha - BS-CS, BS-IT, BS-SE Sindh Hyderabad Isra University - BS-CS, BS-SE SZABIST - BS-CS HIAST Affiliated with Mehran University of Engineering & Technology - BS-IT, MS-BIT Tandojam Sindh Agriculture University - BS-IT, MS-IT, MS-SE Jamshoro University of Sindh - BS-CS, BS-SE, BS-IT, M.Phil - CS, M.Phil - SE, M.Phil - IT Mehran University of Engineering & Technology - BE-SE, BE-CS, ME-SE, ME-CIE, ME-IT Karachi Aligarh Institute of Technology - BS-CS Bahria University - BS-CS, BS-IT DHA Suffa University - BS-CS Habib University - BS-CS Hamdard University - BS-CS Indus University - BS-CS, BS-SE, BS-IT Institute of Business Administration, Karachi - BS-CS Institute of Business & Technology, Karachi (BizTek) - BS-CS, BS-SE, BS-IT Institute of Business Management - BS-CS Iqra University - BS-CS Jinnah University for Women - BS-CS, BS-SE, BS-IT Karachi Institute of Economics and Technology - BS-CS Muhammad Ali Jinnah University (MAJU) - BS-CS National University of Computer and Emerging Sciences (FAST) - BS-CS NED University of Engineering and Technology - BE-CISE, BS-CSIT, BE-SE Pakistan Navy Engineering College - BS-IS Shaheed Zulfiqar Ali Bhutto Institute of Science and Technology (SZABIST) - BS-CS Sindh Madrasatul Islam University (SMI) - BS-CS Sir Syed University of Engineering and Technology (SSUET) - BS-CS, BS-CE University of Karachi (KU), UBIT - BS-CS, BS-SE Usman Institute of Technology - BS-CS Sukkur Sukkur Institute of Business Administration - BS-CS, BS-SE See also Education in Pakistan List of schools in Pakistan References External links List at www.nceac.org Computing schools in Pakistan
259163
https://en.wikipedia.org/wiki/Kenneth%20E.%20Iverson
Kenneth E. Iverson
Kenneth Eugene Iverson (17 December 1920 – 19 October 2004) was a Canadian computer scientist noted for the development of the programming language APL. He was honored with the Turing Award in 1979 "for his pioneering effort in programming languages and mathematical notation resulting in what the computing field now knows as APL; for his contributions to the implementation of interactive systems, to educational uses of APL, and to programming language theory and practice". Life Ken Iverson was born on 17 December 1920 near Camrose, a town in central Alberta, Canada. His parents were farmers who came to Alberta from North Dakota; his ancestors came from Trondheim, Norway. During World War II, he served first in the Canadian Army and then in the Royal Canadian Air Force. He received a B.A. degree from Queen's University and the M.Sc. and Ph.D. degrees from Harvard University. In his career, he worked for Harvard, IBM, I. P. Sharp Associates, and Jsoftware Inc. (née Iverson Software Inc.). Iverson suffered a stroke while working at the computer on a new J lab on 16 October 2004, and died on 19 October 2004 at the age of 83. Education Iverson began school on 1 April 1926 in a one-room school, initially in Grade 1, promoted to Grade 2 after 3 months and to Grade 4 by the end of June 1927. He left school after Grade 9 because it was the depths of the Great Depression and there was work to do on the family farm, and because he thought further schooling only led to becoming a schoolteacher and he had no desire to become one. At age 17, while still out of school, he enrolled in a correspondence course on radios with De Forest Training in Chicago, and learned calculus by self-study from a textbook. During World War II, while serving in the Royal Canadian Air Force, he took correspondence courses toward a high school diploma. After the war, Iverson enrolled in Queen's University in Kingston, Ontario, taking advantage of government support for ex-servicemen and under threat from an Air Force buddy who said he would "beat his brains out if he did not grasp the opportunity". He graduated in 1950 as the top student with a Bachelor's degree in mathematics and physics. Continuing his education at Harvard University, he began in the Department of Mathematics and received a Master's degree in 1951. He then switched to the Department of Engineering and Applied Physics, working with Howard Aiken and Wassily Leontief. Howard Aiken had developed the Harvard Mark I, one of the first large-scale digital computers, while Wassily Leontief was an economist who was developing the input–output model of economic analysis, work for which he would later receive the Nobel prize. Leontief's model required large matrices and Iverson worked on programs that could evaluate these matrices on the Harvard Mark IV computer. Iverson received a Ph.D. in Applied Mathematics in 1954 with a dissertation based on this work. At Harvard, Iverson met Eoin Whitney, a 2-time Putnam Fellow and fellow graduate student from Alberta. This had future ramifications. Work Harvard (1955–1960) Iverson stayed on at Harvard as an assistant professor to implement the world's first graduate program in "automatic data processing". It was in this period that Iverson developed notation for describing and analyzing various topics in data processing, for teaching classes, and for writing (with Brooks) Automatic Data Processing. 
He was "appalled" to find that conventional mathematical notation failed to fill his needs, and began work on extensions to the notation that were more suitable. In particular, he adopted the matrix algebra used in his thesis work, the systematic use of matrices and higher-dimensional arrays in tensor analysis, and operators in the sense of Heaviside in his treatment of Maxwell's equations, higher-order functions on function argument(s) with a function result. The notation was also field-tested in the business world in 1957 during a 6-month sabbatical spent at McKinsey & Company. The first published paper using the notation was The Description of Finite Sequential Processes, initially Report Number 23 to Bell Labs and later revised and presented at the Fourth London Symposium on Information Theory in August 1960. Iverson stayed at Harvard for five years but failed to get tenure, because "[he hadn't] published anything but the one little book". IBM (1960–1980) Iverson joined IBM Research in 1960 (and doubled his salary). He was preceded to IBM by Fred Brooks, who advised him to "stick to whatever [he] really wanted to do, because management was so starved for ideas that anything not clearly crazy would find support." In particular, he was allowed to finish and publish A Programming Language and (with Brooks) Automatic Data Processing, two books that described and used the notation developed at Harvard. (Automatic Data Processing and A Programming Language began as one book "but the material grew in both magnitude and level until a separation proved wise".) At IBM, Iverson soon met Adin Falkoff, and they worked together for the next twenty years. Chapter 2 of A Programming Language used Iverson's notation to describe the IBM 7090 computer. In early 1963 Falkoff, later joined by Iverson and Ed Sussenguth, proceeded to use the notation to produce a formal description of the IBM System/360 computer then under design. The result was published in 1964 in a double issue of the IBM Systems Journal, thereafter known as the "grey book" or "grey manual". The book was used in a course on computer systems design at the IBM Systems Research Institute. A consequence of the formal description was that it attracted the interest of bright young minds. One hotbed of interest was at Stanford University which included Larry Breed, Phil Abrams, Roger Moore, Charles Brenner, and Mike Jenkins, all of whom later made contributions to APL. Donald McIntyre, head of geology at Pomona College which had the first general customer installation of a 360 system, used the formal description to become more expert than the IBM systems engineer assigned to Pomona. With the completion of the formal description Falkoff and Iverson turned their attention to implementation. This work was brought to rapid fruition in 1965 when Larry Breed and Phil Abrams joined the project. They produced a FORTRAN-based implementation on the 7090 called IVSYS (for Iverson system) by autumn 1965, first in batch mode and later, in early 1966, in time-shared interactive mode. Subsequently, Breed, Dick Lathwell (ex University of Alberta), and Roger Moore (of I. P. Sharp Associates) produced the System/360 implementation; the three received the Grace Murray Hopper Award in 1973 "for their work in the design and implementation of APL\360, setting new standards in simplicity, efficiency, reliability and response time for interactive systems." While the 360 implementation work was underway "Iverson notation" was renamed "APL", by Falkoff. 
The workspace "1 cleanspace" was saved at 1966-11-27 22.53.58 UTC. APL\360 service began within IBM several weeks before that and outside IBM in 1968. Additional information on the implementation of APL\360 can be found in the Acknowledgements of the APL\360 User's Manual and in "Appendix. Chronology of APL development" of The Design of APL. The formal description and especially the implementation drove the evolution of the language, a process of consolidation and regularization in typography, linearization, syntax, and function definition described in APL\360 History, The Design of APL, and The Evolution of APL. Two treatises from this period, Conventions Governing the Order of Evaluation and Algebra as a Language, are apologias of APL notation. The notation was used by Falkoff and Iverson to teach various topics at various universities and at the IBM Systems Research Institute. In 1964 Iverson used the notation in a one-semester course for seniors at the Fox Lane High School, and later in Swarthmore High School. After APL became available its first application was to teach formal methods in systems design at NASA Goddard. It was also used at the Hotchkiss School, Lower Canada College, Scotch Plains High School, Atlanta public schools, among others. In one school the students became so eager that they broke into the school after hours to get more APL computer time; in another the APL enthusiasts steered newbies to BASIC so as to maximize their own APL time. In 1969, Iverson and the APL group inaugurated the IBM Philadelphia Scientific Center. In 1970 he was named IBM Fellow. He used the funding that came with being an IBM Fellow to bring in visiting teachers and professors from various fields, including Donald McIntyre from Pomona and Jeff Shallit as a summer student. For a period of several months the visitors would start using APL for expositions in their own fields, and the hope was that later they would continue their use of APL at their home institutions. Iverson's work at this time centered in several disciplines, including collaborative projects in circuit theory, genetics, geology, and calculus. When the PSC closed in 1974, some of the group transferred to California while others including Iverson remained in the East, later transferring back to IBM Research. Iverson received the Turing Award in 1979. The following table lists the publications which Iverson authored or co-authored while he was at IBM. They reflect the two main strands of his work. Education Automatic Data Processing Elementary Functions: An Algorithmic Treatment The Use of APL in Teaching Using the Computer to Compute Algebra: An Algorithmic Treatment APL in Exposition An Introduction to APL for Scientists and Engineers Introducing APL to Teachers Elementary Analysis Programming Style in APL Language design & implementation A Programming Language A Programming Language A Common Language for Hardware, Software, and Applications Programming Notation in System Design Formalism in Programming Languages A Method of Syntax Specification A Formal Description of System/360 APL\360 User's Manual Communication in APL Systems The Design of APL APL as an Analytic Notation APLSV User's Manual APL Language Two Combinatoric Operators The Evolution of APL Operators and Functions The Role of Operators in APL The Derivative Operator Operators Notation as a Tool of Thought I. P. Sharp Associates (1980–1987) In 1980, Iverson left IBM for I. P. Sharp Associates, an APL time-sharing company. 
He was preceded there by his IBM colleagues Paul Berry, Joey Tuttle, Dick Lathwell, and Eugene McDonnell. At IPSA, the APL language and systems group was managed by Eric Iverson (Ken Iverson's son); Roger Moore, one of the APL\360 implementers, was a vice president. Iverson worked to develop and extend APL on the lines presented in Operators and Functions. The language work gained impetus in 1981 when Arthur Whitney and Iverson produced a model of APL written in APL at the same time they were working on IPSA's OAG database. (Iverson introduced Arthur Whitney, son of Eoin Whitney, to APL when he was 11-years-old and in 1974 recommended him for a summer student position at IPSA Calgary.) In the model, the APL syntax was driven by an 11-by-5 table. Whitney also invented the rank operator in the process. The language design was further simplified and extended in Rationalized APL in January 1983, multiple editions of A Dictionary of the APL Language between 1984 and 1987, and A Dictionary of APL in September 1987. Within IPSA, the phrase "dictionary APL" came into use to denote the APL specified by A Dictionary of APL, itself referred to as "the dictionary". In the dictionary, APL syntax is controlled by a 9-by-6 table and the parsing process was precisely and succinctly described in Table 2, and there is a primitive (monadic ⊥, modeled in APL) for word formation (lexing). In the 1970s and 1980s, the main APL vendors were IBM, STSC, and IPSA, and all three were active in developing and extending the language. IBM had APL2, based on the work of Jim Brown. Work on APL2 proceeded intermittently for 15 years, with actual coding starting in 1971 and APL2 becoming available as an IUP (Installed User Program, an IBM product classification) in 1982. STSC had an experimental APL system called NARS, designed and implemented by Bob Smith. NARS and APL2 differed in fundamental respects from dictionary APL, and differed from each other. I.P. Sharp implemented the new APL ideas in stages: complex numbers, enclosed (boxed) arrays, match, and composition operators in 1981, the determinant operator in 1982, and the rank operator, link, and the left and right identity functions in 1983. However, the domains of operators were still restricted to the primitive functions or subsets thereof. In 1986, IPSA developed SAX, SHARP APL/Unix, written in C and based on an implementation by STSC. The language was as specified in the dictionary with no restrictions on the domains of operators. An alpha version of SAX became available within I.P. Sharp around December 1986 or early 1987. In education, Iverson developed A SHARP APL Minicourse used to teach IPSA clients in the use of APL, and Applied Mathematics for Programmers and Mathematics and Programming which were used in computer science courses at T.H. Twente. Publications which Iverson authored or co-authored while he was at I. P. Sharp Associates: Education The Inductive Method of Introducing APL A SHARP APL Minicourse Applied Mathematics for Programmers Mathematics and Programming Language design & implementation Operators and Enclosed Arrays Direct Definition Composition and Enclosure A Function Definition Operator Determinant-Like Functions Produced by the Dot-Operator Practical Uses of a Model of APL Rationalized APL APL Syntax and Semantics Language Extensions of May 1983 An Operator Calculus APL87 A Dictionary of APL Processing Natural Language: Syntactic and Semantic Mechanisms Jsoftware (1990–2004) Iverson retired from I. P. Sharp Associates in 1987. 
He kept busy while "between jobs". Regarding language design, the most significant of his activities in this period was the invention of "fork" in 1988. For years, he had struggled to find a way to write f+g as in calculus, from the "scalar operators" in 1978, through the "til" operator in 1982, the catenation and reshape operators in 1984, the union and intersection operators in 1987, "yoke" in 1988, and finally forks in 1988. Forks are defined as follows: monadically, (f g h) y ←→ (f y) g (h y), and dyadically, x (f g h) y ←→ (x f y) g (x h y). Moreover, (f g p q r) ←→ (f g (p q r)). Thus to write f+g as in calculus, one can write f+g in APL. Iverson and Eugene McDonnell worked out the details on the long plane rides to the APL88 conference in Sydney, Australia, with Iverson coming up with the initial idea on waking up from a nap. Iverson presented the rationale for his work post 1987 as follows: Roger Hui described the final impetus that got J started in Appendix A of An Implementation of J: Hui, a classmate of Whitney at the University of Alberta, had studied A Dictionary of the APL Language when he was between jobs, modelled the parsing process in at least two different ways, and investigated uses of dictionary APL in diverse applications. As well, from January 1987 to August 1989 he had access to SAX, and in the later part of that period used it on a daily basis. J initially took A Dictionary of APL as the specification, and the J interpreter was built around Table 2 of the dictionary. The C data and program structures were designed so that the parse table in C corresponded directly to the parse table in the dictionary. In retrospect, Iverson's APL87 paper APL87, in five pages, prescribed all the essential steps in writing an APL interpreter, in particular the sections on word formation and parsing. Arthur Whitney, in addition to the "one-page thing", contributed to J development by suggesting that primitives be oriented on the leading axis, that agreement (a generalization of scalar extension) should be prefix instead of suffix, and that a total array ordering be defined. One of the objectives was to implement fork. This turned out to be rather straightforward, by the inclusion of one additional row in the parse table. The choice to implement forks was fortuitous and fortunate. It was realized only later that forks made tacit expressions (operator expressions) complete in the following sense: any sentence involving one or two arguments that does not use its arguments as an operand can be written tacitly with fork, compose, the left and right identity functions, and constant functions. Two obvious differences between J and other APL dialects are: (a) its use of terms from natural languages instead of from mathematics or computer science (the practice began with A Dictionary of APL): noun, verb, adverb, alphabet, word formation, sentence, ... instead of array, function, operator, character set, lexing, expression, ... ; and (b) its use of 7-bit ASCII characters instead of special symbols. Other differences between J and APL are described in J for the APL Programmer and APL and J. The J source code is available from Jsoftware under the GNU General Public License version 3 (GPL3), or a commercial alternative. Eric Iverson founded Iverson Software Inc. in February 1990 to provide an improved SHARP APL/PC product. It quickly became obvious that there were shared interests and goals, and in May 1990 Iverson and Hui joined Iverson Software Inc.; they were later joined by Chris Burke. The company soon became J only. The name was changed to Jsoftware Inc. in April 2000. 
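A fork, as defined above, packages three functions into one: monadically, (f g h) applied to y yields (f y) g (h y). The snippet below is a small Python sketch of that combinator for illustration only, not J's implementation; the classic J fork is the phrase +/ % # (sum divided by count, i.e. the mean), which corresponds to fork(sum, truediv, len) here.

```python
from operator import truediv

def fork(f, g, h):
    """(f g h) y -> g(f(y), h(y)); with two arguments, x (f g h) y -> g(f(x, y), h(x, y))."""
    def train(*args):
        return g(f(*args), h(*args))
    return train

# The J phrase  mean =: +/ % #  ("sum divided by count") as a single tacit function.
mean = fork(sum, truediv, len)
print(mean([3, 1, 4, 1, 5]))  # 2.8
```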
Publications which Iverson authored or co-authored while he was at Iverson Software Inc. and Jsoftware Inc.: Education Tangible Math Programming in J Arithmetic Calculus Concrete Math Companion Exploring Math J Phrases ICFP '98 Contest Winners Math for the Layman Language design & implementation A Commentary on APL Development Phrasal Forms APL/? Tacit Definition A Personal View of APL J Introduction and Dictionary Revisiting Rough Spots Computers and Mathematical Notation Mathematical Roots of J APL in the New Millennium Awards and honors IBM Fellow, IBM, 1970 Harry H. Goode Memorial Award, IEEE Computer Society, 1975 Member, National Academy of Engineering (USA), 1979 Turing Award, Association for Computing Machinery, 1979 Computer Pioneer Award (Charter recipient), IEEE Computer Society, 1982 Honorary doctorate, York University, 1998 See also Iverson Award Iverson bracket Floor and ceiling functions List of pioneers in computer science References External links A Celebration of the Life of Kenneth Eugene Iverson Collected Eulogies Iverson Exam, a programming competition at the University of Alberta for high school students Ken Iverson Quotations and Anecdotes, illustrations of what Iverson was like as a person, what he was like to work with, the milieu in which he studied and worked, his outlook on life, his sense of humor, etc. APL Quotations and Anecdotes, sketches of Iverson, his colleagues, and his intellectual descendants 1920 births 2004 deaths Canadian Army personnel Canadian computer scientists Canadian people of Norwegian descent Harvard School of Engineering and Applied Sciences alumni Harvard University faculty IBM employees IBM Fellows IBM Research computer scientists I. P. Sharp Associates employees McKinsey & Company people Members of the United States National Academy of Engineering People from Camrose, Alberta Programming language designers Queen's University at Kingston alumni Royal Canadian Air Force personnel Turing Award laureates Canadian expatriates in the United States
68311893
https://en.wikipedia.org/wiki/1969%E2%80%9370%20USC%20Trojans%20men%27s%20basketball%20team
1969–70 USC Trojans men's basketball team
The 1969–70 USC Trojans men's basketball team represented the University of Southern California during the 1969–70 NCAA University Division men's basketball season. Roster Schedule References USC Trojans men's basketball seasons
22464607
https://en.wikipedia.org/wiki/Mobile%20content%20management%20system
Mobile content management system
A mobile content management system (MCMS) is a type of content management system (CMS) capable of storing and delivering content and services to mobile devices, such as mobile phones, smart phones, and PDAs. Mobile content management systems may be discrete systems, or may exist as features, modules or add-ons of larger content management systems capable of multi-channel content delivery. Mobile content delivery has unique, specific constraints including widely variable device capacities, small screen size, limitations on wireless bandwidth, sometimes small storage capacity, and (for some devices) comparatively weak device processors. Demand for mobile content management increased as mobile devices became increasingly ubiquitous and sophisticated. MCMS technology initially focused on the business-to-consumer (B2C) mobile marketplace with ringtones, games, text-messaging, news, and other related content. Since then, mobile content management systems have also taken root in business-to-business (B2B) and business-to-employee (B2E) situations, allowing companies to provide more timely information and functionality to business partners and mobile workforces in an increasingly efficient manner. A 2008 estimate put global revenue for mobile content management at US$8 billion. Key features Multi-channel content delivery Multi-channel content delivery capabilities allow users to manage a central content repository while simultaneously delivering that content to mobile phones, smartphones, tablets and other mobile devices. Content can be stored in a raw format (such as Microsoft Word, Excel, PowerPoint, PDF, Text, HTML etc.) to which device-specific presentation styles can be applied. Content access control Access control includes authorization, authentication, and access approval for each content item. In many cases access control also includes download control, wipe-out for specific users, and time-specific access. For authentication, an MCMS typically provides basic authentication with a user ID and password. For higher security, many MCMSs also support IP authentication and mobile device authentication. Specialized templating system While traditional web content management systems handle templates for only a handful of web browsers, mobile CMS templates must be adapted to the very wide range of target devices with different capacities and limitations. There are two approaches to adapting templates: multi-client and multi-site. The multi-client approach makes it possible to see all versions of a site at the same domain (e.g. sitename.com), and templates are presented based on the device client used for viewing. The multi-site approach displays the mobile site on a targeted sub-domain (e.g. mobile.sitename.com). Location-based content delivery Location-based content delivery provides targeted content, such as information, advertisements, maps, directions, and news, to mobile devices based on current physical location. Currently, GPS (global positioning system) navigation systems offer the most popular location-based services. Navigation systems are specialized systems, but incorporating mobile phone functionality makes greater exploitation of location-aware content delivery possible. See also Mobile Web Content management Web content management system Enterprise content management Apache Mobile Filter References Content management systems Mobile web Data management
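The multi-client and multi-site templating approaches described above differ mainly in what drives template selection: the requesting device (typically detected from the User-Agent header) versus the host name the request arrived on (a mobile sub-domain). A minimal Python sketch of the two selection rules follows; the template names and the deliberately crude device check are placeholders for illustration, not the behaviour of any particular MCMS product.

```python
def pick_template_multi_client(user_agent: str) -> str:
    """Multi-client: one domain; the template follows the requesting device."""
    ua = user_agent.lower()
    if any(token in ua for token in ("iphone", "android", "mobile")):
        return "article_mobile.html"   # hypothetical template names
    return "article_desktop.html"

def pick_template_multi_site(host: str) -> str:
    """Multi-site: the template follows the sub-domain that was requested."""
    return "article_mobile.html" if host.startswith("mobile.") else "article_desktop.html"

# e.g. a phone requesting sitename.com vs. any client requesting mobile.sitename.com
print(pick_template_multi_client("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)"))
print(pick_template_multi_site("mobile.sitename.com"))
```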
906722
https://en.wikipedia.org/wiki/Bare-metal%20restore
Bare-metal restore
Bare-metal restore is a technique in the field of data recovery and restoration where the backed up data is available in a form that allows one to restore a computer system from "bare metal", i.e. without any requirements as to previously installed software or operating system. Typically, the backed up data includes the necessary operating system, applications and data components to rebuild or restore the backed up system to an entirely separate piece of hardware. In some configurations, the hardware receiving the restore needs to have an identical configuration to the hardware that was the source of the backup, although virtualization techniques and careful planning can enable a bare-metal restore to a hardware configuration different from the original. Disk imaging applications enable bare-metal restores by storing copies (images) of the entire contents of hard disks to networked or other external storage, and then writing those images to other physical disks. The disk imaging application itself can include an entire operating system, bootable from a live CD or network file server, which contains all the required application code to create and restore the disk images. Examples of software used for bare-metal recovery The dd utility on a Linux boot CD can be used to copy file systems between disk images and disk partitions to effect a bare-metal backup and recovery. These disk images can then be used as input to a new partition of the same type and of equal or larger size, or alternatively by a variety of virtualization technologies, as they often represent a more accessible but less efficient representation of the data on the original partition. The IBM VM/370 operating system provides a command named "ddr" (disk dump and restore), which performs a bit-by-bit backup of a hard drive to a specified medium, typically tape, although many choices exist. Microsoft introduced a new backup utility (Wbadmin) into the Windows Server 2008 family of operating systems in 2008, with built-in support for bare-metal recovery. Users of this software can also recover their system to a Hyper-V virtual machine. Microsoft updated the Windows Recovery Environment in the Windows 8 family of operating systems to provide built-in support for bare-metal recovery. Microsoft Windows Server 2012 (R2) offers built-in bare-metal recovery. Comparison with other data backup and restoration techniques Bare-metal restore differs from local disk image restore, in which a copy of the disk image and the restoration software are stored on the computer that is backed up. Bare-metal restore differs from simple data backups, in which application data, but neither the applications nor the operating system, is backed up or restored as a unit. See also Comparison of disk cloning software References Backup Backup software
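Conceptually, the dd-style imaging described above is a fixed-size block copy from a block device to an image file (backup) or from an image file back to a device (restore). The Python sketch below shows that copy loop under the assumption that it runs from rescue media with the source filesystem unmounted; the device and image paths are placeholders, and in practice dd or a dedicated imaging tool would normally be used instead.

```python
import shutil

CHUNK = 4 * 1024 * 1024  # copy in 4 MiB blocks, comparable to dd's bs=4M

def image_copy(src_path: str, dst_path: str) -> None:
    """Bit-for-bit copy: block device -> image file, or image file -> device."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=CHUNK)

# Backup:  image_copy("/dev/sda1", "/mnt/backup/sda1.img")   # paths are placeholders
# Restore: image_copy("/mnt/backup/sda1.img", "/dev/sda1")
```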
https://en.wikipedia.org/wiki/Time%20management
Time management
Time management is the process of planning and exercising conscious control of time spent on specific activities, especially to increase effectiveness, efficiency, and productivity. It involves balancing the various demands upon a person relating to work, social life, family, hobbies, personal interests, and commitments with the finite nature of time. Using time effectively gives a person a choice in how to spend and manage activities at their own convenience. Time management may be aided by a range of skills, tools, and techniques used to manage time when accomplishing specific tasks, projects, and goals that have a due date. Initially, time management referred to just business or work activities, but eventually the term broadened to include personal activities as well. A time management system is a designed combination of processes, tools, techniques, and methods. Time management is usually a necessity in any project management, as it determines the project completion time and scope. The major themes arising from the literature on time management include the following:
Creating an environment conducive to effectiveness (in terms of cost-benefit, quality of results, and time to complete tasks or projects)
Setting of priorities
The related process of reducing time spent on non-priorities
Implementation of goals
Cultural views of time management
Differences in the way a culture views time can affect the way its time is managed. For example, a linear time view is a way of conceiving time as flowing from one moment to the next in a linear fashion. This linear perception of time is predominant in America along with most Northern European countries, such as Germany, Switzerland, and England. People in these cultures tend to place a high value on productive time management and tend to avoid decisions or actions that would result in wasted time. This linear view of time correlates with these cultures being more "monochronic", or preferring to do only one thing at a time. Generally speaking, this cultural view leads to a better focus on accomplishing a singular task and hence more productive time management.
Another cultural time view is the multi-active time view. In multi-active cultures, most people feel that the more activities or tasks being done at once, the better; this creates a sense of happiness. Multi-active cultures are "polychronic", or prefer to do multiple tasks at once. This multi-active time view is prominent in most Southern European countries such as Spain, Portugal, and Italy. In these cultures, people tend to spend time on things they deem more important, such as finishing social conversations. In business environments, they often pay little attention to how long meetings last; rather, the focus is on having high-quality meetings. In general, the cultural focus tends to be on synergy and creativity over efficiency.
A final cultural time view is the cyclical time view. In cyclical cultures, time is considered neither linear nor event-related. Because days, months, years, seasons, and events happen in regular repetitive occurrences, time is viewed as cyclical. In this view, time is not seen as wasted because it will always come back later; hence there is an unlimited amount of it. This cyclical time view is prevalent throughout most countries in Asia, including Japan and China.
In cultures with cyclical concepts of time, completing tasks correctly is more important; therefore, most people will spend more time thinking about decisions and the impact they will have before acting on their plans. Most people in cyclical cultures tend to understand that other cultures have different perspectives of time and are cognizant of this when acting on a global stage.
Creating an effective environment
Some time-management literature stresses tasks related to the creation of an environment conducive to "real" effectiveness. These strategies include principles such as: "get organized" – the triage of paperwork and of tasks; "protecting one's time" by insulation, isolation, and delegation; "achievement through goal-management and through goal-focus" – motivational emphasis; and "recovering from bad time-habits" – recovery from underlying psychological problems, e.g. procrastination. The timing of tackling tasks is also important, as tasks requiring high levels of concentration and mental energy are often done at the beginning of the day, when a person is more refreshed. Literature also focuses on overcoming chronic psychological issues such as procrastination. Excessive and chronic inability to manage time effectively may result from attention deficit hyperactivity disorder (ADHD) or attention deficit disorder (ADD). Diagnostic criteria include a sense of underachievement, difficulty getting organized, trouble getting started, trouble managing many simultaneous projects, and trouble with follow-through. Daniel Amen focuses on the prefrontal cortex, which is the most recently evolved part of the brain. It manages the functions of attention span, impulse management, organization, learning from experience, and self-monitoring, among others. Some authors argue that changing the way the prefrontal cortex works is possible and offer a solution.
Setting priorities and goals
Time management strategies are often associated with the recommendation to set personal goals. The literature stresses themes such as "work in priority order" – set goals and prioritize – and "set gravitational goals" – goals that attract actions automatically. These goals are recorded and may be broken down into a project, an action plan, or a simple task list. For individual tasks or for goals, an importance rating may be established, deadlines may be set, and priorities assigned. This process results in a plan with a task list, schedule, or calendar of activities. Authors may recommend a daily, weekly, monthly, or other planning period, associated with different scopes of planning or review. This is done in various ways, as follows.
ABCD analysis
A technique that has been used in business management for a long time is the categorization of large amounts of data into groups. These groups are often marked A, B, C and D, hence the name. Activities are ranked by these general criteria: A – tasks that are perceived as being urgent and important; B – tasks that are important but not urgent; C – tasks that are unimportant but urgent; D – tasks that are unimportant and not urgent. Each group is then rank-ordered by priority. To further refine the prioritization, some individuals choose to then force-rank all "B" items as either "A" or "C". ABC analysis can incorporate more than three groups. ABC analysis is frequently combined with Pareto analysis.
Pareto analysis
The Pareto principle is the idea that 80% of consequences come from 20% of causes. Applied to productivity, it means that 80% of results can be achieved by doing 20% of tasks.
If productivity is the aim of time management, then these tasks should be prioritized higher.
The Eisenhower Method
The "Eisenhower Method" or "Eisenhower Principle" is a method that utilizes the principles of importance and urgency to organize priorities and workload. This method stems from a quote attributed to Dwight D. Eisenhower: "I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent." Eisenhower did not claim this insight as his own, but attributed it to an (unnamed) "former college president." Using the Eisenhower Decision Principle, tasks are evaluated using the criteria important/unimportant and urgent/not urgent, and then placed in the corresponding quadrants of an Eisenhower Matrix (also known as an "Eisenhower Box" or "Eisenhower Decision Matrix"). Tasks in the quadrants are then handled as follows. Important/Urgent quadrant tasks are done immediately and personally, e.g. crises, deadlines, problems. Important/Not Urgent quadrant tasks get an end date and are done personally, e.g. relationships, planning, recreation. Unimportant/Urgent quadrant tasks are delegated, e.g. interruptions, meetings, activities. Unimportant/Not Urgent quadrant tasks are dropped, e.g. time wasters, pleasant activities, trivia.
POSEC method
POSEC is an acronym for "Prioritize by Organizing, Streamlining, Economizing and Contributing". The method dictates a template which emphasizes an average individual's immediate sense of emotional and monetary security. It suggests that by attending to one's personal responsibilities first, an individual is better positioned to shoulder collective responsibilities. Inherent in the acronym is a hierarchy of self-realization, which mirrors Abraham Maslow's hierarchy of needs. Prioritize your time and define your life by goals. Organize things you have to accomplish regularly to be successful (family and finances). Streamline things you may not like to do, but must do (work and chores). Economize things you should do or may even like to do, but that are not pressingly urgent (pastimes and socializing). Contribute by paying attention to the few remaining things that make a difference (social obligations).
Elimination of non-priorities
Time management also covers how to eliminate tasks that do not provide value to the individual or organization. According to Wall Street Journal contributor Jared Sandberg, task lists "aren't the key to productivity [that] they're cracked up to be". He reports an estimated "30% of listers spend more time managing their lists than [they do] completing what's on them". The software executive Elisabeth Hendrickson asserts that rigid adherence to task lists can create a "tyranny of the to-do list" that forces one to "waste time on unimportant activities". Any form of stress is considered debilitative for learning and life; even if adaptability could be acquired, its effects are damaging. But stress is an unavoidable part of daily life, and Reinhold Niebuhr suggests that it is better to face it, as in having "the serenity to accept the things one cannot change and having the courage to change the things one can." Part of setting priorities and goals is dealing with the emotion of "worry," whose function is to ignore the present in order to fixate on a future that never arrives, leading to a fruitless expenditure of one's time and energy. It is an unnecessary cost, or a false aspect, that can interfere with plans due to human factors.
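To make the quadrant logic of the Eisenhower Method described above concrete, here is a minimal sketch that classifies tasks by the important/urgent criteria and maps each quadrant to the handling the article describes. The Task structure and the action labels are illustrative only, not a standard API.

```python
# Minimal sketch of Eisenhower-style task classification.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    important: bool
    urgent: bool

def eisenhower_action(task: Task) -> str:
    """Return the handling for the quadrant the task falls into."""
    if task.important and task.urgent:
        return "do immediately and personally"
    if task.important and not task.urgent:
        return "set an end date, do personally"
    if not task.important and task.urgent:
        return "delegate"
    return "drop"

tasks = [
    Task("Server outage", important=True, urgent=True),
    Task("Quarterly planning", important=True, urgent=False),
    Task("Routine status meeting", important=False, urgent=True),
    Task("Sorting old email", important=False, urgent=False),
]
for t in tasks:
    print(f"{t.name}: {eisenhower_action(t)}")
```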
The Eisenhower Method is a strategy used to combat worry and dull but imperative tasks. Worry as stress is a reaction to a set of environmental factors; understanding that these are not part of the person gives the person possibilities for managing them. Athletes under a coach call this management "putting on the game face." Change is hard, and daily life patterns are the most deeply ingrained habits of all. To eliminate non-priorities in study time, it is suggested to divide the tasks, capture the moments, review the task-handling method, postpone unimportant tasks (understanding that a task's current relevancy and sense of urgency reflect the wants of the person rather than the task's importance), manage life balance (rest, sleep, leisure), and make use of leisure and otherwise nonproductive time (listening to audio recordings of lectures, going through lecture presentations while waiting in a queue, etc.). Certain unnecessary factors that affect time management are habits, lack of task definition (lack of clarity), over-protectiveness of the work, guilt at not meeting objectives and subsequent avoidance of present tasks, defining tasks with higher expectations than their worth (over-qualifying), focusing on matters that have an apparent positive outlook without assessing their importance to personal needs, tasks that require support and time, and sectional interests and conflicts, etc. A habituated systematic process becomes a device that the person can use with ownership for effective time management.
Implementation of goals
A task list (also called a to-do list or "things-to-do") is a list of tasks to be completed, such as chores or steps toward completing a project. It is an inventory tool which serves as an alternative or supplement to memory. Task lists are used in self-management, business management, project management, and software development. A person may keep more than one list. When one of the items on a task list is accomplished, the task is checked or crossed off. The traditional method is to write these on a piece of paper with a pen or pencil, usually on a note pad or clipboard. Task lists can also take the form of paper or software checklists. Writer Julie Morgenstern suggests "do's and don'ts" of time management that include: map out everything that is important by making a task list; create "an oasis of time" for one to manage; say "no"; set priorities; don't drop everything; and don't think a critical task will get done in one's spare time. Numerous digital equivalents are now available, including personal information management (PIM) applications and most PDAs. There are also several web-based task list applications, many of which are free.
Task list organization
Task lists are often diarized and tiered. The simplest tiered system includes a general to-do list (or task-holding file) to record all the tasks the person needs to accomplish and a daily to-do list which is created each day by transferring tasks from the general to-do list. An alternative is to create a "not-to-do list", to avoid unnecessary tasks. Task lists are often prioritized in the following ways. A daily list of things to do, numbered in the order of their importance and done in that order one at a time as daily time allows, is attributed to consultant Ivy Lee (1877–1934) as the most profitable advice received by Charles M. Schwab (1862–1939), president of the Bethlehem Steel Corporation. An early advocate of "ABC" prioritization was Alan Lakein, in 1973.
In his system, "A" items were the most important ("A-1" the most important within that group), "B" the next most important, and "C" the least important. A particular method of applying the ABC method assigns "A" to tasks to be done within a day, "B" within a week, and "C" within a month. To prioritize a daily task list, one either records the tasks in the order of highest priority, or assigns them a number after they are listed ("1" for highest priority, "2" for second highest priority, etc.) which indicates the order in which to execute the tasks. The latter method is generally faster, allowing the tasks to be recorded more quickly. Another way of prioritizing compulsory tasks (group A) is to put the most unpleasant one first. When it is done, the rest of the list feels easier. Groups B and C can benefit from the same idea, but instead of doing the first (most unpleasant) task right away, the approach gives motivation to do other tasks from the list in order to avoid the first one. A completely different approach, which argues against prioritizing altogether, was put forward by British author Mark Forster in his book "Do It Tomorrow and Other Secrets of Time Management". This is based on the idea of operating "closed" to-do lists, instead of the traditional "open" to-do list. He argues that the traditional never-ending to-do list virtually guarantees that some of your work will be left undone. This approach advocates getting all your work done, every day, and if you are unable to achieve that, the shortfall helps you diagnose where you are going wrong and what needs to change. Various writers have stressed potential difficulties with to-do lists, such as the following. Management of the list can take over from implementing it. This could be caused by procrastination through prolonging the planning activity, which is akin to analysis paralysis. As with any activity, there is a point of diminishing returns. To remain flexible, a task system must allow for disaster: a company must be ready for one, because even a small disaster, if no one has made time for the situation, can metastasize and potentially damage the company. To avoid getting stuck in a wasteful pattern, the task system should also include regular (monthly, semi-annual, and annual) planning and system-evaluation sessions, to weed out inefficiencies and ensure the user is headed in the direction he or she truly desires. If some time is not regularly spent on achieving long-range goals, the individual may get stuck in a perpetual holding pattern on short-term plans, like staying at a particular job much longer than originally planned.
Software applications
Many companies use time-tracking software to track an employee's working time, billable hours, etc., e.g. law practice management software. Many software products for time management support multiple users. They allow the person to give tasks to other users and use the software for communication and to prioritize tasks. Task-list applications may be thought of as lightweight personal information manager or project management software. Modern task list applications may have built-in task hierarchy (tasks are composed of subtasks which may again contain subtasks), may support multiple methods of filtering and ordering the list of tasks, and may allow one to associate arbitrarily long notes with each task.
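As an illustration of the numbered-priority lists and the filtering and note features mentioned above, the following sketch keeps a simple prioritized to-do list. The field names and helper methods are hypothetical, not those of any particular application.

```python
# Sketch of a prioritized task list with simple filtering; names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TodoItem:
    title: str
    priority: int          # 1 = highest priority
    done: bool = False
    notes: str = ""        # arbitrarily long notes, as some applications allow

@dataclass
class TodoList:
    items: List[TodoItem] = field(default_factory=list)

    def add(self, title: str, priority: int, notes: str = "") -> None:
        self.items.append(TodoItem(title, priority, notes=notes))

    def by_priority(self) -> List[TodoItem]:
        """Order of execution: lowest number (highest priority) first, skipping finished tasks."""
        return sorted((i for i in self.items if not i.done), key=lambda i: i.priority)

    def complete(self, title: str) -> None:
        for i in self.items:
            if i.title == title:
                i.done = True   # "check off" the task

todo = TodoList()
todo.add("Prepare report", 1)
todo.add("Book travel", 3)
todo.add("Answer email backlog", 2)
print([i.title for i in todo.by_priority()])
# ['Prepare report', 'Answer email backlog', 'Book travel']
```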
In contrast to the concept of allowing the person to use multiple filtering methods, at least one software product additionally contains a mode in which the software attempts to dynamically determine the best tasks for any given moment.
Time management systems
Time management systems often include a time clock or web-based application used to track an employee's work hours. Time management systems give employers insights into their workforce, allowing them to see, plan and manage employees' time. Doing so allows employers to manage labor costs and increase productivity. A time management system automates processes, which eliminates paperwork and tedious tasks.
GTD (Getting Things Done)
Getting Things Done was created by David Allen. The basic idea behind this method is to finish all the small tasks immediately, and to divide large tasks into smaller tasks that can be started now. The reasoning behind this is to avoid the information overload or "brain freeze" which is likely to occur when there are hundreds of tasks. The thrust of GTD is to encourage the user to get their tasks and ideas out, on paper and organized, as quickly as possible so they are easy to manage and see.
Pomodoro
Francesco Cirillo's "Pomodoro Technique" was originally conceived in the late 1980s and gradually refined until it was later defined in 1992. The technique is named after the pomodoro (Italian for tomato) shaped kitchen timer initially used by Cirillo during his time at university. The "Pomodoro" is described as the fundamental metric of time within the technique and is traditionally defined as being 30 minutes long, consisting of 25 minutes of work and 5 minutes of break time. Cirillo also recommends a longer break of 15 to 30 minutes after every four Pomodoros. Through experimentation involving various workgroups and mentoring activities, Cirillo determined the "ideal Pomodoro" to be 20–35 minutes long.
Related concepts
Time management is related to the following concepts. Project management: time management can be considered a subset of project management and is more commonly known as project planning and project scheduling. Time management has also been identified as one of the core functions of project management. Attention management relates to the management of cognitive resources, and in particular the time that humans allocate their attention (and organize the attention of their employees) to conduct particular activities. Timeblocking is a time management strategy that specifically advocates allocating chunks of time to dedicated tasks in order to promote deeper focus and productivity. Organizational time management is the science of identifying, valuing and reducing time-cost wastage within organizations. It identifies, reports and financially values sustainable time, wasted time and effective time within an organization and develops the business case to convert wasted time into productive time through the funding of products, services, projects or initiatives as a positive return on investment.
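The timing rules of the Pomodoro Technique described above are simple enough to express directly. The sketch below only prints a work/break schedule using the traditional 25/5-minute split with a longer break after every four pomodoros; the interval lengths are configurable assumptions, and no actual timer is started.

```python
# Sketch of a Pomodoro schedule: 25 min work, 5 min break, longer break after every 4 pomodoros.
def pomodoro_schedule(pomodoros: int = 8, work: int = 25, short_break: int = 5, long_break: int = 20):
    """Yield (label, minutes) pairs; a real timer would sleep or notify instead of printing."""
    for n in range(1, pomodoros + 1):
        yield (f"pomodoro {n}: work", work)
        if n % 4 == 0:
            yield (f"after pomodoro {n}: long break", long_break)
        else:
            yield (f"after pomodoro {n}: short break", short_break)

for label, minutes in pomodoro_schedule(pomodoros=4):
    print(f"{label} ({minutes} min)")
```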
See also Action item African time Attention management Calendaring software Chronemics Flow (psychology) Gantt chart Goal setting Interruption science Maestro concept Opportunity cost Order Polychronicity Precommitment Procrastination Professional organizing Prospective memory Punctuality Self-help Task management Time and attendance Time perception Time to completion Time-tracking software Time value of money Work activity management Workforce management Workforce modeling Books: First Things First (book) The 7 Habits of Highly Effective People Systems: Getting Things Done Pomodoro Technique Psychology/Neuroscience/Psychiatry Habit Self-control Impulsivity Inhibitory control Attention deficit hyperactivity disorder References Further reading Management systems
https://en.wikipedia.org/wiki/Heroes%20of%20Newerth
Heroes of Newerth
Heroes of Newerth (HoN) is a multiplayer online battle arena (MOBA) video game originally developed by S2 Games for Microsoft Windows, Mac OS X, and Linux. The game idea was derived from the Warcraft III: The Frozen Throne custom map Defense of the Ancients and was S2 Games' first MOBA title. The game was released on May 12, 2010, and re-released as a free-to-play game on July 29, 2011. On May 5, 2015, Heroes of Newerth development duties passed to Frostburn Studios, with the development team moving over to the new company. The game's servers were discontinued on June 20, 2022.
Gameplay
Heroes of Newerth pits two teams of players against each other: the Legion and the Hellbourne. Both teams are based at opposite corners of the map in their respective bases. Bases consist of buildings, barracks, towers, a hero spawning pool, and a central structure. The goal of the game is to either destroy the central structure of the opposite base, the World Tree (Legion) or Sacrificial Shrine (Hellbourne), or force the other team to concede. Players achieve this by selecting heroes with unique skills to combat the other team. The game starts with a hero-picking phase. There are picking modes that allow players to create teams with balanced functionality. Heroes fill different roles in teams. Heroes can specialize in dealing damage, tanking, crowd control, healing, destroying towers, farming, defending, harassing, initiating fights, empowering nearby allies, providing vision, seeking and revealing enemies, killing Kongor, and so on. Heroes can fulfill many of these roles to different degrees. Typical roles are Carry, Support, Ganker, Jungler and Suicide. Players can choose to fill multiple roles at the same time. After the game starts, players need gold and experience to get stronger over time. To achieve this, players initially need to go to lanes, the jungle, Kongor's pit or one of the golem pits. Experience is gained by seeing an enemy soldier, hero, neutral creature, Kongor or a golem die within a predefined range. Purple dots on creeps indicate whether the player is close enough to gain experience when the creep dies. Gold is gained by killing, or assisting in the killing of, enemy soldiers, heroes, creatures, devices, Kongor or golems. As players level up, they choose an ability to level up, or level up stats, which gives +2 to agility, intelligence and strength. The maximum hero level is 25. Each player typically plays one hero. Players can allow each other to control their own heroes. Some heroes can spawn or summon pets, creatures or devices. Heroes typically have four abilities. The default keys for abilities are Q, W, E and R. Sometimes the D key is used for a fifth ability. The fourth ability is the ultimate ability. There are 139 playable heroes. Each game, a player chooses one hero to play for the duration of the match. Most heroes have four abilities that may be acquired and upgraded as the hero gains experience and levels up, defaulted to keys "Q", "W", "E", and "R". An ability can be leveled up whenever the hero's level goes up. "R" is the hero's ultimate and, except for some heroes, can only be leveled up when the hero reaches level 6. Heroes are grouped by their main attribute. The three types are Agility, Intelligence, and Strength. Usually, Agility heroes rely on their basic attacks, go for damage per second (DPS), and increase their armor and attack speed. Intelligence heroes maximize the use of their abilities and try to maximize the amount of Mana they have and their Mana regeneration.
Strength heroes can take the most damage and increase their Max Health and Health regeneration. Heroes also are grouped by their attack type. The two attack types are melee and ranged. Melee heroes have short attack range and ranged heroes have long attack range. Ranged heroes have varying attack ranges. Abilities have their own ranges. Development Development started in 2005. In October 2009, associate game designer Alan "Idejder" Cacciamani claimed that Heroes of Newerth had been in development for "34 months, but the first 13 were spent on engine development. The entirety of assets, including maps, items, heroes, and art were made in 21 months". New features, balance changes and new heroes are regularly introduced with patches. Most game mechanics and many heroes in Heroes of Newerth are heavily based on Defense of the Ancients. The additions that differentiate Heroes of Newerth from Defense of the Ancients are features independent from gameplay; such as tracking of individual statistics, in-game voice communication, GUI-streamlined hero selection, game reconnection, match making, player banlists, penalties for leaving and chat features. Several features added via updates include a Hero Compendium (a list of the heroes in the game with detailed statistics about them), the ability to set a "following" trait on a friend which makes the player join/leave the games that a friend joins (similar to the "party" feature in other games), an in-game ladder system, and a map editor. The game uses S2 Games' proprietary K2 Engine and a client-server model similar to that used in other multiplayer games. Heroes of Newerth was in beta from April 24, 2009, until May 12, 2010. Throughout this time, over 3,000,000 unique accounts were registered. S2 Games used a Facebook fan page and word of mouth to attract players to the game. Many people who had bought one of S2 Games' previous games also received an invitation to the game through their registered email. On August 22, 2009, the pre-sale of Heroes of Newerth began for members of the closed beta. Players who purchased the game at this time received additional benefits, including name reservation, gold-colored nameplate, gold shield insignia, and an in-game taunt ability. Open beta testing for Heroes of Newerth began on March 31, 2010, and ran until May 12, 2010, when the game was released. S2 Games released Heroes of Newerth 2.0 on December 13, 2010. Features included in the update were casual mode, a new user interface, team matchmaking, an in-game store, and an offline map editor. Microtransactions were also introduced via the in-game store with the use of coins. Coins can be used to purchase cosmetic changes within the game, such as alternative hero skins, avatars, and customized announcer voices. The in-game currency can either be purchased with real life currency or earned via Matchmaking games. S2 Games released Heroes of Newerth as a free-to-play game on July 29, 2011. Accounts that were purchased before this date retained access to all content and updates without additional charges. Accounts made after this had 15 free-rotating heroes to choose from; the 15 heroes rotated every week. These accounts only had access to the game mode All Pick. Through purchasing coins or earning them in play, players could purchase the ability to use additional heroes. Players had to pay for tokens to play additional game modes, so that they could temporarily have the hero pool available to provide balance in hero selection. 
On July 19, 2012, nearly one year after announcing its free-to-play model, S2 Games announced publicly that the game would be completely free to play with no restrictions on hero access, excluding early access to yet-to-be-released heroes. The in-game store pricing was also reworked to allow easier access to in-game cosmetic content. In October 2012, S2 Games announced HoN Tour, an automated tournament system built into the game. The tournament is open to anyone and players compete to earn real money. The first "cycle" of the event began the weekend of December 1. In December 2012, Heroes of Newerth was hacked, with over 8 million accounts being breached. The compromised data included usernames, email addresses and passwords. The hack was announced by the perpetrator themself on Reddit, with S2 Games later confirming the breach. On May 1, 2013, S2 Games released Heroes of Newerth 3.0. Version 3.0 significantly updated the game's graphics, added bots, and dramatically improved features for introducing new players to the game. Part of the change features different-looking lanes, cliffs, and towers. Heroes, as well, look sharper and more detailed. The features for new players include tutorial videos and AI bots for a stress-free playing environment. On May 5, 2015, it was announced that Garena had acquired Heroes of Newerth from S2 Games and established FrostBurn Studios to handle development of the game. Many of the previous S2 Games staff members who helped develop and maintain the game were subsequently employed by the new FrostBurn Studios. On November 6, 2020, Frostburn Studios released the HoN 64-bit client, which should result in a much more stable experience for those on the latest Windows (10), most notably faster FPS and reduced loading times. On February 9, 2021, a macOS 64-bit Universal client was announced to be entering closed beta. In recent years, the game has been receiving patches every 8 weeks aimed at improving game balance, fixing bugs, and occasionally bringing new hero avatars and items, with 4.9.2 being the latest version, which went live on March 30, 2021. On December 15, 2021, HoN's developers announced that the game would be shut down on June 20, 2022.
Reception
Heroes of Newerth has received generally positive reviews, with a score of 76 out of 100 from Metacritic. Reviews have generally praised the technical aspects of the game, while criticizing the harsh learning curve and the commonly critical nature of the community. When Heroes of Newerth became free-to-play on July 29, 2011, the game had accumulated over 526,000 paid accounts with 460,000 unique players. The number of concurrent players online also steadily increased over time, peaking at 150,000 as of May 2013. In mid-2013, Heroes of Newerth was the third most played game in internet cafés in the Philippines. Laura Baker, the director of marketing for S2 Games, stated that both the "Mac and Linux clients have done well for us."
Other media
In the 2014 film The November Man, Heroes of Newerth is being played on screen during a scene in which a character walks through an internet cafe.
See also Savage: The Battle for Newerth Savage 2: A Tortured Soul Strife References External links 2010 video games Esports games Free-to-play video games Independent Games Festival winners Linux games Lua (programming language)-scripted video games MacOS games Multiplayer online battle arena games Multiplayer online games Multiplayer video games Online games Products and services discontinued in 2022 S2 Games Video game clones Video games developed in the United States Windows games
https://en.wikipedia.org/wiki/AbiWord
AbiWord
AbiWord is a free and open-source software word processor. It is written in C++ and, since version 3, is based on GTK+ 3. The name "AbiWord" is derived from the root of the Spanish word "abierto", meaning "open". AbiWord was originally started by SourceGear Corporation as the first part of a proposed AbiSuite but was adopted by open source developers after SourceGear changed its business focus and ceased development. It now runs on Linux, ReactOS, Solaris, AmigaOS 4.0 (through its Cygwin X11 engine), MeeGo (on the Nokia N9 smartphone), Maemo (on the Nokia N810), QNX and other operating systems. Development of a version for Microsoft Windows has ended due to lack of maintainers (the latest released versions are 2.8.6 and 2.9.4 beta). The macOS port has remained on version 2.4 since 2005, although the current version does run non-natively on macOS through XQuartz. AbiWord is part of the AbiSource project, which develops a number of office-related technologies. Since 2009, AbiWord has been one of the rare word processors that allows local users to simultaneously edit the same shared document over a local network, without requiring an Internet connection.
Features
AbiWord supports both basic word processing features such as lists, indents and character formats, and more sophisticated features including tables, styles, page headers and footers, footnotes, templates, multiple views, page columns, spell checking, and grammar checking. Starting with version 2.8.0, AbiWord includes a collaboration plugin that allows integration with AbiCollab.net, a Web-based service that permits multiple users to work on the same document in real time, in full synchronization. The Presentation view of AbiWord, which permits easy display of presentations created in AbiWord on "screen-sized" pages, is another feature not often found in word processors.
Interface
AbiWord generally works similarly to classic versions (pre-Office 2007) of Microsoft Word, as direct ease of migration was a high-priority early goal. While many interface similarities remain, cloning the Word interface is no longer a top priority. The interface is intended to follow user interface guidelines for each respective platform.
File formats
AbiWord comes with several import and export filters providing partial support for such formats as HTML, Microsoft Word (.doc), Office Open XML (.docx), OpenDocument Text (.odt), Rich Text Format (.rtf), and text documents (.txt). LaTeX is supported for export only. Plug-in filters are available to deal with many other formats, notably WordPerfect documents. The native file format, .abw, uses XML, so as to mitigate vendor lock-in concerns with respect to interoperability and digital archiving.
Grammar checking
The AbiWord project includes a US English-only grammar checking plugin using Link Grammar. AbiWord had grammar checking before any other open source word processor, although a grammar checker was later added to OpenOffice.org. Link Grammar is both a theory of syntax and an open source parser which is now developed by the AbiWord project.
See also List of free and open-source software packages List of word processors Comparison of word processors Office Open XML software OpenDocument software
References
External links Andrew Leonard: Abiword Up. Salon.com, November 15, 2002. History of the project and comparison with closed source development.
Interview with Development team after 2.6 release AbiWord: A Small, Swift Word Processor Office software that uses GTK Free software programmed in C++ Free word processors Linux word processors MacOS word processors Windows word processors Cross-platform free software Portable software 1998 software Software using the GPL license
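Because the native .abw format described in the File formats section above is XML-based, its contents can be inspected with standard XML tooling. The sketch below extracts the raw text of a document using only Python's standard library, without assuming any AbiWord-specific element names; the sample file path is hypothetical.

```python
# Sketch: pull the plain text out of an AbiWord .abw file, which the article notes is XML.
# No AbiWord-specific element names are assumed; all text nodes are simply collected.
import xml.etree.ElementTree as ET

def abw_to_text(path: str) -> str:
    tree = ET.parse(path)                       # parses any well-formed XML document
    return "".join(tree.getroot().itertext())   # concatenate every text node

if __name__ == "__main__":
    print(abw_to_text("example.abw"))           # hypothetical sample document
```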
https://en.wikipedia.org/wiki/Telecom%20Corridor%20Genealogy%20Project
Telecom Corridor Genealogy Project
The Telecom Corridor Genealogy Project is the result of a collaboration between the Richardson Chamber of Commerce and the Center for Information Technology and Management (CITM) in the School of Management at the University of Texas at Dallas. Its purpose is not only to create a multidimensional diagram of the relationships among companies and their employees in the high-tech sector, but also to enable professionals in the tech sector to interact in a social networking framework that has certain advantages over general websites such as LinkedIn. The project is intended to follow tens of thousands of companies to give a true multidimensional diagram of the history of the area's "corporate DNA", and to be a living database that will continue to evolve as long as the tech sector exists.
Summary
The original concept was developed as a tool for economic development in 2003 by Paul Peck, who wanted to create a social networking website for professionals in the Richardson Telecom Corridor. The idea behind the project was to develop a history framework and database for the Telecom Corridor and thereby illustrate the Telecom Corridor's highly networked community. In order to prove and illustrate the environment and fertile ground for new start-ups and relocating companies, the project demonstrates the family-like bonds and history of the local companies and their executives. It also aims to simplify and increase networking among area companies and their executives through the common thread that they share, and to push networking to a higher level.
History
The Telecom Corridor Genealogy Project started in February 2003 as a tool for economic development for the region that is called the Telecom Corridor. The concept was first outlined in discussions at the Richardson Economic Development Technology Advisory Board (REDTAB) and further developed with the help of the Metroplex Technology Business Council 3rd Friday Tech Luncheon Committee. By March 2003 the founder, Paul Peck, had already been able to collect 300 entries from 100 different people for the database. In order to gain more momentum for the project, he decided to approach the local university, the University of Texas at Dallas, in April 2003. He met with Dr. Michael Savoie, Director of the Center for Information Technology and Management (CITM) in the School of Management at The University of Texas at Dallas. Shortly after this meeting, Dr. Savoie and his Center developed a program to manage the database and hosted it on the UT Dallas website for people to contribute their data online. Dr. Michael Savoie said: "We're doing the Telecom Corridor Genealogy Project, which is huge. We're generating a 40-year family tree of the companies and individuals that have worked in the North Texas technology sector. For one thing, it's a tremendously valuable tool for the Chamber of Commerce and economic development corporations in the area. It's also a great way for people in the tech sector to reconnect and be able to find people they used to work with. We are working with tens of thousands of companies to give a true multidimensional diagram of the history of technology. This is going to be a living database that will continue to evolve as long as the tech sector exists." As a result of this work, the database grew to about 400 entries and 150 individuals, and first results from the data were shown at the Metroplex Technology Business Council (MTBC) Executive Committee. In May 2003 the committee allowed Mr. Peck and Dr.
Savoie to use the trademark Telecom Corridor in the name of the project. Subsequently, the project became known all over the Telecom Corridor and the Dallas/Fort Worth Metroplex due to its rapid growth and was the subject of interviews and photos (Peck, Hicks, Savoie & Robinson). It was covered in the Dallas Morning News, Sunday Business Section, Page 1, in June 2003, and a radio interview with Paul Peck and Art Roberts on "Tech in Touch", WBAP 820, with Kym Yancey followed shortly after that. A next step in the project was the addition of company profile/history data (by Jerry Cupples) for cross-referencing purposes in July 2003. Following another meeting with Dr. Savoie, CITM members and Claire Lewis, the preliminary Telecom Corridor Genealogy Project web page was developed and put online at the CITM. After the project grew in 2003, the Dallas Morning News followed up and published a second article in June 2004, almost exactly one year after the first. Due to limited human resources, the project lay dormant until February 2010. In February 2010 the Richardson Chamber of Commerce's Economic Development Partnership decided to revive the project and approached Paul Peck with the idea of providing an intern to help him with the project. Due to the time that had passed since 2004, the revival process seemed even harder considering what had changed in the world of social media and the upcoming Web 2.0. Questions had to be answered: what can the project do that LinkedIn or Facebook cannot? That involved a lot of research on social networks, business networks, and even social network software applications.
About Regional Economic Development
Conventional economic development policies are believed not to be universally usable for attracting high-tech firms to areas or cities; rather, those firms locate in clusters with certain characteristics: "[...]the presence of a strong, scientifically oriented university that can be called upon to work with businesses in their research endeavors and spin-off new high-tech firms, as well as to perform the more traditional role of producing a well educated pool of employees; a technology center to act as an intellectual resource around which high-tech firms can concentrate; the availability of venture capital to provide seed money for the early stages of start-up companies; and an entrepreneurial, risk-taking business climate[...]" Even though the Telecom Corridor Genealogy Project was only founded in 2003, there had been reports and talks about the significance of the history of companies and their workforce in this area since the early 1990s. That history matters in economic development is widely agreed upon, and the history of the companies and their family tree has been captured and referred to by many authors.
Vision and goals
The future vision is to promote further networking among area companies and their executives through the common thread that they share, and to push networking to a higher level, modeling the Silicon Valley community network with its rapid knowledge flows and strong supply chain relationships through...
Previous models
The idea of the project was and still is influenced by other similar models of regional business "family tree" projects. The most famous regional business family tree is the Silicon Valley Fairchild family tree. The Fairchild project resulted in a map that shows the genealogy of the companies in Silicon Valley that spun out of or were acquired by Fairchild Semiconductor from 1957 till 1979 (Silicon Valley Genealogy Map).
Another economic development-related project was done by ACEnet and the social network software provider InFlow. The project resulted in an analysis of the food industry in Athens, Ohio, and produced a better-connected business community and a better understanding of it. "[...]Communities are built on connections. Better connections usually provide better opportunities [...] How do we build connected communities that create, and take advantages of, opportunities in their region or marketplace [...]"
References
External links Website of the Telecom Corridor Richardson Chamber of Commerce Center for Information Technology and Management at UTD Don Hoefler, Harry Smallwood, and James E. Vincler: Fairchild/Silicon Valley Genealogy Chart Telecom Corridor Genealogy Project - LinkedIn Group Telecom Corridor Genealogy Project - Data Input form Economy of Dallas
https://en.wikipedia.org/wiki/Sony%20Reader
Sony Reader
The Sony Reader was a line of e-book readers manufactured by Sony, who produced the first commercial E Ink e-reader with the Sony Librie in 2004. It used an electronic paper display developed by E Ink Corporation, was viewable in direct sunlight, required no power to maintain a static image, and was usable in portrait or landscape orientation. Sony sold e-books for the Reader from the Sony eBook Library in the US, UK, Japan, Germany, Austria, Canada and was reported to be coming to France, Italy and Spain starting in early 2012. The Reader also could display Adobe PDFs, ePub format, RSS newsfeeds, JPEGs, and Sony's proprietary BBeB ("BroadBand eBook") format. Some Readers could play MP3 and unencrypted AAC audio files. Compatibility with Adobe digital rights management (DRM) protected PDF and ePub files allowed Sony Reader owners to borrow ebooks from lending libraries in many countries. The DRM rules of the Reader allowed any purchased e-book to be read on up to six devices, at least one of which must be a personal computer running Windows or Mac OS X. Although the owner could not share purchased eBooks on others' devices and accounts, the ability to register five Readers to a single account and share books accordingly was a possible workaround. On August 1, 2014, Sony announced that it would not make another consumer e-reader. In late 2014, Sony released the Sony Digital Paper DPTS1 - which only views PDFs and has a stylus for making notes - aimed at professional business users. Models and availability Ten models were produced. The PRS-500 (PRS standing for Portable Reader System) was made available in the United States in September 2006. On 1 November 2006, Readers went on display and for sale at Borders bookstores throughout the US. Borders had an exclusive contract for the Reader until the end of 2006. From April 2007, Sony Reader has been sold in the US by multiple merchants, including Fry's Electronics, Costco, Borders and Best Buy. The eBook Store from Sony is only available to US or Canadian residents or to customers who purchased a US-model reader with bundled eBook Store credit. On July 24, 2007, Sony announced that the PRS-505 Reader would be available in the UK with a launch date of September 3, 2008. Waterstone's is the official retail partner and the Reader is available at selected stores such as Argos, Sony Centres and Dixons; while a red edition is available exclusively from John Lewis. On October 2, 2008 the PRS-700, with touch screen and built-in lighting was announced. On August 5, 2009 Sony announced two new readers, the budget PRS-300 Pocket Edition and the more advanced PRS-600 Touch Edition. On August 25, 2009 Sony announced the Reader PRS-900 "Daily Edition." This features a 7" diagonal screen to compete with the Amazon Kindle DX. It's also the first to feature free 3G wireless through AT&T to access the Sony eBookstore without the need of a computer, and to increase the grayscale level, from 8 to 16. On September 1, 2010, Sony introduced the PRS-350 Pocket Edition, PRS-650 Touch Edition, PRS-950 "Daily Edition" as replacements for the PRS-300, PRS-600 and PRS-900, with both new models featuring 16-level grey scale touch screens. The launch of the new models also represented the introduction of the Sony Reader into the Australian and New Zealand markets for the first time. On August 31, 2011, Sony announced a new reader replacing all of their previous models, the PRS-T1, featuring a 6" screen. On August 16, 2012, Sony announced the PRS-T1 successor, the PRS-T2. 
On September 4, 2013, Sony announced the PRS-T2 successor, the PRS-T3. Unlike previous Sony reader models, the T3 is not sold in the US, and Sony has abandoned the North American market due to competition from Amazon, B&N and Kobo. On February 6, 2014, Sony announced that it was closing its North American, Europe, and Australia Reader Stores in late March, migrating all its customers to the Kobo Reader Store. On August 1, 2014, Sony announced that it would not release another ereader but would keep selling its remaining stock. 2013 Model (Discontinued in August 2014) Reader Wi-Fi PRS-T3S The PRS-T3S is the latest 6", Wi-Fi only model. Announced in October 2013 in Japan, it is a PRS-T3 without a cover that costs $99 and was sold in Japan, England, Canada and Germany. Reader Wi-Fi PRS-T3 The PRS-T3 is a 6", Wi-Fi only model with a snap cover. Specifications Size: 160 × 109 × 11.3 mm Weight: 200 grams including snap cover Display: size: 15.2 cm (6 in) diagonal (approx area of letter-sized page). resolution: 16-level gray scale 6" Pearl HD E Ink screen 1024 x 758 pixel resolution Memory: 2 GiB of internal storage (1.3 GiB available to use) plus microSD expansion of up to 32 GB Battery Life: 6–8 weeks, assuming 30 minutes reading per day Connectivity: Micro-USB PC interface: USB port Supported e-book formats: EPUB, PDF, FB2, TXT Supported picture formats: BMP, GIF, JPEG, PNG Wireless: Wi-Fi 802.11 b, g, n, simple Web browser Colors: Black (Matte), Red (Glossy) and White (Glossy) 2012 Model (Discontinued late 2013) Reader Wi-Fi PRS-T2 The PRS-T2 is a 6" Wi-Fi only model. Its touchscreen supports zoom in and out, dictionary and adding notes, including export to Evernote. The device has two English languages and four translation dictionaries built-in. PRS-T2 specifications. Size: 173 × 110 × 9.1 mm Weight: 164 g Display: size: 15.2 cm (6 in) diagonal (approx 1/4 area of letter-sized page) resolution: 16-level gray scale E Ink Pearl display portrait: 90.6 × 122.4 mm (3.57" × 4.82"), 600 × 800 pixels | effective 115.4 × 88.2 mm (4.54 × 3.47 in), 754 × 584 pixels minimum font size: 6 pt legible, 7 pt recommended Memory: 2 GB of internal storage (1.3 GB available to use) plus microSD expansion of up to 32 GB Battery Life: Up to 2 months with Wi-Fi off Lithium-ion battery: up to two months battery life, with wireless off (on reading 1/2h per day). Connectivity: Micro-USB PC interface: USB port Supported e-book formats: EPUB, PDF, TXT, BBeB*, Rtf*, Doc* (*After conversion with Sony software) Supported picture formats: Jpg, Gif, Png, Bmp. Wireless: Wi-Fi, simple web browser. Colors: Black (Matte), Red (Glossy) and White (Glossy). 2011 Model (Discontinued late 2012) Reader Wi-Fi PRS-T1 The PRS-T1 is a 6", Wi-Fi only model. Its touchscreen supports zoom in and out, look up in dictionary and adding notes. Up to 16 different languages are supported. PRS-T1 specifications Size: 173 x 110 x 8.9 mm Weight: 168 g Display: size: 15.5 cm (6 in) diagonal (approx area of letter-sized page) resolution: 16-level gray scale E Ink Pearl display portrait: 90.6 x 122.4 mm (3.57" x 4.82"), 600 x 800 pixels | effective 88.2 x 115.4 mm (3.47" x 4.54"), 584 x 754 pixels minimum font size: 6 pt legible, 7 pt recommended Memory: 2 GB of internal storage (1.4 GB available to use) plus microSD expansion of up to 32 GB. Lithium-ion battery, up to one month per charge. PC interface: USB port Supported e-book formats: EPUB, PDF, TXT. Supported audio formats: MP3, AAC. Wireless: Wi-Fi, simple web browser. 
Colors: Black, Red and White. 2010 Models (Discontinued late 2011) Pocket Edition PRS-350 The PRS-350 was launched in August 2010 and it is also known as the "Pocket Edition". The PRS-350 was announced at the same time as the touch-screen PRS-650. It is Sony's smallest ereader as well as its entry-level device replacing the PRS-300 and it is priced at US$179. It has a touch screen, and two GB of Memory but lacks an SD Card Slot and does not support MP3 playback. PRS-350 specifications Size: 145 × 104.3 × 8.5mm Weight: 155 g Display: 5 inch. E Ink Pearl, touch-screen, grey scale 16-levels Resolution 600 × 800 pixels Document Search Capability Built in flash memory: 2 GB Font Size: 6 sizes (XS - XXL) Supported e-book formats: EPUB, PDF, Microsoft Word, TXT, RTF, BBeB Hi-speed micro USB Color: Pink, Silver, Blue, Red, Black Touch Edition PRS-650 The PRS-650 was launched in August 2010 and it is also known as the "Touch Edition". The PRS-650 was announced at the same time as the touch-screen PRS-350. It is Sony's mid-range device, priced at US$229. As the replacement for the PRS-600 model, it is Sony's higher-scale, touch-screen edition of the reader. It has a similar interface to the PRS-350. PRS-650 specifications Size: 168 × 118.8 × 9.6mm Weight: 215 g Display: 6 inch. E Ink Pearl, touch-screen, grey scale 16-levels Resolution 600 × 800 pixels Document Search Capability Built in flash memory: 2 GB SD card slot Memory Stick PRO Duo slot Font Size: 6 sizes (XS - XXL) Supported e-book formats: EPUB, PDF, Microsoft Word, TXT, RTF, BBeB Supported audio formats: MP3, AAC Available case colors: PRS-650BC: Black PRS-650SC: Silver PRS-650RC: Red Daily Edition PRS-950 The PRS-950 was launched in August 2010 replacing the PRS-900 and it is also known as the "Daily Edition". It was introduced as Sony's top-of-the-line device, priced at US$299. The device has a larger display (7"), 16-levels of grayscale, touch screen Wi-Fi and 3G wireless access (through AT&T Mobility in a manner similar to the Kindle's whispernet) which enables computer-free access to the Sony eBookstore in the United States. Like earlier Sony Readers the display can be oriented horizontally, enabling a landscape style mode, and adds a new mode displaying two portrait-mode pages side-by-side (in a similar fashion to viewing a book). PRS-950 specifications Size: 199.9 × 128 × 9.6mm Weight: 272 g Display: 7 inch. E Ink Pearl, touch-screen, grey scale 16-levels Resolution 600 × 1024 pixels Document Search Capability Built in flash memory: 2 GB SD card slot Memory Stick PRO Duo slot Font Size: 6 sizes (XS - XXL) Supported e-book formats: EPUB, PDF, Microsoft Word, TXT, RTF, BBeB Supported audio formats: MP3, AAC Color: Silver only Wireless: 3G, Wi-Fi, Web Browser 2009 Models (Discontinued late 2010) Pocket Edition PRS-300 The PRS-300 was launched in August 2009 and it is also known as the "Pocket Edition". The PRS-300 was announced at the same time as the touch-screen PRS-600. It is Sony's smallest ever ereader as well as its entry-level device, priced at US$199. It has a smaller screen than the PRS-600, no touch interface, no MP3 audio or expandable memory. It has a similar interface to the PRS-500 and PRS-505. Specifications Display: 5 inch. Resolution: 600 × 800 pixels Dimensions LxWxD (approx.): 6 × 4 × 13/32 inches (approx. 
159 × 108 × 10 mm) Weight (approx.): 220 g (7.76 oz) Gray scale: 8-level gray scale Internal Memory: 512 MiB, 440 MiB accessible Font Size: 3 adjustable font sizes Battery: Sealed internal, up to two weeks of reading on a single charge MSRP: US$150 Available case colors: PRS-300BC: Navy Blue PRS-300RC: Rose Pink PRS-300SC: Silver
Touch Edition PRS-600
The PRS-600 was launched in August 2009 and is also known as the "Touch Edition". The PRS-600 was announced at the same time as the non-touch-screen PRS-300. It is Sony's middle-range device and is priced at US$299. It is the replacement for the PRS-700 model (although it is missing the front-light feature). It is Sony's higher-scale, touch-screen edition of the ereader. It has a similar interface to the PRS-700. Unlike the PRS-700, which was only available in black, the PRS-600 is available in three colors. Note that if the device is locked using the optional 4-digit PIN, it will not mount via USB; the lock option needs to be disabled in order to mount the device. This edition has been criticized for having a very reflective screen, making it hard to read unless it is angled just right in relation to the light sources. This edition offers the possibility to highlight, quote or underline the text being read. Moreover, it comes with features such as a music player accessible via a headphone jack.
Specifications Size: 175.3 × 121.9 × 10.2 mm (6.9" × 4.8" × 0.4") Weight: 286 g (10.1 oz) Display: 6 inch touch-screen Resolution: 600 × 800 pixels Document Search Capability Built-in Dictionary: American Oxford and English Oxford eBook support: DRM Text: ePub (Adobe DRM protected), PDF (Adobe DRM protected), BBeB Book (PRS DRM protected); Unsecured Text: ePub, BBeB Book, PDF, TXT, RTF, Microsoft Word (conversion to the Reader requires Word installed on your PC) Gray scale: 8-level gray scale Internal Memory: 512 MB, 380 MB accessible Expanded Memory: Support for Sony Memory Stick Pro Duo and SDHC up to 16 GB Font Size: 5 adjustable font sizes Battery: Sealed internal, up to two weeks of reading on a single charge MSRP: US$170 Available case colors: PRS-600BC: Black PRS-600SC: Silver PRS-600RC: Red
Daily Edition PRS-900
The PRS-900 was launched in December 2009 and is also known as the "Daily Edition". The PRS-900 was announced a few weeks after the PRS-300 and PRS-600. It is Sony's top-of-the-range device and is priced at US$399. The device has a larger display (7"), 16 levels of grayscale, a touch screen and 3G wireless access (through AT&T Mobility in a manner similar to the Kindle's Whispernet), which enables computer-free access to the Sony eBookstore in the United States. Like earlier Sony Readers, the display can be oriented horizontally, enabling a landscape-style mode, and it adds a new mode displaying two portrait-mode pages side by side (in a similar fashion to viewing a book).
Specifications Size: 206.4 × 127 × 15.1 mm (8.1" × 5" × 0.6") Weight: 360 g (12.75 oz) Display: 7.1 inch touch-screen Resolution: 600 × 1024 pixels Gray scale: 16-level gray scale Internal Memory: 2 GiB, 1.6 GiB accessible Expanded Memory: support for Sony Memory Stick Pro Duo and SDHC up to 32 GiB. According to Sony, it can take up to a 32 GiB Memory Stick, but according to its manual, 32 GiB memory sticks are not guaranteed to work; therefore, it is recommended to use 16 GiB memory sticks.
Font Size: 6 adjustable font sizes Battery: user replaceable, up to two weeks of reading on a single charge Wireless: AT&T 3G wireless (free), access to eBook store only, no Web browser MSRP: US$250 Available case colors: PRS-900: black 2008 Model (Discontinued late 2009) PRS-700 The PRS-700 was launched in October 2008, it has a touchscreen that can be used as a virtual keyboard. It became available in the U.S. in November 2008 at a MSRP of $399; in April 2009 it was selling for $349.99. Unlike Sony's LIBRIé, a close cousin of the Sony Reader, the PRS-500 and PRS-505 offered no way for the user to annotate a digital book since those lack a keyboard. This was addressed by the release of the PRS-700. Improvements of PRS-700 vs. the PRS-505 include the following: The 6-inch E Ink display (same resolution as before) is now a touch screen, removing the need for the 10 side buttons. Note taking and virtual keyboard, made possible by the touch screen. Page turning buttons remain but can also be accomplished by touch screen gestures. LED lighting for use in poor lighting conditions. Internal storage is doubled to 512 MB. PRS-700 specifications Size: Approx. 174.3 × 127.6 × 9.7 mm (6" × 5" × 0.4") Weight: 283.5 g (10 oz) Display: size: 15.5 cm (6 in) diagonal (approx 1/4 area of letter-sized page) resolution: 170 dpi, 8-level gray scale integrated touchscreen Memory: 512 MB standard (350 eBooks at 1.2 MB each average, 420 MB available), Sony Memory Stick Pro Duo 8 GB, SDHC card expansion up to 32 GB Lithium-ion battery, up to 7500 "page turns" per charge PC Interface: USB port 2.0 Built-in LED reading light 2007 Model (Discontinued late 2009) PRS-505 The PRS-505 was launched on 2 October 2007, a software and hardware updated version of the PRS-500 Reader, which it replaced. The 505 keeps the 6" E Ink display of the original Reader, but uses an improved version of E Ink Vizplex imaging film with faster refresh time, brighter white state, and 8-level grayscale. The PRS-505 is thinner than its predecessor (8 mm vs. 13 mm) and comes with more internal memory (256 MiB vs. 64 MiB). Other new product features included auto-synchronization to a folder on a host PC, support for the USB Mass Storage Device profile, and full USB charging capability (the PRS-500 could only be recharged via USB if the battery was not fully drained, and if the Sony Connect Reader software was installed on the host PC). Also, adding books to "Collections" (a feature to organize and group book titles) is now possible on the storage card, unlike the PRS-500 model. Version 1.1 firmware, available as a free download since July 24, 2008 adds support for the EPUB format, Adobe Digital Editions 1.5 and Adobe DRM protected PDF files, automatic reflow of PDF files formatted for larger pages enlarges the text to improve readability, and support for high capacity SDHC memory cards. 
Specifications
Size: 175 × 122 × 8 mm (6.9" × 4.8" × 0.3")
Weight: 250 g (9 oz)
Display: size: 15.5 cm (6 in) diagonal (approx 1/4 area of letter-sized page); resolution: 170 dpi, 8-level gray scale; portrait: 90.6 × 122.4 mm (3.57" × 4.82"), 600 × 800 pixels | effective 88.2 × 115.4 mm (3.47 × 4.54 in), 584 × 754 pixels | for the Pictures application effective resolution is 600 × 766 pixels; minimum font size: 6 pt legible, 7 pt recommended
Memory: 256 MiB standard (200 MiB accessible), Sony Memory Stick Pro Duo 8 GiB, SD card up to 2 GiB (some non-SDHC 4 GiB cards may work), or up to 32 GiB with SDHC cards and version 1.1 firmware
Lithium-ion battery, up to 6800 "page turns" per charge
PC interface: USB port 2.0
Available case colors: PRS505/LC: Dark Blue, PRS505/SC: Silver, PRS505SC/JP: Custom Skin (James Patterson Special Edition), PRS505/RC: Sangria Red (introduced in August 2008)
2006 Model (Discontinued late 2007)
PRS-500
Launched in September 2006, it has a six-inch E Ink display and is 13 mm thick. There is an internal memory of 64 MiB. This model was superseded by the PRS-505 in 2007. On November 16, 2009, Sony announced that a firmware update was available to owners of the original PRS-500. This update "will allow your PRS-500 to support the ePub and Adobe DRM format and add the ability to re-flow PDF documents". Owners must send the ereader in to a Sony Service Center for the updated firmware.
Specifications
Size: 175.6 × 123.6 × 13.8 mm (6.9" × 4.9" × 0.5")
Weight: 250 g (9 oz)
Display: size: 15.5 cm (6 in) diagonal (approx 1/4 area of letter-sized page); resolution: 170 dpi, 4-level gray scale; portrait: 90.6 × 122.4 mm (3.57" × 4.82"), 600 × 800 px | effective 115.4 × 88.2 mm (4.54 × 3.47 in), 754 × 584 px; minimum font size: 6 pt legible, 7 pt recommended
Memory: 64 MiB standard, Memory Stick (Pro Duo High Speed not supported; normal memory sticks are only supported up to 4 GiB, despite Sony compatibility claims) or SD card expansion up to 2 GiB (some non-SDHC 4 GiB cards may work)
Lithium-ion battery, up to 7500 "page turns" per charge
PC interface: USB port
2004 Model
Sony Librie EBR-1000EP
Launched in April 2004 in Japan, it has a six-inch E Ink display and a QWERTY keyboard.
Specifications
Display: 6-inch screen with a resolution of 600 × 800 dots at 170 dpi
Memory: 10 MB, Memory Stick support
Size: 126 mm × 190 mm × 13 mm
Weight: 300 g
Formats supported
DRM-free Text: BBeB Book (LRF), TXT, RTF, EPUB (PRS-T1: EPUB, PDF, TXT only). Typefaces in PDF files formatted for 216 × 280 mm (8.5 × 11 inch) pages may be too small to read comfortably. Such files can be reformatted for the Reader screen size with Adobe Acrobat Professional, but not by Adobe Reader software. The Reader does support Microsoft Word DOC format; the 'CONNECT Reader' application uses Word to convert the .DOC files to RTF before sending them to the Reader.
DRM-protected Text: BBeB Book (LRX); ePub.
Audio: MP3 and DRM-free AAC (except on the PRS-T2, PRS-300 & PRS-350)
Image: JPEG, GIF, PNG, and BMP (loading an animated GIF will freeze the Reader)
RSS: Limited to 20 featured blogs such as Engadget and Wired, no ability to add others and no auto-update (as of 2006-12-01)
The Reader supported TXT and RTF documents with the Latin character set only. Other character sets (such as Cyrillic) are not displayed correctly, but Cyrillic patches are available for Russian (and Bulgarian) users.
Sony Customer Support has confirmed that units sold in the US only work with Latin characters (as of 2007-03-02). On August 13, 2009, Sony announced that by the end of 2009 it would only sell EPUB books from the Sony Reader Store and would drop its proprietary DRM entirely in favor of Adobe's CS4 server-side copy protection.
Official software
MS Windows
The Sony Reader came bundled with Sony's proprietary software called Sony Reader Library (formerly eBook Library and Sony Connect). It requires MS Windows XP or higher (MS Windows Vista or 7), an 800 MHz processor, 128 MB of RAM, and 20 MB of hard disk space. This software does not work on the 64-bit versions of MS Windows XP. 64-bit MS Windows Vista and 7 have been supported since Sony eBook Library version 2.5 for all but the 500 models. In February 2014 Sony announced that it was transferring Reader Store content to the Kobo Store. In March 2014 the Sony Reader store was closed and account holders received an email with a link that enabled them to transfer their library to Kobo. Most titles transferred; however, some could not be transferred even though the titles were sold on Kobo. The transfer period ended in May.
Apple Mac OS X
Sony released an official Apple Mac OS X client for the Reader with the release of the PRS-300 and PRS-600. It is reported to work with the PRS-505, PRS-700, Reader Pocket Edition and Reader Touch Edition. The software now works under 10.7 Lion.
Linux and other OS
Sony eBook Library was not officially supported on Linux-based systems or other operating systems, although when the device is connected it grants access to its internal flash memory and any memory card slots as though they were USB Mass Storage devices (on all models except PRS-500s that have not received the free EPUB upgrade from Sony), allowing the user to transfer files directly. See the Third party tools section below for a third-party software utility that provides comprehensive support for MS Windows, Apple Mac OS X, and Linux. Note that if the device is locked with the optional 4-digit PIN, it will not mount via USB; the lock option must be disabled before the device can be mounted.
Third party tools
Several third-party tools exist for the Sony Reader. For example, the PRS Browser for Apple Mac OS X from Docudesk allows Apple Macintosh users to manage content on the Sony Reader. Users can also use the free software library and utility called Calibre to communicate with the Reader and manage their digital library. Calibre can convert many ebook formats as well as collate multiple HTML pages into a single ebook file with an automatically generated table of contents. Calibre can also manage RSS subscriptions, including scheduled pushes of newsfeeds to the reader. It has both a command-line and a graphical interface, and is available for MS Windows, Apple Mac OS X and Linux. Calibre, notably, also does not offer MS Windows 64-bit support for the PRS-500 model. Specializing in notes, annotations, bookmarks and other user input, noteworks allows this data to be listed, exported and otherwise handled after extraction from the device. In addition, Adobe Digital Editions can deliver DRM-locked PDF and ePub documents to the PRS-350, PRS-505 and PRS-700. The software is officially available for Windows and Mac OS. It can be run on Linux using Wine. After activating the reader on an officially supported platform, DRM-locked media can be downloaded and transferred to the reader on Linux as well.
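As a practical illustration of the kind of conversion workflow mentioned for Calibre above, the following is a minimal sketch (not taken from Sony or Calibre documentation) that drives Calibre's command-line ebook-convert tool from Python to turn an EPUB file into the Reader's BBeB (LRF) format. It assumes Calibre is installed with ebook-convert on the system PATH, and the file names are placeholders.

# Minimal sketch: convert an EPUB to LRF (BBeB) for an older Sony Reader
# by calling Calibre's ebook-convert command-line tool.
import shutil
import subprocess
import sys

def convert_epub_to_lrf(epub_path: str, lrf_path: str) -> None:
    if shutil.which("ebook-convert") is None:
        sys.exit("Calibre's ebook-convert was not found on the PATH.")
    # ebook-convert infers the output format from the output file extension.
    subprocess.run(["ebook-convert", epub_path, lrf_path], check=True)

if __name__ == "__main__":
    convert_epub_to_lrf("book.epub", "book.lrf")  # placeholder file names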
Alternative firmware
PRS+
The PRS+ project integrates seamlessly into the Sony UI and adds support for folder browsing, a dictionary, key binding, book history, custom EPUB styles, games (Sudoku, Chess, Mahjong, etc.) and localization (Catalan, German, Czech, English, French, Georgian, Russian, Spanish, and Simplified Chinese), and it has a built-in FB2 to EPUB converter.
Ebook applications
This runs as an independent application. It adds support for FB2 / CBR / CBZ formats and drops support for LRF. It is currently in a beta state.
Internal OS
The PRS-T1, PRS-T2 and PRS-T3 run a heavily modified version of the Android operating system, which Sony mentions in the Legal Notices installed on the device. Their predecessors run the MontaVista Linux Professional Edition operating system with Kinoma FSK, a JavaScript virtual machine optimized for devices with limited resources.
Sales
In December 2008, Sony disclosed that it had sold 300,000 units of its Reader Digital Book globally since the device launched in October 2006. According to an IDC study from March 2011, sales for all e-book readers worldwide grew to 12.8 million in 2010; 800,000 of those were Sony Readers.
See also
Comparison of e-book readers
Amazon Kindle
Barnes & Noble Nook
Calibre, an open source third-party software tool for managing a digital library, with support for conversion between common e-book formats. Created originally for Sony e-readers, it supports over 30 different brands and types of readers.
Kobo eReader
OverDrive, Inc., ebook borrowing services for public libraries
Tablet computer
References
External links
SONY Digital Reader
Sony products Dedicated e-book devices Electronic paper technology Linux-based devices
28966970
https://en.wikipedia.org/wiki/Pluggable%20Authentication%20Service
Pluggable Authentication Service
The Pluggable Authentication Service (PAS) allows an SAP user to be authenticated outside of SAP. When the user is authenticated by an external service, the PAS issues an SAP Logon Ticket or x.509 certificate, which is then used for subsequent authentication to SAP systems. The PAS gives companies the opportunity to use either a new or an existing external authentication system. In some cases, the PAS is used with an external single sign-on system that uses SAP Logon Tickets or x.509 certificates.
External authentication systems
Windows NT LAN Manager Authentication
Windows NT domain controller (i.e., user ID and password verification)
LDAP binding to a directory server (see the sketch below)
Authentication using the Secure Sockets Layer (SSL) protocol and x.509 certificates
HTTP header variables (mapping user IDs)
Authentication mechanism through the AGate
Prerequisites
One system must be configured as the ticket-issuing system.
Other SAP systems must be configured to accept logon tickets (and therefore the preconditions for logon ticket or non-logon ticket configuration, such as certificates, must be met beforehand).
Secure Network Communications must be used, because authentication occurs externally.
The ticket-issuing SAP system must be able to recognize the user's ID.
See also
Single sign-on
Secure Network Communications
SAP GUI
SAP Logon Ticket
External links
Pluggable Authentication Services for External Authentication Mechanisms
References
SAP SE
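The checks listed above under "External authentication systems" are typically small verification routines on the web server side. As a rough, hypothetical illustration of the LDAP-binding option (this is not SAP's actual PAS interface), the following Python sketch verifies a user ID and password by attempting a simple LDAP bind; in a real deployment, a successful check would then lead to the PAS issuing the SAP Logon Ticket or x.509 certificate. The server address, DN layout, and the use of the ldap3 library are all assumptions made for the example.

# Hypothetical sketch of the "LDAP binding to a directory server" check;
# only the credential verification step is shown, not SAP's PAS itself.
from ldap3 import ALL, Connection, Server

def verify_credentials(user_id: str, password: str) -> bool:
    # Placeholder directory host and DN layout; adjust for a real directory.
    server = Server("ldap://directory.example.com", get_info=ALL)
    user_dn = f"uid={user_id},ou=people,dc=example,dc=com"
    try:
        # A successful simple bind means the directory accepted the password.
        conn = Connection(server, user=user_dn, password=password)
        return conn.bind()
    except Exception:
        return False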
2113111
https://en.wikipedia.org/wiki/PerkinElmer
PerkinElmer
PerkinElmer, Inc., previously styled Perkin-Elmer, is an American global corporation focused on the business areas of diagnostics, life science research, food, environmental and industrial testing. Its capabilities include detection, imaging, informatics, and service. PerkinElmer produces analytical instruments, genetic testing and diagnostic tools, medical imaging components, software, instruments, and consumables for multiple end markets. PerkinElmer is part of the S&P 500 Index and operates in 150 countries.
History
Founding
Richard Perkin was attending the Pratt Institute in Brooklyn to study chemical engineering, but left after a year to try his hand on Wall Street. Still interested in the sciences, he gave public lectures on various topics. Charles Elmer ran a firm that supplied court reporters and was nearing retirement when he attended one of Perkin's lectures on astronomy held at the Brooklyn Institute of Arts and Sciences. The two struck up a friendship over their shared interest in astronomy, and eventually came up with the idea of starting a firm to produce precision optics. Perkin raised US$15,000 from his relatives, while Elmer added US$5,000, and the firm was initially set up as a partnership on 19 April 1937. Initially, they worked from a small office in Manhattan, but soon opened a production facility in Jersey City. They incorporated the growing firm on 13 December 1939. A further move to Glenbrook in Connecticut in 1941 was quickly followed by another move to Norwalk, Connecticut, where the company remained until 2000. The outbreak of World War II led to significant expansion as the company produced optics for range finders, bombsights, and reconnaissance systems. This work led to the U.S. Navy awarding the company the first "E" for Excellence award in 1942. Perkin-Elmer retained a strong presence in the military field through the 1960s and at the same time was significantly involved with OAO-3, a 36-inch ultraviolet space telescope, and with Skylab; its major contribution to the Apollo program was the CO2 sensor that saved the astronauts during the Apollo 13 failure. The company was a primary supplier of the optical systems used in many reconnaissance platforms, first in aircraft and high-altitude balloons, and then in reconnaissance satellites. A significant advance was 1955's Transverse Panoramic Camera, which took images on wide frames that provided single-frame coverage from horizon to horizon from an aircraft flying at 40,000 ft altitude. Such systems remained a major part of the company's income, capped by the installation of laser retroreflectors on the Moon as part of the Apollo 11 mission. Elmer died at age 83 in 1954, and the company began trading shares over the counter. The company was listed on the New York Stock Exchange on 13 December 1960. Perkin remained as president and CEO until June 1961, when Robert Lewis, previously of Argus Camera and Sylvania Electric Products, took over these roles. Perkin remained the chairman of the board until his death in 1969.
Semiconductor manufacturing
In 1967 the U.S. Air Force approached Perkin-Elmer, asking if it could produce an all-optical "masking" system for semiconductor fabrication. Previous systems used a pattern, the "mask", which was pressed onto the surface of the silicon wafer as part of the photolithography process. Small bits of dirt or photoresist would stick to the mask and ruin the patterning for subsequent chips, and it was not uncommon for the vast majority of the chips from a given wafer to not work properly.
The Air Force, which by the late 1960s was highly reliant on integrated circuits, desired a more reliable system. Perkin-Elmer responded with the Microprojector, which was, in effect, a large photocopier system. The mask was placed in a holder and never touched the surface of the chip; instead, the image was projected onto the surface. Making this work required a complex 16-element lens system that focused only a single frequency of light onto the mask; the rest of the light from the 1,000-watt mercury-vapor lamp had to be filtered out. Harold Hemstreet was convinced it would be possible to simplify the concept, and Abe Offner began the development of a system using mirrors instead of lenses, which did not suffer from the multispectral focusing problems seen in lenses. The result of this research was the Projection Scanning Aligner, or Micralign, which made chip making an assembly-line task and improved the proportion of working chips from perhaps 10% to 70% overnight. Chip prices plummeted as a result, with examples like the MOS 6502 selling for about US$20 while the previous generation of designs like the Motorola 6800 sold for around US$250. The Micralign was so successful that Perkin-Elmer was catapulted to become the largest single vendor in the chip space within three years. In spite of this success, the company was largely a has-been by the 1980s due to its late response to the introduction of the stepping aligner, which allowed a single small mask to be stepped across the wafer, rather than requiring a single large mask covering the entire wafer. The company never regained its lead, and sold the division to The Silicon Valley Group.
Lab equipment
In the early 1990s, the company partnered with Cetus Corporation (and later Hoffmann-La Roche) to pioneer the polymerase chain reaction (PCR) equipment industry. The analytical-instruments business also operated from 1954 to 2001 in Germany, through Bodenseewerk Perkin-Elmer GmbH in Überlingen on Lake Constance, and in England (Perkin Elmer Ltd) at Beaconsfield in Buckinghamshire.
Computer Systems Division
Perkin-Elmer was involved in computer manufacture for a time. The Perkin-Elmer Computer Systems Division was formed through the purchase of Interdata, Inc., an independent computer manufacturer, in 1973–1974 for some US$63 million. This merger made Perkin-Elmer's annual sales rise to over US$200 million. The division was also known as Perkin-Elmer's Data Systems Group. The 32-bit computers were very similar to an IBM System/370, but ran the OS/32MT operating system. The Wollongong Group provided the commercial version of the Unix port to the Interdata 7/32 hardware, known as Edition 7 Unix. The port was originally done by the University of Wollongong in New South Wales, Australia, and was the first UNIX port to hardware other than the Digital Equipment Corporation PDP family. By 1982, the Wollongong Group Edition 7 Unix and Programmer's Workbench (PWB) were available on models such as the Perkin-Elmer 3210 and 3240 minicomputers. In 1985, the computing division of Perkin-Elmer was spun off as Concurrent Computer Corporation.
1999
Modern PerkinElmer traces its history back to a merger between divisions of what had been two S&P 500 companies, EG&G Inc. of Wellesley, Massachusetts, and Perkin-Elmer of Norwalk, Connecticut. On May 28, 1999, the non-government side of EG&G Inc.
purchased the Analytical Instruments Division of Perkin-Elmer, its traditional business segment, for US$425 million, also assuming the Perkin-Elmer name and forming the new PerkinElmer company, with new officers and a new Board of Directors. At the time, EG&G made products for diverse industries including automotive, medical, aerospace and photography. The old Perkin-Elmer Board of Directors and Officers remained at that reorganized company under its new name, PE Corporation. It had been the Life Sciences division of Perkin-Elmer, and its two component tracking stock business groups, Celera Genomics and PE Biosystems, were centrally involved in the highest-profile biotechnology event of the decade, the intense race against the Human Genome Project consortium, which then resulted in the genomics segment of the technology bubble. Perkin-Elmer purchased the Boston operations of NEN Life Sciences in 2001.
Recently
In 1992, the company merged with Applied Biosystems. In 1997 it merged with PerSeptive Biosystems. On July 14, 1999, the new analytical instruments maker PerkinElmer cut 350 jobs, or 12%, in its cost reduction reorganization. In 2006, PerkinElmer sold off the Fluid Sciences division for approximately US$400 million; the aim of the selloff was to increase the strategic focus on its higher-growth health sciences and photonic markets. Following on from the selloff, a number of small businesses were acquired, including Spectral Genomics, Improvision, Evotec-Technologies, Euroscreen, ViaCell, and Avalon Instruments. The brand "Evotec-Technologies" remains the property of Evotec, the former owner company; PerkinElmer had a license to use the brand until the end of 2007. PerkinElmer has continued to expand its interest in medicine with the acquisition of clinical laboratories. In July 2006, it acquired NTD Labs, located on Long Island, New York; the laboratory specializes in prenatal screening during the first trimester of pregnancy. In 2007, it purchased ViaCell, Inc. for US$300 million, which included its offices in Boston and a cord blood storage facility in Kentucky near Cincinnati. The company was renamed ViaCord. In March 2008, PerkinElmer purchased Pediatrix Screening (formerly Neo Gen Screening), a laboratory located in Bridgeville, Pennsylvania, specializing in screening newborns for various inborn errors of metabolism such as phenylketonuria, hypothyroidism, and sickle-cell disease. It renamed the laboratory PerkinElmer Genetics, Inc. In May 2011, PerkinElmer announced the signing of an agreement to acquire CambridgeSoft, and the successful acquisition of ArtusLabs. In September 2011, PerkinElmer bought Caliper Life Sciences for US$600 million. In January 2016, PerkinElmer acquired the Swedish firm Vanadis Diagnostics. In January 2017, the company announced it would acquire the Indian in vitro diagnostics company Tulip Diagnostics. In May 2017, the company acquired Euroimmun Medical Laboratory Diagnostics for approximately US$1.3 billion. In 2018, the company acquired the Australian biotech company RHS Ltd., the Chinese manufacturer of analytical instruments Shanghai Spectrum Instruments Co. Ltd., and the France-based company Cisbio Bioassays, which specializes in diagnostics and drug discovery solutions. In November 2020, PerkinElmer announced it would acquire Horizon Discovery Group for around US$383 million. In March 2021, PerkinElmer announced that it had completed its acquisition of Oxford Immunotec Global PLC (Oxford Immunotec).
In May of the same year, the business announced it would purchase Nexcelom Bioscience for $260 million and Immunodiagnostic Systems Holdings PLC for $155 million. In June the company announced it would acquire SIRION Biotech, a specialist in viral vector gene delivery methods. In July the business announced it would acquire BioLegend for $5.25 billion. Acquisition history PerkinElmer (Est. 1935, modern company formed from EG&G Inc. purchase of Perkin-Elmer, Analytical Instruments Division) Applied Biosystems (Merged 1992) PerSeptive Biosystems. (Acq. 1997) Spectral Genomics Improvision Evotec-Technologies Euroscreen ViaCell Avalon Instruments NTD Labs (Acq. 2006) ViaCell, Inc. (Acq. 2007) Pediatrix Screening (Acq. 2008) CambridgeSoft (Acq. 2011) ArtusLabs (Acq. 2011) Caliper Life Sciences (Acq. 2011) Zymark (Acq. 2003) NovaScreen Biosciences Corporation (Acq. 2005) Xenogen Corporation (Acq. 2006) Xenogen Biosciences Cambridge Research & Instrumentation Inc. (Acq. 2010) Xenogen Corporation (Acq. 2006) Xenogen Corporation (Acq. 2006) Vanadis Diagnostics (Acq. 2016) Tulip Diagnostics (Acq. 2017) Euroimmun Medical Laboratory Diagnostics (Acq. 2017) RHS Ltd (Acq. 2018) Shanghai Spectrum Instruments Co. Ltd (Acq. 2018) Cisbio Bioassays (Acq. 2018) Horizon Discovery Group (Acq. 2020) Oxford Immunotec Global PLC (Acq. 2021) Nexcelom Bioscience (Acq. 2021) Immunodiagnostic Systems Holdings PLC (Acq. 2021) SIRION Biotech (Acq. 2021) BioLegend (Acq. 2021) BioLegend Japan KK BioLegend UK Ltd BioLegend GmbH Programs Hubble optics project Perkin-Elmer's Danbury Optical System unit was commissioned to build the optical components of the Hubble Space Telescope. The construction of the main mirror began in 1979 and completed in 1981. The polishing process ran over budget and behind schedule, producing significant friction with NASA. Due to a miscalibrated null corrector, the primary mirror was also found to have a significant spherical aberration after reaching orbit on STS-31. Perkin-Elmer's own calculations and measurements revealed the primary mirror's surface discrepancies, but the company chose to withhold that data from NASA. A NASA investigation heavily criticized Perkin-Elmer for management failings, disregarding written quality guidelines, and ignoring test data that revealed the miscalibration. Corrective optics were installed on the telescope during the first Hubble service and repair mission STS-61. The correction, Corrective Optics Space Telescope Axial Replacement, was applied entirely to the secondary mirror and replaced existing instrumentation; the aberration of the primary mirror remained uncorrected. The company agreed to pay US$15 million, essentially forgoing its fees in polishing the mirror, to avoid a threatened liability lawsuit under the False Claims Act by the Federal government. Hughes Aircraft, which acquired the Danbury Optical System unit one month after the launch of the telescope, paid US$10 million. The Justice Department asserted that the companies should have known about the flawed testing. Trade group Aerospace Industries Association protested when concerns were raised in the aerospace industry that aerospace companies might be held liable for failed equipment. KH-9 Hexagon Perkin-Elmer built the optical systems for the KH-9 Hexagon series of spy satellites at a facility in Danbury, Connecticut. 
In the 1970s, an aerial panoramic camera lens was capable of recording the entire state of Pennsylvania in two flyovers, with resolution that enabled one to count the autos on the Pennsylvania Turnpike. References External links PerkinElmer Announces New Business Alignment Focused on Improving Human and Environmental Health SEC filings for PerkinElmer, Inc. Photographs from the Perkin-Elmer-Applera Collection Science History Institute Digital Collections (Extensive collection of print photographs and slides depicting the staff, facilities, and instrumentation of the Perkin-Elmer Corporation predominately dating from the 1960s and 1970s) Companies listed on the New York Stock Exchange Design companies established in 1931 Technology companies of the United States Life science companies based in Massachusetts Instrument-making corporations Companies based in Waltham, Massachusetts Technology companies established in 1931
1472548
https://en.wikipedia.org/wiki/Horse%20colic
Horse colic
Colic in horses is defined as abdominal pain, but it is a clinical symptom rather than a diagnosis. The term colic can encompass all forms of gastrointestinal conditions which cause pain as well as other causes of abdominal pain not involving the gastrointestinal tract. The most common forms of colic are gastrointestinal in nature and are most often related to colonic disturbance. There are a variety of different causes of colic, some of which can prove fatal without surgical intervention. Colic surgery is usually an expensive procedure as it is major abdominal surgery, often with intensive aftercare. Among domesticated horses, colic is the leading cause of premature death. The incidence of colic in the general horse population has been estimated between 4 and 10 percent over the course of the average lifespan. Clinical signs of colic generally require treatment by a veterinarian. The conditions that cause colic can become life-threatening in a short period of time. Pathophysiology Colic can be divided broadly into several categories: excessive gas accumulation in the intestine (gas colic) simple obstruction strangulating obstruction non-strangulating infarction inflammation of the gastrointestinal tract (enteritis, colitis) or the peritoneum (peritonitis) ulceration of the gastrointestinal mucosa These categories can be further differentiated based on location of the lesion and underlying cause (See Types of colic). Simple obstruction This is characterised by a physical obstruction of the intestine, which can be due to impacted food material, stricture formation, or foreign bodies. The primary pathophysiological abnormality caused by this obstruction is related to the trapping of fluid within the intestine oral to the obstruction. This is due to the large amount of fluid produced in the upper gastrointestinal tract, and the fact that this is primarily re-absorbed in parts of the intestine downstream from the obstruction. The first problem with this degree of fluid loss from circulation is one of decreased plasma volume, leading to a reduced cardiac output, and acid-base disturbances. The intestine becomes distended due to the trapped fluid and gas production from bacteria. It is this distension, and subsequent activation of stretch receptors within the intestinal wall, that leads to the associated pain. With progressive distension of the intestinal wall, there is occlusion of blood vessels, firstly the less rigid veins, then arteries. This impairment of blood supply leads to hyperemia and congestion, and ultimately to ischaemic necrosis and cellular death. The poor blood supply also has effects on the vascular endothelium, leading to an increased permeability which first leaks plasma and eventually blood into the intestinal lumen. In the opposite fashion, gram-negative bacteria and endotoxins can enter the bloodstream, leading to further systemic effects. Strangulating obstruction Strangulating obstructions have all the same pathological features as a simple obstruction, but the blood supply is immediately affected. Both arteries and veins may be affected immediately, or progressively as in simple obstruction. Common causes of strangulating obstruction are intussusceptions, torsion or volvulus, and displacement of intestine through a hole, such as a hernia, a mesenteric rent, or the epiploic foramen. Non-strangulating infarction In a non-strangulating infarction, blood supply to a section of intestine is occluded, without any obstruction to ingesta present within the intestinal lumen. 
The most common cause is infection with Strongylus vulgaris larvae, which primarily develop within the cranial mesenteric artery. Inflammation or ulceration of the gastrointestinal tract Inflammation along any portion of the GI tract can lead to colic. This leads to pain and possibly stasis of peristalsis (Ileus), which can cause excessive accumulation of fluid in the gastrointestinal tract. This is a functional rather than mechanical blockage of the intestine, but like the mechanical blockage seen with simple obstructions, it can have serious effects including severe dehydration. Inflammation of the bowel may lead to increased permeability and subsequent endotoxemia. The underlying cause of inflammation may be due to infection, toxin, or trauma, and may require special treatment in order to resolve the colic. Ulceration of the mucosal surface occurs very commonly in the stomach (gastric ulceration), due to damage from stomach acid or alteration in protective mechanisms of the stomach, and is usually not life-threatening. The right dorsal colon may also develop ulceration, usually secondary to excessive NSAID use, which alters the homeostatic balance of prostaglandins that protect the mucosa. Types This list of types of colic is not exhaustive but details some of the types which may be encountered. Gas and spasmodic colic Gas colic, also known as tympanic colic, is the result of gas buildup within the horse's digestive tract due to excessive fermentation within the intestines or a decreased ability to move gas through it. It is usually the result of a change in diet, but can also occur due to low dietary roughage levels, parasites (22% of spasmodic colics are associated with tapeworms), and anthelminthic administration. This gas buildup causes distention and increases pressure in the intestines, causing pain. Additionally, it usually causes an increase in peristaltic waves, which can lead to painful spasms of the intestine, producing subsequent spasmodic colic. The clinical signs of these forms of colic are generally mild, transient, and respond well to spasmolytic medications, such as buscopan, and analgesics. Gas colics usually self-correct, but there is the risk of subsequent torsion (volvulus) or displacement of the bowel due to gas distention, which causes this affected piece of bowel to rise upward in the abdomen. Abdominal distention may occasionally be seen in adult horses in the flank region, if the cecum or large colon is affected. Foals, however, may show signs of gas within the small intestines with severe abdominal distention. Impaction Pelvic flexure impaction This is caused by an impaction of food material (water, grass, hay, grain) at a part of the large bowel known as the pelvic flexure of the left colon where the intestine takes a 180 degree turn and narrows. Impaction generally responds well to medical treatment, usually requiring a few days of fluids and laxatives such as mineral oil, but more severe cases may not recover without surgery. If left untreated, severe impaction colic can be fatal. The most common cause is when the horse is on box rest and/or consumes large volumes of concentrated feed, or the horse has dental disease and is unable to masticate properly. This condition could be diagnosed on rectal examination by a veterinarian. Impactions are often associated with the winter months because horses do not drink as much water and eat drier material (hay instead of grass), producing drier intestinal contents that are more likely to get stuck. 
Ileal impaction and ileal hypertrophy The ileum is the last part of the small intestine that ends in the cecum. Ileal impaction can be caused by obstruction of ingesta. Coastal Bermuda hay is associated with impactions in this most distal segment of the small intestine, although it is difficult to separate this risk factor from geographic location, since the southeastern United States has a higher prevalence of ileal impaction and also has regional access to coastal Bermuda hay. Other causes can be obstruction by ascarids (Parascaris equorum), usually occurring at 3–5 months of age right after deworming, and tapeworms (Anoplocephala perfoliata), which have been associated with up to 81% of ileal impactions (See Ascarids). Horses show intermittent colic, with moderate to severe signs and with time, distended small intestinal loops on rectal. Although most ileal impactions will sometimes pass without intervention, those present for 8–12 hours will cause fluid to back up, leading to gastric reflux, which is seen in approximately 50% of horses that require surgical intervention. Diagnosis is usually made based on clinical signs, presence of reflux, rectal exam, and ultrasound. Often the impaction can not be felt on rectal due to distended small intestinal loops that block the examiner. Those impactions that are unresponsive to medical management, which includes IV fluids and removal of reflux, may be treated using a single injection into the ileum with 1 liter of carboxymethylcellulose, and then massaging the ileum. This allows the impaction to be treated without actually cutting into the ileum. Prognosis for survival is good. Ileal hypertrophy occurs when the circular and longitudinal layers of the ileal intestinal wall hypertrophy, and can also occur with jejunal hypertrophy. The mucosa remains normal, so malabsorption is not expected to occur in this disease. Ileal hypertrophy may be idiopathic, with current theories for such cases including neural dysfunction within the intestinal wall secondary to parasite migration, and increased tone of the ileocecal valve which leads to hypertrophy of the ileum as it tries to push contents into the cecum. Hypertrophy may also occur secondary to obstruction, especially those that have had surgery for an obstruction that required an anastomosis. Hypertrophy gradually decreases the size of the lumen, resulting in intermittent colic, and in approximately 45% of cases includes weight loss of 1–6 month duration and anorexia. Although rectal examination may display a thickened ileal wall, usually the diagnosis is made at surgery, and an ileocecal or jejunocecal anastomosis is made to allow intestinal contents to bypass the affected area. If surgery and bypass is not performed, there is a risk of rupture, but prognosis is fair with surgical treatment. Sand impaction This is most likely to occur in horses that graze sandy or heavily grazed pastures leaving only dirt to ingest. Foals, weanlings, and yearlings are most likely to ingest sand, and are therefore most commonly seen with sand colic. The term sand also encompasses dirt. The ingested sand or dirt most commonly accumulates in the pelvic flexure, but may also occur in the right dorsal colon and the cecum of the large intestines. The sand can cause colic signs similar to other impactions of the large colon, and often causes abdominal distention As the sand or dirt irritates the lining of the bowel it can cause diarrhea. 
The weight and abrasion of the sand or dirt causes the bowel wall to become inflamed and can cause a reduction in colonic motility and, in severe cases, leads to peritonitis. Diagnosis is usually made by history, environmental conditions, auscultation of the ventral abdomen, radiographs, ultrasound, or fecal examination (See Diagnosis). Historically, medical treatment of the problem is with laxatives such as liquid paraffin or oil and psyllium husk. More recently veterinarians treat cases with specific synbiotic (pro and prebiotic) and psyllium combinations. Psyllium is the most effective medical treatment. It works by binding to the sand to help remove it, although multiple treatments may be required. Mineral oil is mostly ineffective since it floats on the surface of the impaction, rather than penetrating it. Horses with sand or dirt impaction are predisposed to Salmonella infection and other GI bacteria, so antibiotics are often added to help prevent infection. Medical management usually resolves the colic, but if improvement doesn't occur within a few hours then surgery must be performed to flush the colon of any sand, which procedure that has a 60–65% survival rate. Horses that are not treated, or treated too late after the onset of clinical signs, are at risk of death. Horses should not be fed directly on the ground in areas where sand, dirt and silt are prevalent, although small amounts of sand or dirt may still be ingested by grazing. Management to reduce sand intake and prophylactic treatments with sand removal products are recommended by most veterinarians. Such prophylaxis includes feeding a pelleted psyllium for one week every 4–5 weeks. Longer duration of treatment will result in gastrointestinal flora changes and the psyllium to be broken down and ineffective for sand clearance. Other methods include feeding the horse before turnout, and turning the horses out in the middle of the day so they are more likely to stand in the shade rather than graze. Cecal impaction Only 5% of large intestinal impactions at referral hospital involve the cecum. Primary cecal impactions usually consist of dry feed material, with the horse slowly developing clinical signs over several days. Secondary cecal impactions may occur post-surgery, orthopedic or otherwise, and the cecum does not function properly. Horses usually show clinical signs 3–5 days post general anesthesia, including decreased appetite, decreased manure production, and gas in the cecum which can be auscultated. The cecum quickly distends due to fluid and gas accumulation, often leading to rupture within 24–48 hours if not corrected. This impaction may be missed since decreased manure production can be attributed secondarily to surgery, and often rupture occurs before severe signs of pain. Horses are most at risk for this type of impaction if surgery is greater than 1 hour in length, or if inadequate analgesia is provided postoperatively. Diagnosis is usually made by rectal palpation. Treatment includes fluid therapy and analgesics, but surgery is indicated if there is severe distention of the cecum or if medical therapy does not improve the situation. Surgery includes typhlotomy, and although cecal bypass has been performed in the past to prevent reoccurrence, a recent study suggests it is not necessary. Surgery has a good prognosis, although rupture can occur during surgical manipulation. The cause of cecal impactions are not known. 
Cecal impaction should be differentiated from large colon impaction via rectal examination, since cecal impaction has a high risk of rupture even before severe pain develops. Overall prognosis is 90%, regardless of medical or surgical treatment, but rupture does occur, often with no warning.
Gastric impaction
Gastric impactions are relatively rare, and occur when food is not cleared at the appropriate rate. They are most commonly associated with ingestion of foods that swell after eating or feeds that are coarse (bedding or poor quality roughage), poor dental care, poor mastication, inadequate drinking, ingestion of a foreign object, and alterations in the normal function of the stomach. Persimmons, which form a sticky gel in the stomach, and haylage have both been associated with it, as have wheat, barley, mesquite beans, and beet pulp. Horses usually show signs of mild, chronic colic that is unresponsive to analgesics and may include signs such as dysphagia, ptyalism, bruxism, fever, and lethargy, although severe colic signs may occur. Signs of shock may be seen if gastric rupture has occurred. Usually, the impaction must be quite large before it produces symptoms, and it may be diagnosed via gastroscopy or ultrasound, although rectal examinations are unhelpful. Persimmon impaction is treated with infusions of Coca-Cola, while other gastric impactions often resolve with enteral fluids. Quick treatment generally produces a favorable prognosis.
Small colon impaction
Small colon impactions represent a small number of colics in the horse, and are usually caused by obstruction from fecaliths, enteroliths, and meconium. Standard colic signs (pawing, flank watching, rolling) are seen in 82% of horses, with diarrhea (31%), anorexia (30%), straining (12%), and depression (11%) seen less often, and rectal examination will reveal firm loops of small colon or a palpable obstruction in the rectum. Impactions are most common in miniature horses, possibly because they do not masticate their feed as well, and during the fall and winter. Medical management includes the aggressive use of fluids, laxatives and lubricants, and enemas, as well as analgesics and anti-inflammatories. However, these impactions often require surgical intervention, and the surgeon will empty the colon either by enterotomy or by lubricants and massage. Surgical intervention usually results in a longer recovery time at the hospital. Prognosis is very good: horses treated surgically had a 91% rate of survival with return to athletic function, while 89% of the medically managed horses returned to previous use.
Large colon impaction
Large colon impactions typically occur at the pelvic flexure and right dorsal colon, two areas where the lumen of the intestine narrows. Large colon impactions are most frequently seen in horses that have recently had a sudden decrease in exercise, such as after a musculoskeletal injury. They are also associated with the practice of twice-daily feeding of grain meals, which causes a short-lived but significant secretion of fluid into the lumen of the intestine, resulting in a 15% decrease in plasma volume (hypovolemia of the circulatory system) and the subsequent activation of the renin–angiotensin–aldosterone system. Aldosterone secretion activates absorption of fluid from the colon, decreasing the water content of the ingesta and increasing the risk of impaction.
Amitraz has also been associated with large colon impaction, due to alterations in motility and retention of intestinal contents, which cause further absorption of water and dehydration of ingesta. Other possible factors include poor dental care, coarse roughage, dehydration, and limited exercise. Horses with a large colon impaction usually have mild signs that slowly worsen if the impaction does not resolve, and severe signs can eventually develop. Diagnosis is often made by rectal palpation of the mass, although this is not always accurate since a portion of the colon is not palpable on rectal examination. Additional sections of intestine may be distended if there is fluid backup. Manure production decreases, and any manure passed is usually firm, dry and mucus-covered. Horses are treated with analgesics, fluid therapy, mineral oil, dioctyl sodium sulfosuccinate (DSS), and/or Epsom salts. Analgesics usually can control the abdominal discomfort, but may become less efficacious over time if the impaction does not resolve. Persistent impactions may require fluids administered both intravenously and orally via nasogastric tube, at a rate 2–4 times the animal's maintenance requirement. Feed is withheld. Surgery to remove the impaction via enterotomy of the pelvic flexure is recommended for horses that do not improve or become very painful, or those that have large amounts of gas distention. Approximately 95% of horses that undergo medical management, and 58% of surgical cases, survive.
Enteroliths and fecaliths
Enteroliths in horses are round 'stones' of mineral deposits, usually of ammonium magnesium phosphate (struvite) but sometimes of magnesium vivianite and some amounts of sodium, potassium, sulfur and calcium, which develop within the horse's gastrointestinal tract. They can form around a piece of ingested foreign material, such as a small nidus of wire or sand (similar to how an oyster forms a pearl). When they move from their original site they can obstruct the intestine, usually in the right dorsal and transverse colon, but rarely in the small colon. They may also cause mucosal irritation or pain when they move within the gastrointestinal tract. Enteroliths are not a common cause of colic, but are known to have a higher prevalence in states with sandy soil or where an abundance of alfalfa hay is fed, such as California, a state where 28% of surgical colics are due to enteroliths. Alfalfa hay is thought to increase the risk due to the high protein content of the hay, which would likely elevate ammonia nitrogen levels within the intestine. They may be more common in horses with diets high in magnesium, and are also seen more often in Arabians, Morgans, American Saddlebreds, miniature horses, and donkeys, and usually occur in horses older than four years of age. Horses with enteroliths typically have chronic, low-grade, recurring colic signs, which may lead to acute colic and distention of the large colon after occlusion of the lumen occurs. These horses may also have a history of passing enteroliths in their manure. The level of pain is related to the degree of luminal occlusion. Abdominal radiographs can confirm the diagnosis, but smaller enteroliths may not be visible. In rare instances, enteroliths may be palpated on rectal examination, usually if they are present in the small colon. Once a horse is diagnosed with colic due to an enterolith, surgery is necessary to remove it, usually by pelvic flexure enterotomy and sometimes an additional right dorsal colon enterotomy, and to fully resolve the signs of colic.
Horses will usually present a round enterolith if it is the only one present, while multiple enteroliths will usually have flat sides, a clue to the surgeon to look for more stones. The main risk of surgery is rupture of the colon (15% of cases), and 92% of horses that are recovered survive to at least one year from their surgery date. Fecaliths are hard formations of ingest that obstruct the GI tract, and may require surgery to resolve. These are most commonly seen in miniature horses, ponies, and foals. Displacement A displacement occurs when a portion of the large colon—usually the pelvic flexure—moves to an abnormal location. There are four main displacements described in equine medicine: Left dorsal displacement (nephrosplenic entrapment): the pelvic flexure moves dorsally towards the nephrosplenic space. This space is found between the spleen, the left kidney, the nephrosplenic ligament (which runs between the spleen and kidney), and the body wall. In some cases, the bowel become entrapped over the nephrosplenic ligament. LDD accounts for 6-8% of all colics. Right dorsal displacement: the colon moves between the cecum and body wall. The pelvic flexure retroflexes towards the diaphragm The colon develops a 180-degree volvulus, which may or may not occlude the vasculature of the organ. The cause of displacement is not definitively known, but one explanation is that the bowel becomes abnormally distended with gas (from excessive fermentation of grain, a change in the microbiota secondary to antibiotic use, or a buildup of gas secondary to impaction) which results in a shift in the bowel to an abnormal position. Because much of the bowel is not anchored to the body wall, it is free to move out of position. Displacement is usually diagnosed using a combination of findings from the rectal exam and ultrasonography. Many displacements (~96% of LDD, 64% of RDD) resolve with medical management that includes fluids (oral or intravenous) to rehydrate the horse and soften any impaction that may be present. Systemic analgesics, antispasmodics, and sedation are often used to keep the horse comfortable during this time. Horses with left dorsal displacement are sometimes treated with exercise and/or phenylephrine—a medication that causes contracture of the spleen and may allow the bowel to slip off the nephrosplenic ligament. At times anesthesia and a rolling procedure, in which the horse is placed in left lateral recumbency and rolled to right lateral recumbency while jostling, can also be used to try to shift the colon off of the nephrosplenic ligament. Displacements that do not respond to medical therapy require surgery, which generally has a very high success rate (80–95%). Reoccurrence can occur with all types of displacements: 42% of horses with RDD, 46% of horses with retroflexion, 21% of those with volvulus, and 8% of those with LDD had reoccurrence of colic. LDD may be prevented by closing the nephrosplenic space with sutures, although this does not prevent other types of displacements from occurring in that same horse. Torsion and volvulus A volvulus is a twist along the axis of the mesentery, a torsion is a twist along the longitudinal axis of the intestine. Various parts of the horse's gastrointestinal tract may twist upon themselves. It is most likely to be either small intestine or part of the colon. Occlusion of the blood supply means that it is a painful condition causing rapid deterioration and requiring emergency surgery. 
Volvulus of the large colon usually occurs where the mesentery attaches to the body wall, but may also occur at the diaphragmatic or sternal flexures, with rotations up to 720 degrees reported. It is most commonly seen in postpartum mares, usually presents with severe signs of colic that are refractory to analgesic administration, and horses often lie in dorsal recumbency. Abdominal distention is common due to strangulation and rapid engorgement of the intestine with gas, which then can lead to dyspnea as the growing bowel pushes against the diaphragm and prevents normal ventilation. Additionally, compression can place pressure on the caudal vena cava, leading to pooling of blood and hypovolemia. However, horses may not have a high heart rate, presumably due to increased vagal tone. Rectal palpation will demonstrate a severely gas distended colon, and the examiner may not be able to push beyond the brim of the pelvis due to the obstruction. The colon may be irreversibly damaged in as little as 3–4 hours from the initial time of the volvulus, so immediate surgical correction is required. The surgeon works to correct the volvulus and then removes any damaged colon. 95% of the colon may be resected, but often the volvulus damages more than this amount, requiring euthanasia. Plasma lactate levels can help predict survival rates, with an increased survival seen in horses with a lactate below 6.0 mmol/L. Prognosis is usually poor, with a survival rate of approximately 36% of horses with a 360 degree volvulus, and 74% of those with a 270 degree volvulus, and a reoccurrence rate of 5–50%. Complications post-surgery include hypoproteinemia, endotoxic shock, laminitis, and DIC. Small intestinal volvulus is thought to be caused by a change in local peristalsis, or due to a lesion that the mesentery may twist around (such as an ascarid impaction), and usually involves the distal jejunum and ileum.w It is one of the most common causes of small intestinal obstruction in foals, possibly because of a sudden change to a bulkier foodstuff. Animals present with acute and severe signs of colic, and multiple distended loops of small intestine, usually seen radiographically in a foal. Small intestinal volvulus often occurs secondary to another disease process in adult horses, where small intestinal obstruction causes distention and then rotation around the root of the mesentery. Surgery is required to resect nonviable sections of bowel, and prognosis is correlated to the length of bowel involved, with animals with greater than 50% of small intestinal involvement having a grave prognosis. Intussusception Intussusception is a form of colic in which a piece of intestine "telescopes" within a portion of itself because a section is paralyzed, so the motile section pushes itself into the non-motile section. It most commonly occurs at the ileocecal junction and requires urgent surgery. It is almost always associated with parasitic infections, usually tapeworms, although small masses and foreign bodies may also be responsible, and is most common in young horses usually around 1 year of age. Ileocecal intussusception may be acute, involving longer (6–457 cm) segments of bowel, or chronic involving shorter sections (up to 10 cm in length). Horses with the acute form of colic usually have a duration of colic less than 24 hours long, while chronic cases have mild but intermittent colic. Horses with the chronic form tend to have better prognosis. Rectal examination reveals a mass at the base of the cecum in 50% of cases. 
Ultrasound reveals a very characteristic "target" pattern on cross-section. Abdominocentesis results can vary, since the strangulated bowel is trapped within the healthy bowel, but there are usually signs of obstruction, including reflux and multiple loops of distended small intestine felt on rectal. Surgery is required for intussusception. Reduction of the area is usually ineffective due to swelling, so jejunojejunal intussusceptions are resected and ileocolic intussusceptions are resected as far distally as possible and a jejunocecal anatomosis is performed. Entrapment Epiploic foramen entrapment On rare occasions, a piece of small intestine (or rarely colon) can become trapped through the epiploic foramen into the omental bursa. The blood supply to this piece of intestine is immediately occluded and surgery is the only available treatment. This type of colic has been associated with cribbers, possibly due to changes in abdominal pressure, and in older horses, possibly because the foramen enlarges as the right lobe of the liver atrophies with age, although it has been seen in horses as young as 4 months old. Horses usually present with colic signs referable to small intestinal obstruction. During surgery, the foramen can not be enlarged due to the risk of rupture of the vena cava or portal vein, which would result in fatal hemorrhage. Survival is 74–79%, and survival is consistently correlated with abdominocentesis findings prior to surgery. Mesenteric rent entrapment The mesentery is a thin sheet attached to the entire length of intestine, enclosing blood vessels, lymph nodes, and nerves. Occasionally, a small rent (hole) can form in the mesentery, through which a segment of bowel can occasionally enter. As in epiploic foramen entrapment, the bowel first enlarges, since arteries do not occlude as easily as veins, which causes edema (fluid buildup). As the bowel enlarges, it becomes less and less likely to be able to exit the site of entrapment. Colic signs are referable to those seen with a strangulating lesion, such as moderate to severe abdominal pain, endotoxemia, decrease gut sounds, distended small intestine on rectal, and nasogastric reflux. This problem requires surgical correction. Survival for mesenteric rent entrapment is usually lower than other small intestinal strangulating lesions, possibly due to hemorrhage, difficulty correcting the entrapment, and the length of intestine commonly involved, with <50% of cases surviving until discharge. Inflammatory and ulcerative conditions Proximal enteritis Proximal enteritis, also known as anterior enteritis or duodenitis-proximal jejunitis (DPJ), is inflammation of the duodenum and upper jejunum. It is potentially caused by infectious organisms, such as Salmonella and Clostridial species, but other possible contributing factors include Fusarium infection or high concentrate diets. The inflammation of the intestine leads to large secretions of electrolytes and fluid into its lumen, and thus large amounts of gastric reflux, leading to dehydration and occasionally shock. Signs include acute onset of moderate to severe pain, large volumes orange-brown and fetid gastric reflux, distended small intestine on rectal examination, fever, depression, increased heart rate and respiratory rate, prolonged CRT, and darkened mucous membranes. Pain level usually improves after gastric decompression. It is important to differentiate DPI from small intestinal obstruction, since obstruction may require surgical intervention. 
This can be difficult, and often requires a combination of clinical signs, results from the physical examination, laboratory data, and ultrasound to help suggest one diagnosis over the other, but a definitive diagnosis can only be made at surgery or on necropsy. DPJ is usually managed medically, with nasogastric intubation every 1–2 hours to relieve gastric pressure secondary to reflux, and aggressive fluid support to maintain hydration and correct electrolyte imbalances. Horses are often withheld food for several days. Use of anti-inflammatory, anti-endotoxin, anti-microbial, and prokinetic drugs is common with this disease. Surgery may be needed to rule out obstruction or strangulation, and in long-standing cases to perform a resection and anastomosis of the diseased bowel. Survival rates for DPJ are 25–94%, and horses in the southeast United States appear to be more severely affected.
Colitis
Colitis is inflammation of the colon. Acute cases are medical emergencies, as the horse rapidly loses fluid, protein, and electrolytes into the gut, leading to severe dehydration which can result in hypovolemic shock and death. Horses generally present with signs of colic before developing profuse, watery, fetid diarrhea. Both infectious and non-infectious causes for colitis exist. In the adult horse, Salmonella, Clostridium difficile, and Neorickettsia risticii (the causative agent of Potomac Horse Fever) are common causes of colitis. Antibiotics, which may lead to an altered and unhealthy microbiota, sand, grain overload, and toxins such as arsenic and cantharidin can also lead to colitis. Unfortunately, only 20–30% of acute colitis cases are able to be definitively diagnosed. NSAIDs can cause a slower-onset colitis, usually in the right dorsal colon (see Right dorsal colitis). Treatment involves administration of large volumes of intravenous fluids, which can become very costly. Antibiotics are often given if deemed appropriate based on the presumed underlying cause and the horse's CBC results. Therapy to help prevent endotoxemia and improve blood protein levels (plasma or synthetic colloid administration) may also be used if budgetary constraints allow. Other therapies include probiotics and anti-inflammatory medication. Horses that are not eating well may also require parenteral nutrition. Horses usually require 3–6 days of treatment before clinical signs improve. Due to the risk of endotoxemia, laminitis is a potential complication for horses suffering from colitis, and may become the primary cause for euthanasia. Horses are also at increased risk of thrombophlebitis.
Gastric ulceration
Horses form ulcers in the stomach fairly commonly, a disease called equine gastric ulcer syndrome. Risk factors include confinement, infrequent feedings, a high proportion of concentrate feeds such as grains, excessive non-steroidal anti-inflammatory drug use, and the stress of shipping and showing. Gastric ulceration has also been associated with the consumption of cantharidin-containing blister beetles in alfalfa hay, which are very caustic when chewed and ingested. Most ulcers are treatable with medications that inhibit the acid-producing cells of the stomach. Antacids are less effective in horses than in humans, because horses produce stomach acid almost constantly, while humans produce acid mainly when eating. Dietary management is critical. Bleeding ulcers leading to stomach rupture are rare.
Right dorsal colitis
Long-term use of NSAIDs can lead to mucosal damage of the colon, secondary to decreased levels of homeostatic prostaglandins. Mucosal injury is usually limited to the right dorsal colon, but can be more generalized. Horses may display acute or chronic intermittent colic, peripheral edema secondary to protein-losing enteropathy, decreased appetite, and diarrhea. Treatment involves decreasing the fiber levels of the horse's diet by reducing grass and hay, and placing the horse on an easily digestible pelleted feed until the colon can heal. Additionally, the horse may be given misoprostol, sucralfate, and psyllium to try to improve mucosal healing, as well as metronidazole to reduce inflammation of the colon.
Tumors
Strangulating pedunculated lipoma
Benign fatty tumors known as lipomas can form on the mesentery. As the tumor enlarges, it stretches the connective tissue into a stalk which can wrap around a segment of bowel, typically small intestine, cutting off its blood supply. The tumor forms a button that latches onto the stalk of the tumor, locking it in place and requiring surgery for resolution. Surgery involves cutting the stalk of the tumor, untwisting the bowel, and removing bowel that is no longer viable. If the colic is identified and taken to surgery quickly, there is a reasonable success rate of 50–78%. This type of colic is most commonly associated with ponies and with aged geldings 10 years and older, probably because of fat distribution in this group of animals.
Other cancers
Cancers (neoplasia) other than lipoma are relatively rare causes of colic. Cases have been reported with intestinal cancers including intestinal lymphosarcoma, leiomyoma, and adenocarcinoma, stomach cancers such as squamous cell carcinoma, and splenic lymphosarcoma. Gastric squamous cell carcinoma is most often found in the non-glandular region of the stomach of horses greater than 5 years of age, and horses often present with weight loss, anorexia, anemia, and ptyalism. Gastric carcinoma is usually diagnosed via gastroscopy, but may sometimes be felt on rectal if it has metastasized to the peritoneal cavity. Additionally, laparoscopy can also diagnose metastasized cancer, as can the presence of neoplastic cells on abdominocentesis. Often the signs of intestinal neoplasia are non-specific, and include weight loss and colic, usually only if obstruction of the intestinal lumen occurs.
Ileus
Ileus is the lack of motility of the intestines, leading to a functional obstruction. It often occurs postoperatively following any type of abdominal surgery, and 10–50% of all cases of surgical colic will develop this complication, including 88% of horses with a strangulating obstruction and 41% of all colics with a large intestinal lesion. The exact cause is unknown, but it is suspected to be due to inflammation of the intestine, possibly a result of manipulation by the surgeon, and increased sympathetic tone. It has a high fatality rate of 13–86%. Ileus is diagnosed based on several criteria:
Nasogastric reflux: 4 liters or greater in a single intubation, or greater than 2 liters of reflux over more than one intubation
A heart rate greater than 40 bpm
Signs of colic, which may vary from mild to severe
Distended small intestine, based on rectal or abdominal ultrasound findings. On ultrasound, ileus presents as more than 3 loops of distended small intestine, with a lack of peristaltic waves.
This form of colic is usually managed medically.
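As a rough way to see how the diagnostic criteria above fit together, the following is a minimal Python sketch that simply collects them into a checklist. It is illustrative only, not a clinical tool or a validated scoring system; the function name, parameter names, and the choice to report each criterion as a boolean are invented for the example, and the thresholds are the ones quoted in the text.

def ileus_criteria(reflux_single_l, reflux_repeated_l, heart_rate_bpm,
                   colic_signs, distended_si_loops, peristalsis_seen):
    # Thresholds taken from the text: >=4 L reflux in one intubation or
    # >2 L over repeated intubations, heart rate >40 bpm, signs of colic,
    # and >3 distended small-intestinal loops without peristaltic waves.
    return {
        "significant_reflux": reflux_single_l >= 4 or reflux_repeated_l > 2,
        "heart_rate_over_40": heart_rate_bpm > 40,
        "colic_signs": bool(colic_signs),
        "distended_small_intestine": distended_si_loops > 3 and not peristalsis_seen,
    }

# Example: a post-operative horse with 5 L of reflux, a heart rate of 48 bpm,
# colic signs, and 4 non-motile loops returns all four criteria as True,
# a picture consistent with the postoperative ileus described above.
print(ileus_criteria(5.0, 0.0, 48, True, 4, False))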
Because there is no motility, intestinal contents back up into the stomach. Therefore, periodic decompression of the stomach through nasogastric intubation is essential to prevent rupture. Horses are monitored closely following abdominal surgery, and a sudden increase in heart rate indicates the need to check for nasogastric reflux, as it is an early indication of postoperative ileus. The horse is placed on intravenous fluids to maintain hydration and electrolyte balance and prevent hypovolemic shock, and the rate of fluids is calculated based on the daily maintenance requirement plus the fluid lost via nasogastric reflux. Motility is encouraged by the use of prokinetic drugs such as erythromycin, metoclopramide, bethanechol, and lidocaine, as well as through vigorous walking, which has also been shown to have a beneficial effect on GI motility. Lidocaine is especially useful, as it not only encourages motility, but also has anti-inflammatory properties and may ameliorate some post-operative pain. Metoclopramide has been shown to reduce reflux and hospital stay, but does have excitatory effects on the central nervous system. Anti-inflammatory drugs are used to decrease inflammation of the GI tract, which is thought to be the underlying cause of the disease, as well as to help control any absorption of LPS in cases of endotoxemia, since the substance decreases motility. However, care must be taken when giving these drugs, as NSAIDs have been shown to alter intestinal motility. Large intestinal ileus is most commonly seen in horses following orthopedic surgery, but its risk is also increased in cases where post-operative pain is not well-controlled, after long surgeries, and possibly following ophthalmologic surgeries. It is characterized by decreased manure output (<3 piles per day), rather than nasogastric reflux, as well as decreased gut sounds, signs of colic, and the occasional impaction of the cecum or large colon. Cecal impactions can be fatal, so care must be taken to monitor the horse for large intestinal ileus after orthopedic surgery, primarily by watching for decreased manure production. Decreased intestinal motility can also be the result of drugs such as amitraz, which is used to kill ticks and mites. Xylazine, detomidine, and butorphanol also reduce motility, but will not cause colic if appropriately administered.
Parasites
Ascarids (roundworms)
Occasionally there can be an obstruction by large numbers of roundworms. This is most commonly seen in young horses as a result of a very heavy infestation of Parascaris equorum that can subsequently cause a blockage and rupture of the small intestine. Rarely, dead worms will be seen in reflux. Deworming heavily infected horses may cause a severe immune reaction to the dead worms, which can damage the intestinal wall and cause a fatal peritonitis. Veterinarians often treat horses with suspected heavy worm burdens with corticosteroids to reduce the inflammatory response to the dead worms. Blockages of the small intestine, particularly the ileum, can occur with Parascaris equorum and may well require colic surgery to remove them manually. Large roundworm infestations are often the result of a poor deworming program. Horses develop immunity to parascarids between 6 months and one year of age, so this condition is rare in adult horses. Prognosis is fair unless the foal experiences hypovolemia and septic shock, with a survival rate of 33%.
Tapeworms
Tapeworms at the junction of the cecum have been implicated in causing colic.
The most common species of tapeworm in the equine is Anoplocephala perfoliata. However, a 2008 study in Canada indicated that there is no connection between tapeworms and colic, contradicting studies performed in the UK.
Cyathostomes
Acute diarrhea can be caused by cyathostomes, or "small Strongylus-type" worms, that are encysted as larvae in the bowel wall, particularly if large numbers emerge simultaneously. The disease most frequently occurs in the winter. Pathological changes of the bowel reveal a typical "pepper and salt" color of the large intestines. Animals suffering from cyathostominosis usually have a poor deworming history. There is now widespread resistance to fenbendazole in the UK.
Large strongyles
Large strongyle worms, most commonly Strongylus vulgaris, are implicated in colic secondary to non-strangulating infarction of the cranial mesenteric artery supplying the intestines, most likely due to vasospasm. Usually the distal small intestine and the large colon are affected, but any segment supplied by this artery can be compromised. This type of colic has become relatively rare with the advent of modern anthelminthics. Clinical signs vary based on the degree of vascular compromise and the length of intestine that is affected, and include the acute and severe colic seen with other forms of strangulating obstruction, so diagnosis is usually made based on anthelminthic administration history, although it may be definitively diagnosed during surgical exploration. Treatment includes typical management of colic signs and endotoxemia, and the administration of aspirin to reduce the risk of thrombosis, but surgery is usually not helpful since lesions are often patchy and may be located in areas not easily resected.
Foal colic
Meconium impactions
Meconium, or the first feces produced by the foal, is a hard pelleted substance. It is normally passed within the first 24 hours of the foal's life, but may become impacted in the distal colon or rectum. Meconium impaction is most commonly seen in foals 1–5 days of age, and is more common in miniature foals and in colts than in fillies (possibly because fillies have a wider pelvis). Foals will stop suckling, strain to defecate (presenting as an arched back and lifted tail), and may start showing overt signs of colic such as rolling and getting up and down. In later stages, the abdomen will distend as it continues to fill with gas and feces. Meconium impactions are often diagnosed by clinical signs, but digital examination to feel for impacted meconium, radiographs, and ultrasound may also be used. Treatment for meconium impaction typically involves the use of enemas, although persistent cases may require mineral oil or IV fluids. It is possible to tell that the meconium has passed when the foal begins to produce a softer, more yellow manure. Although meconium impactions rarely cause perforation, and are usually not life-threatening, foals are at risk of dehydration and may not get adequate levels of IgG due to decreased suckling and not enough ingestion of colostrum. Additionally, the foals will eventually bloat, and will then require surgical intervention. Surgery in a foal can be especially risky due to an immature immune system and low levels of ingested colostrum.
Lethal white syndrome
Lethal white syndrome, or ileocolonic aganglionosis, will result in meconium impaction since the foal does not have adequate nerve innervation to the large intestine, in essence, a nonfunctioning colon.
Foals that are homozygous for the frame overo gene, often seen in Paint horse heritage, will develop the condition. They present with signs of colic within the first 12 hours after birth, and die within 48 hours due to constipation. This syndrome is not treatable.
Congenital abnormalities
Atresia coli and atresia ani can also present as meconium impaction. The foal is missing the lumen of its distal colon or anus, respectively, and usually shows signs of colic within 12–24 hours. Atresia coli is usually diagnosed with barium contrast studies, in which foals are given barium and then radiographed to see if and where the barium is trapped. Atresia ani is simply diagnosed with digital examination by a veterinarian. Both situations require emergency surgery to prevent death, and the prognosis for survival is often poor even with surgical correction.
Infectious organisms
Clostridial enterocolitis due to infection by Clostridium perfringens is most commonly seen in foals under 3 months of age. Clostridial toxins damage the intestine, leading to dehydration and toxemia. Foals usually present with signs of colic, decreased nursing, abdominal distention, and diarrhea which may contain blood. Diagnosis is made with fecal culture, and while some foals do not require serious intervention, others need IV fluids, antibiotics, and aggressive treatment, and may still die. Other bacterial infections that may lead to enterocolitis include Salmonella, Klebsiella, Rhodococcus equi, and Bacteroides fragilis. Parasitic infection, especially with threadworms (Strongyloides westeri) and ascarids (Parascaris equorum), can produce signs of colic in foals (See Ascarids). Other conditions that may lead to signs of colic in foals include congenital abnormalities, gastric ulcers (see Gastric ulceration), which may lead to gastric perforation and peritonitis, small intestine volvulus, and uroabdomen secondary to urinary bladder rupture.
Herniation
Inguinal herniation
Inguinal hernias are most commonly seen in Standardbred and Tennessee Walking Horse stallions, likely due to a breed prevalence of a large inguinal ring, as well as in Saddlebred and Warmblood breeds. Inguinal hernias in adult horses are usually strangulating (unlike in foals, where they are usually non-strangulating). Stallions usually display acute signs of colic, and a cool, enlarged testicle on one side. Hernias are classified as either indirect, in which the bowel remains in the parietal vaginal tunic, or direct, in which case it ruptures through the tunic and passes subcutaneously. Direct hernias are seen most commonly in foals, and are usually congenital. Indirect hernias may be treated by repeated manual reduction, but direct hernias often require surgery to correct. The testicle on the side of resection will often require removal due to vascular compromise, although the prognosis for survival is good (75%) and the horse may be used for breeding in the future.
Umbilical herniation
Although umbilical hernias are common in foals, strangulation is rare, occurring only 4% of the time and usually involving the small intestine. Rarely, the hernia will involve only part of the intestinal wall (termed a Richter's hernia), which can lead to an enterocutaneous fistula. Strangulating umbilical hernias will present as enlarged, firm, warm, and painful, with colic signs. Foals usually survive to discharge.
Diaphragmatic herniation
Diaphragmatic hernias are rare in horses, accounting for 0.3% of colics.
Usually the small intestine herniates through a rent in the diaphragm, although any part of the bowel may be involved. Hernias are most commonly acquired, not congenital, with 48% of horses having a history of recent trauma, usually during parturition, distention of the abdomen, a fall, strenuous exercise, or direct trauma to the chest. Congenital hernias occur most commonly in the most ventral part of the diaphragm, while acquired hernias are usually seen at the junction of the muscular and tendinous sections of the diaphragm. Clinical signs are usually similar to those of an obstruction, but occasionally decreased lung sounds may be heard in one section of the chest, although dyspnea is only seen in approximately 18% of horses. Ultrasound and radiography may both be used to diagnose diaphragmatic herniation.
Toxins
Ingested toxins are rarely a cause of colic in the horse. Toxins that can produce colic signs include organophosphates, monensin, and cantharidin. Additionally, overuse of certain drugs such as NSAIDs may lead to colic signs (See Gastric ulceration and Right dorsal colitis).
Uterine tears and torsions
Uterine tears often occur a few days post parturition. They can lead to peritonitis and require surgical intervention to repair. Uterine torsions can occur in the third trimester, and while some cases may be corrected if the horse is anesthetized and rolled, others require surgical correction.
Other causes that may show clinical signs of colic
Strictly speaking, colic refers only to signs originating from the gastrointestinal tract of the horse. Signs of colic may also be caused by problems outside the GI tract, e.g. problems in the liver, ovaries, spleen, or urogenital system, testicular torsion, pleuritis, and pleuropneumonia. Diseases which sometimes cause symptoms that appear similar to colic include uterine contractions, laminitis, and exertional rhabdomyolysis. Colic pain secondary to kidney disease is rare.
Diagnosis
Many different diagnostic tests are used to diagnose the cause of a particular form of equine colic, and these may have greater or lesser value in certain situations. The most important distinction to make is whether the condition should be managed medically or surgically. If surgery is indicated, then it must be performed as soon as possible, as delay is a dire prognostic indicator.
History
A thorough history is always taken, including signalment (age, sex, breed), recent activity, diet and recent dietary changes, anthelmintic history, whether the horse is a cribber, fecal quality and when feces were last passed, and any history of colic. The most important factor is the time elapsed since onset of clinical signs, as this has a profound impact on prognosis. Additionally, a veterinarian will need to know any drugs given to the horse, their amount, and the time they were given, as these can help with assessment of the colic's progression and how it is responding to analgesia.
Physical examination
Heart rate rises with progression of colic, in part due to pain, but mainly due to decreased circulating volume secondary to dehydration, decreased preload from hypotension, and endotoxemia. The rate is measured over time, and its response to analgesic therapy ascertained. A pulse that continues to rise in the face of adequate analgesia is considered a surgical indication. Mucous membrane color can be assessed to appreciate the severity of haemodynamic compromise.
Pale mucous membranes may be caused by decreased perfusion (as with shock), anemia due to chronic blood loss (seen with GI ulceration), and dehydration. Pink or cyanotic (blue) membrane colors are associated with a greater chance of survival (55%). Dark red, or "injected", membranes reflect increased perfusion, and the presence of a "toxic line" (a red ring over the top of the teeth where it meets the gum line, with pale or gray mucous membranes) can indicate endotoxemia. Both injected mucous membranes and the presence of a toxic line correlate to a decreased likelihood of survival, at 44%. Capillary refill time is assessed to determine hydration levels and highly correlates to perfusion of the bowel. A CRT of < 2 seconds has a survival rate of 90%, of 2.5–4 seconds a survival rate of 53%, and of > 4 seconds a survival rate of 12%. Laboratory tests can be performed to assess the cardiovascular status of the patient. Packed cell volume (PCV) is a measure of hydration status, with a value above 45% being considered significant. Increasing values over repeated examinations are also considered significant. The total protein (TP) of blood may also be measured, as an aid in estimating the amount of protein loss into the intestine. Its value must be interpreted along with the PCV, to take into account the hydration status. When laboratory tests are not available, hydration can be crudely assessed by tenting the skin of the neck or eyelid, looking for sunken eyes, depression, and a high heart rate, and feeling for tackiness of the gums. Jugular filling and the quality of the peripheral pulses can be used to approximate blood pressure. Capillary refill time (CRT) may be decreased early in the colic, but generally becomes prolonged as the disease progresses and cardiovascular status worsens. Weight and body condition score (BCS) are important when evaluating a horse with chronic colic, and a poor BCS in the face of good quality nutrition can indicate malabsorptive and maldigestive disorders. Rectal temperature can help ascertain whether an infectious or inflammatory cause is to blame for the colic, which is suspected if the temperature is >103°F. Temperature should be taken prior to rectal examination, as the introduction of air will falsely lower rectal temperature. Coolness of the extremities can indicate decreased perfusion secondary to endotoxemia. An elevated respiratory rate can indicate pain as well as acid-base disturbances. A rectal examination, auscultation of the abdomen, and nasogastric intubation should always occur in addition to the basic physical exam.
Rectal examination
Rectal examinations are a cornerstone of colic diagnosis, as many large intestinal conditions can be definitively diagnosed by this method alone. Due to the risk of harm to the horse, a rectal examination is performed by a veterinarian. Approximately 40% of the gastrointestinal tract can be examined by rectal palpation, although this can vary based on the size of the horse and the length of the examiner's arm. Structures that can be identified include the aorta, the caudal pole of the left kidney, the nephrosplenic ligament, the caudal border of the spleen, the ascending colon (left dorsal and ventral, pelvic flexure), the small intestine if distended (it is not normally palpable on rectal), the mesenteric root, the base of the cecum and the medial cecal band, and rarely the inguinal rings. The location within the colon is identified based on size, presence of sacculations, number of bands, and whether fecal balls are present.
Displacements, torsions, strangulations, and impactions may be identified on rectal examination. Other non-specific findings, such as dilated small intestinal loops, may also be detected, and can play a major part in determining if surgery is necessary. Thickness of the intestinal walls may indicate infiltrative disease or abnormal muscular enlargement. Roughening of the serosal surface of the intestine can occur secondary to peritonitis. Horses that have had gastrointestinal rupture may have gritty feeling and free gas in the abdominal cavity. Surgery is usually suggested if rectal examination finds severe distention of any part of the GI tract, a tight cecum or multiple tight loops of small intestine, or inguinal hernia. However, even if the exact cause can not be determined on rectal, significant abnormal findings without specific diagnosis can indicate the need for surgery. Rectal examinations are often repeated over the course of a colic to monitor the GI tract for signs of change. Rectals are a risk to the practitioner, and the horse is ideally examined either in stocks or over a stall door to prevent kicking, with the horse twitched, and possibly sedated if extremely painful and likely to try to go down. Buscopan is sometimes used to facilitate rectal examination and reduce the risk of tears, because it decreases the smooth muscle tone of the gastrointestinal tract, but can be contraindicated and will produce a very rapid heart rate. Because the rectum is relatively fragile, the risk of rectal tears is always present whenever an examination is performed. Severe rectal tears often result in death or euthanasia. However, the diagnostic benefits of a rectal examination almost always outweigh these risks. Nasogastric intubation Passing a nasogastric tube (NGT) is useful both diagnostically and therapeutically. A long tube is passed through one of the nostrils, down the esophagus, and into the stomach. Water is then pumped into the stomach, creating a siphon, and excess fluid and material (reflux) is pulled off the stomach. Healthy horses will often have less than 1 liter removed from the stomach; any more than 2 litres of fluid is considered to be significant. Horses are unable to vomit or regurgitate, therefore nasogastric intubation is therapeutically important for gastric decompression. A backup of fluid in the gastrointestinal tract will cause it to build up in the stomach, a process that can eventually lead to stomach rupture, which is inevitably fatal. Backing up of fluid through the intestinal tract is usually due to a downstream obstruction, ileus, or proximal enteritis, and its presence usually indicates a small intestinal disease. Generally, the closer the obstruction is to the stomach, the greater amount of gastric reflux will be present. Approximately 50% of horses with gastric reflux require surgery. Auscultation Auscultation of the abdomen is subjective and non-specific, but can be useful. Auscultation typically is performed in a four-quadrant approach: Upper flank, right side: corresponds to the cecum Caudoventral abdomen, right side: corresponds to the colon Upper flank, left side: corresponds to the small intestine Caudoventral abdomen, left side: corresponds to colon Each quadrant should ideally be listened to for 2 minutes. Gut sounds (borborygmi) correlate to motility of the bowel, and care should be taken to note intensity, frequency, and location. Increased gut sounds (hyper-motility) may be indicative of spasmodic colic. 
Decreased sound, or no sound, may be suggestive of serious changes such as ileus or ischemia, and persistence of hypomotile bowel often suggests the need for surgical intervention. Gut sounds that occur concurrently with pain may indicate obstruction of the intestinal lumen. Sounds of gas can occur with ileus, and those of fluid are associated with diarrhea which may occur with colitis. Sand may sometimes be heard on the ventral midline, presenting a typical "waves on the beach" sound in a horse with sand colic after the lower abdomen is forcefully pushed with a fist. Abdominal percussion ("pinging") can sometimes be used to determine if there is gas distention in the bowel. This may be useful to help determine the need for trocarization, either of the cecum or the colon. Abdominal ultrasound Ultrasound provides visualization of the thoracic and abdominal structures, and can sometimes rule out or narrow down a diagnosis. Information that may be gleaned from ultrasonographic findings include the presence of sand, distention, entrapment, strangulation, intussusception, and wall thickening of intestinal loops, as well as diagnose nephrosplenic entrapment, peritonitis, abdominal tumors, and inguinal or scrotal hernias. Abdominal ultrasound requires an experienced operator to accurately diagnose the cause of colic. It may be applied against the side of the horse, as well as transrectally. Sand presents as a homogenous gray and allows the ultrasound waves to penetrate deep. It is distinguishable from feces, which is less homogenous, and gas colic, which does not allow the operator to see pass the gas. Additionally, the sand usually "sparkles" on ultrasound if it moves. Sand is best diagnosed using a 3.5 megahertz probe. Horses with gastrointestinal rupture will have peritoneal fluid accumulation, sometimes with debris, visible on ultrasound. Horses with peritonitis will often have anechoic fluid, or material in between visceral surfaces. Differentiation between proximal enteritis and small intestinal obstruction is important to ensure correct treatment, and can be assisted with the help of ultrasound. Horses with small intestinal obstruction will usually have an intestinal diameter of -10 cm with a wall thickness of 3-5mm. Horses with proximal enteritis usually have an intestinal diameter that is narrower, but wall thickness is often greater than 6mm, containing a hyperechoic or anechoic fluid, with normal, increased, or decreased peristalsis. However, obstructions that have been present for some time may present with thickened walls and distention of the intestine. Horses experiencing intussusception may have a characteristic "bullseye" appearance of intestine on ultrasound, which is thickened, and distended intestine proximal to the affected area. Those experiencing nephrosplenic entrapment will often have ultrasonographic changes including an inability to see the left kidney and/or tail of the spleen. Abdominocentesis (belly tap) Abdominocentesis, or the extraction of fluid from the peritoneum, can be useful in assessing the state of the intestines. Normal peritoneal fluid is clear, straw-colored, and of serous consistency, with a total nucleated cell count of less than 5000 cells/microliter (24–60% which are neutrophils) and a total protein of 2.5 g/dL. Abdominocentesis allows for the evaluation of red and white blood cells, hemoglobin concentration, protein levels, and lactate levels. 
A high lactate in abdominal fluid suggests intestinal death and necrosis, usually due to strangulating lesion, and often indicates the need for surgical intervention. A strangulating lesion may produce high levels of red blood cells, and a serosanguinous fluid containing blood and serum. White blood cell levels may increase if there is death of intestine that leads to leakage of intestinal contents, which includes high levels of bacteria, and a neutrophil to monocyte ratio greater than or equal to 90% is suggestive of a need for surgery. "High" nucleated cell counts (15,000–800,000 cells/microliter depending on the disease present) occur with horses with peritonitis or abdominal abscesses. The protein level of abdominal fluid can give information as to the integrity of intestinal blood vessels. High protein (> 2.5 mg/dL) suggests increased capillary permeability associated with peritonitis, intestinal compromise, or blood contamination. Horses with gastrointestinal rupture will have elevated protein the majority of the time (86.4%) and 95.7% will have bacteria present. Occasionally, with sand colic, it is possible to feel the sand with the tip of the needle. Clinical analysis is not necessarily required to analyze the fluid. Simple observation of color and turbidity can be useful in the field. Sanguinous fluid indicates an excess of red blood cells or hemoglobin, and may be due to leakage of the cells through a damaged intestinal wall, splenic puncture during abdominocentesis, laceration of abdominal viscera, or contamination from a skin capillary. Cloudy fluid is suggestive of an increased number of cells or protein. White fluid indicates chylous effusion. Green fluid indicates either gastrointestinal rupture or enterocentesis, and a second sample should be drawn to rule out the latter. Gastrointestinal rupture produces a color change in peritoneal fluid in 85.5% of cases. Colorless (dilute) peritoneal fluid, especially in large quantities, can indicate ascites or uroperitoneum (urine in the abdomen). Large amount of fluid can indicate acute peritonitis. Abdominal distension Any degree of abdominal distension is usually indicative of a condition affecting the large intestines, as distension of structures upstream of here would not be large enough to be visible externally. Abdominal distention may indicate the need for surgical intervention, especially if present with severe signs of colic, high heart rate, congested mucous membranes, or absent gut sounds. Fecal examination The amount of feces produced, and its character can be helpful, although as changes often occur relatively distant to the anus, changes may not be seen for some time. In areas where sand colic is known to be common, or if the history suggests it may be a possibility, faeces can be examined for the presence of sand, often by mixing it in water and allowing the sand to settle out over 20 minutes. However, sand is sometimes present in a normal horse's feces, so the quantity of sand present must be assessed. Testing the feces for parasite load may also help diagnose colic secondary to parasitic infection. Radiography, gastroscopy, and laparoscopy Radiography Radiographs (x-rays) are sometimes used to look for sand and enteroliths. Due to the size of the adult horse's abdomen, it requires a powerful machine that is not available to all practitioners. Additionally, the quality of these images is sometimes poor. 
Gastroscopy
Gastroscopy, or endoscopic evaluation of the stomach, is useful in chronic cases of colic suspected to be caused by gastric ulcers, gastric impactions, and gastric masses. A 3-meter scope is required to visualize the stomach of most horses, and the horse must be fasted prior to scoping.
Laparoscopy
Laparoscopy involves inserting a telescoping camera approximately 1 cm in diameter into the horse's abdomen, through a small incision, to visualize the gastrointestinal tract. It may be performed standing or under general anesthesia, and is less invasive than an exploratory celiotomy (abdominal exploratory surgery).
Rectal biopsy
Rectal biopsy is rarely performed due to its risks of abscess formation, rectal perforation, and peritonitis, and because it requires a skilled clinician to perform. However, it can be useful in cases of suspected intestinal cancer, as well as some inflammatory diseases (such as IBD) and infiltrative diseases, like granulomatous enteritis.
Clinical signs
Clinical signs of colic are usually referable to pain, although the horse may appear depressed rather than painful in cases of necrosis (tissue death) of the gastrointestinal tract, inflammation of the intestines, endotoxemia, or significant dehydration. Pain levels are often used to determine the need for surgery (See Surgical intervention). Horses are more likely to require surgery if they display severe clinical signs that can not be controlled by the administration of analgesics and sedatives, or have persistent signs that require multiple administrations of such drugs. Heart rate is often used as a measure of the animal's pain level, and a horse with a heart rate >60 bpm is more likely to require surgery. However, this measure can be deceiving in the early stages of a severe colic, when the horse may still retain a relatively low rate. Additionally, the pain tolerance of the individual must be taken into account, since very stoic animals with severe cases of colic may not show adequate levels of pain to suggest the need for surgery. High heart rates (>60 bpm), prolonged capillary refill time (CRT), and congested mucous membranes suggest cardiovascular compromise and the need for more intense management. Decreased or absent gut sounds often suggest the need for surgical intervention if prolonged. A horse showing severe clinical signs, followed by a rapid and significant improvement, may have experienced gastrointestinal perforation. While this releases the pressure that originally caused so much discomfort for the horse, it results in a non-treatable peritonitis that requires euthanasia. Soon after this apparent improvement, the horse will display signs of shock, including an elevated heart rate, increased capillary refill time, rapid shallow breathing, and a change in mucous membrane color. It may also be pyretic, act depressed, or become extremely painful. Gas distention usually produces mild clinical signs, but in some cases leads to severe signs due to pressure and tension on the mesentery. Simple obstructions often present with a slightly elevated heart rate (<60 bpm) but normal CRT and mucous membrane color. Strangulating obstructions are usually extremely painful, and the horse may have abdominal distention, congested mucous membranes, altered capillary refill time, and other signs of endotoxemia.
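To make the decision logic in this section easier to follow, here is a minimal, illustrative Python sketch that gathers the surgical red flags mentioned above into one checklist. It is not a validated triage tool; the function name, parameter names, and the idea of returning a list of flags are invented for the example, and each input is assumed to have already been judged by a veterinarian.

def surgical_red_flags(heart_rate_bpm, pain_refractory_to_analgesia,
                       gut_sounds_present, crt_prolonged,
                       mucous_membranes_congested, abdominal_distention):
    # Red flags drawn from the text: heart rate >60 bpm, pain that cannot be
    # controlled with analgesics, decreased or absent gut sounds, prolonged
    # capillary refill time, congested membranes, and abdominal distention.
    flags = {
        "heart_rate_over_60": heart_rate_bpm > 60,
        "refractory_pain": pain_refractory_to_analgesia,
        "reduced_or_absent_gut_sounds": not gut_sounds_present,
        "prolonged_crt": crt_prolonged,
        "congested_membranes": mucous_membranes_congested,
        "abdominal_distention": abdominal_distention,
    }
    return [name for name, present in flags.items() if present]

# Example: a painful horse at 72 bpm with no gut sounds returns three flags,
# the kind of picture the text associates with surgical referral.
print(surgical_red_flags(72, True, False, False, False, False))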
General
Elevated body temperature: most commonly associated with medically managed colics such as enteritis, colitis, peritonitis, and intestinal rupture
Elevated heart rate
Elevated respiratory rate
Increased capillary refill time
Change in mucous membrane (gum) color (See Physical examination)
Change in the degree of gut sounds (See Auscultation)
Pawing
Increased attention toward the abdomen, including flank watching (turning of the head to look at the abdomen and/or hind quarters), nipping, biting, or kicking
Repeatedly lying down and rising, which may become violent when the colic is severe
Rolling, especially when not followed by shaking after standing, and which may become violent when the colic is severe (thrashing)
Sweating
Change in activity level: lethargy, pacing, or a constant shifting of weight when standing
Change in feces: decreased fecal output or a change in consistency
Repeated flehmen response
Stretching, abnormal posturing, or frequent attempts to urinate
Groaning
Bruxism
Excess salivation (ptyalism)
Excessive yawning
Loss of appetite
Abdominal distention
Dorsal recumbency in foals
Poor coat or weight loss (chronic colic)
Medical management
Colic may be managed medically or surgically. Severe clinical signs often suggest the need for surgery, especially if they can not be controlled with analgesics. Immediate surgical intervention may be required, but surgery can be contraindicated in some cases of colic, so diagnostic tests are used to help discover the cause of the colic and guide the practitioner in determining the need for surgery (See Diagnosis). The majority of colics (approximately 90%) can be successfully managed medically.
Analgesia and sedation
The intensity of medical management is dependent on the severity of the colic, its cause, and the financial capabilities of the owner. At the most basic level, analgesia and sedation are administered to the horse. The most commonly used analgesics for colic pain in horses are NSAIDs, such as flunixin meglumine, although opioids such as butorphanol may be used if the pain is more severe. Butorphanol is often given with alpha-2 agonists such as xylazine and detomidine to prolong the analgesic effects of the opioid. Early colic signs may be masked with the use of NSAIDs, so some practitioners prefer to examine the horse before they are given by the owner.
Nasogastric intubation and gastric decompression
Nasogastric intubation, a mainstay of colic management, is often repeated multiple times until resolution of clinical signs, both as a method of gastric reflux removal and as a way to directly administer fluids and medication into the stomach. Reflux must be removed periodically to prevent distention and possible rupture of the stomach, and to track reflux production, which aids in monitoring the progression of the colic. Its use is especially important in the case of strangulating obstruction or enteritis, since both of these cause excessive secretion of fluid into the intestine, leading to fluid back-up and distention of the stomach. Nasogastric intubation also has the benefit of relieving the pain that results from gastric distention.
Fluid support
Fluids are commonly given, either orally by nasogastric tube or by intravenous catheter, to restore proper hydration and electrolyte balance.
In cases of strangulating obstruction or enteritis, the intestine will have decreased absorption and increased secretion of fluid into the intestinal lumen, making oral fluids ineffective and possibly dangerous if they cause gastric distention and rupture. This process of secretion into the intestinal lumen leads to dehydration, and these horse require large amounts of IV fluids to prevent hypotension and subsequent cardiovascular collapse. Fluid rates are calculated by adding the fluid lost during each collection of gastric reflux to the daily maintenance requirement of the horse. Due to the fact that horses absorb water in the cecum and colon, the IV fluid requirement of horses with simple obstruction is dependent on the location of the obstruction. Those that are obstructed further distally, such as at the pelvic flexure, are able to absorb more oral fluid than those obstructed in the small intestine, and therefore require less IV fluid support. Impactions are usually managed with fluids for 3–5 days before surgery is considered. Fluids are given based on results of the physical examination, such as mucous membrane quality, PCV, and electrolyte levels. Horses in circulatory shock, such as those suffering from endotoxemia, require very high rates of IV fluid administration. Oral fluids via nasogastric tube are often given in the case of impactions to help lubricate the obstruction. Oral fluids should not be given if significant amounts of nasogastric reflux are obtained. Access to food and water will often be denied to allow careful monitoring and administration of what is taken in by the horse. Intestinal lubricants and laxatives In addition to fluid support, impactions are often treated with intestinal lubricants and laxatives to help move the obstruction along. Mineral oil is the most commonly used lubricant for large colon impactions, and is administered via nasogastric tube, up to 4 liters once or twice daily. It helps coat the intestine, but is not very effective for severe impactions or sand colic since it may simply bypass the obstruction. Mineral oil has the added benefit of crudely measuring GI transit time, a process which normally takes around 18 hours, since it is obvious when it is passed. The detergent dioctyl sodium sulfosuccinate (DDS) is also commonly given in oral fluids. It is more effective in softening an impaction than mineral oil, and helps stimulate intestinal motility, but can inhibit fluid absorption from the intestine and is potentially toxic so is only given in small amounts, two separate times 48 hours apart. Epsom salts are also useful for impactions, since they act both as an osmotic agent, to increase fluid in the GI tract, and as a laxative, but do run the risk of dehydration and diarrhea. Strong laxatives are not recommended for treating impactions. Nutritional support Horses are withheld feed when colic signs are referable to gastrointestinal disease. In long-standing cases, parenteral nutrition may be instituted. Once clinical signs improve, the horse will slowly be re-fed (introduced back to its normal diet), while being carefully monitored for pain. Endotoxemia prevention Endotoxemia is a serious complication of colic and warrants aggressive treatment. Endotoxin (lipopolysaccharide) is released from the cell wall of gram-negative bacteria when they die. 
Normally, endotoxin is prevented from entering systemic circulation by the barrier function of the intestinal mucosa, by antibodies and enzymes which bind and neutralize it, and, for the small amount that manages to enter the blood stream, by removal by Kupffer cells in the liver. Endotoxemia occurs when there is an overgrowth and secondary die-off of gram-negative bacteria, releasing massive quantities of endotoxin. This is especially common when the mucosal barrier is damaged, as with ischemia of the GI tract secondary to a strangulating lesion or displacement. Endotoxemia produces systemic effects such as cardiovascular shock, insulin resistance, and coagulation abnormalities. Fluid support is essential to maintain blood pressure, often with the help of colloids or hypertonic saline. NSAIDs are commonly given to reduce systemic inflammation. However, they decrease the levels of certain prostaglandins that normally promote healing of the intestinal mucosa, which subsequently increases the amount of endotoxin absorbed. To counteract this, NSAIDs are sometimes administered with a lidocaine drip, which appears to reduce this particular negative effect. Flunixin may be used for this purpose at a dose lower than that used for analgesia, so it can be safely given to a colicky horse without risking masking signs that the horse requires surgery. Other drugs that bind endotoxin, such as polymyxin B and Bio-Sponge, are also often used. Polymyxin B prevents endotoxin from binding to inflammatory cells, but is potentially nephrotoxic, so it should be used with caution in horses with azotemia, especially neonatal foals. Plasma may also be given with the intent of neutralizing endotoxin. Laminitis is a major concern in horses suffering from endotoxemia. Ideally, prophylactic treatment should be provided to endotoxic horses, which includes the use of NSAIDs, DMSO, icing of the feet, and frog support. Horses are also sometimes administered heparin, which is thought to reduce the risk of laminitis by decreasing blood coagulability and thus blood clot formation in the capillaries of the foot.
Case-specific drug treatment
Specific causes of colic are best managed with certain drugs. These include:
Spasmolytic agents, most commonly Buscopan, especially in the case of gas colic.
Pro-motility agents: metoclopramide, lidocaine, bethanechol, and erythromycin are used in cases of ileus.
Anti-inflammatories are often used in the case of enteritis or colitis.
Anti-microbials may be administered if an infectious agent is suspected to be the underlying cause of colic.
Phenylephrine: used in cases of nephrosplenic entrapment to contract the spleen, and is followed by light exercise to try to shift the displaced colon back into its normal position.
Psyllium may be given via nasogastric tube to treat sand colic.
Anthelminthics for parasitic causes of colic.
Surgical intervention
Surgery poses significant expense and risks, including peritonitis, the formation of adhesions, complications secondary to general anesthesia, injury upon recovery of the horse which may require euthanasia, dehiscence, or infection of the incisional site. Additionally, surgical cases may develop post-operative ileus, which requires further medical management. However, surgery may be required to save the life of the horse, and 1–2% of all colics require surgical intervention. If a section of intestine is significantly damaged, it may need to be removed (resection) and the healthy parts reattached together (anastomosis).
Horses may have up to 80% of their intestines removed and still function normally, without needing a special diet.
Survival rates
In the case of colics requiring surgery, survival rates are best improved by quick recognition of colic and immediate surgical referral, rather than waiting to see if the horse improves, which only increases the extent of intestinal compromise. Survival rates are higher in surgical cases that do not require resection and anastomosis. 90% of large intestinal colic surgeries that are not due to volvulus, and 20–80% of large colon volvuluses, are discharged, while 85–90% of non-strangulating small intestinal lesions and 65–75% of strangulating intestinal lesions are discharged. 10–20% of small intestinal surgical cases require a second surgery, while only 5% of large intestinal cases do so. Horses that survive colic surgery have a high rate of return to athletic function. According to one study, approximately 86% of horses discharged returned to work, and 83.5% returned to the same or better performance.
Adhesion formation
Adhesions, or scar tissue between various organs that are not normally attached within the abdomen, may occur whenever an abdominal surgery is performed. They are often seen secondary to reperfusion injury where there is ischemic bowel, or after intestinal distention. This injury causes neutrophils to move into the serosa and the mesothelium to be lost, which the body then attempts to repair using fibrin and collagen, leading to adhesion formation between adjacent tissues with either fibrinous or fibrous material. Adhesions may encourage a volvulus, as the attachment provides a pivot point, or force a tight turn between two adjacent loops that are now attached, leading to partial obstruction. For this reason, clinical signs vary from silent lesions to acute obstruction, encouraging future colics including intestinal obstruction or strangulation, and requiring further surgery with its attendant risk of additional adhesions. Generally, adhesions form within the first two months following surgery. Adhesions occur most commonly in horses with small intestinal disease (22% of all surgical colics), foals (17%), those requiring enterotomy or a resection and anastomosis, or those that develop septic peritonitis. Prevention of adhesions begins with good surgical technique to minimize trauma to the tissue and thus reparative responses by the body. Several drugs and substances are used to try to prevent adhesion formation. DMSO (a free radical scavenger), potassium penicillin, and flunixin meglumine may be given preoperatively. The thick intestinal lubricant carboxymethylcellulose is often applied to the GI tract intraoperatively, to decrease trauma from handling by the surgeon and provide a physical barrier between the intestine and adjacent intestinal loops or abdominal organs. It has been shown to double the survival rate of horses, and its use is now standard practice. Hyaluronan can also be used to produce a physical barrier. Intraperitoneal unfractionated heparin is sometimes used, since it decreases fibrin formation and thus may decrease fibrinous adhesions. Omentectomy (removal of the omentum) is a quick, simple procedure that also greatly decreases the risk of adhesions, since the omentum is one organ that commonly adheres to the intestines. The abdomen is usually lavaged copiously before it is sutured closed, and anti-inflammatories are given postoperatively.
A laparoscope may be used post-surgery to look for and break down adhesions; however, there is a risk of additional adhesions forming post-procedure. Encouraging motility post-surgery can also be useful, as it decreases the contact time between tissues. Adhesion-induced colic has a poor prognosis, with a 16% survival rate in one study.
Post-operative care
Small amounts of food are usually introduced as soon as possible after surgery, usually within 18–36 hours, to encourage motility and reduce the risk of ileus and the formation of adhesions. Often horses are stall rested with short bouts of hand walking to encourage intestinal motility. The incision site is carefully monitored for dehiscence, or complete failure of the incision leading to spillage of the abdominal contents out of the incision site, and the horse is not allowed turn-out until the incision has healed, usually after 30 days of stall rest. Abdominal bandages are sometimes used to help prevent the risk of dehiscence. Incisional infection doubles the time required for postoperative care, and dehiscence may lead to intestinal herniation, which reduces the likelihood of return to athletic function. Therefore, antibiotics are given for 2–3 days after surgery, and temperature is constantly monitored, to help assess whether an infection is present. Antibiotics are not used long-term due to the risk of antimicrobial resistance. The incision usually takes 6 months to reach 80% strength, while intestinal healing following resection and anastomosis is much faster, reaching 100% strength in 3 weeks. After the incision has healed adequately, the horse is turned out in a small area for another 2–3 months, and light exercise is added to improve the tone and strength of the abdominal musculature. Weight loss of 75–100 pounds is common after colic surgery, secondary to the decreased function of the gastrointestinal tract and to muscle atrophy that occurs while the horse is rested. This weight is often rapidly replaced. Draft horses tend to have more difficulty post-surgery because they are often under anesthesia for a longer period of time, since they have a greater amount of gastrointestinal tract to evaluate, and because their increased size places more pressure on their musculature, which can lead to muscle damage. Miniature horses and fat ponies are at increased risk for hepatic lipidosis post-surgery, a serious complication.
Prevention
The incidence of colic can be reduced by restricting access to simple carbohydrates, including sugars from feeds with excessive molasses, providing clean feed and drinking water, preventing the ingestion of dirt or sand by using an elevated feeding surface, a regular feeding schedule, regular deworming, regular dental care, a regular diet that does not change substantially in content or proportion, and prevention of heatstroke. Horses that bolt their feed are at risk of colic, and several management techniques may be used to slow down the rate of feed consumption. Supplementing with the previously mentioned psyllium fiber may reduce the risk of sand colic in high-risk areas. Most supplement forms are given one week per month and are available wherever equine feed is purchased. Turnout is thought to reduce the likelihood of colic, although this has not been proven. It is recommended that a horse ideally receive 18 hours of grazing time each day, as in the wild.
However, many times this is difficult to manage with competition horses and those that are boarded, as well as for animals that are easy keepers with access to lush pasture and hence at risk of laminitis. Turnout on a dry lot with lower-quality fodder may have similar beneficial effects. References Further reading The Illustrated Veterinary Encyclopedia for Horsemen Equine Research Inc. Veterinary Medications and Treatments for Horsemen Equine Research Inc. Horse Owner's Veterinary Handbook James M. Giffin, M.D. and Tom Gore, D.V.M. Preventing Colic in Horses Christine King, BVSc, MACVSc External links Vet advice: Colic in horses Colic in Horses The horse professional guide to colic Colic information sheet Colic in Horses in the Merck Veterinary Manual Abdominal pain Horse diseases
9606773
https://en.wikipedia.org/wiki/Hackers%20Creek
Hackers Creek
Hackers Creek is a tributary of the West Fork River, long, in north-central West Virginia in the United States. Via the West Fork, Monongahela and Ohio Rivers, it is part of the watershed of the Mississippi River, draining an area of on the unglaciated portion of the Allegheny Plateau. The stream is believed to have been named for a settler named John Hacker (1743-1824), who lived near the creek for over twenty years from around 1770. He was a magistrate and patriarch in the settlement despite not being able to write. Hackers Creek rises approximately north of Buckhannon in northern Upshur County and flows westwardly into northeastern Lewis County, where it turns northwestwardly and flows through the town of Jane Lew into southern Harrison County, where it joins the West Fork River from the southeast, approximately three miles (5 km) northwest of Jane Lew. According to the West Virginia Department of Environmental Protection, approximately 69% of the Hackers Creek watershed is forested, mostly deciduous. Approximately 28% is used for pasture and agriculture, and less than 1% is urban. Variant spellings According to the Geographic Names Information System, Hackers Creek has also been known historically as: Hacker's Creek Hackers Crick Heackers Creek Heckers Creek NB: Neighboring Barbour County, West Virginia, also has a (much smaller) Hacker's Creek, a tributary of the Tygart Valley River, about 3 miles downstream from Philippi. See also List of West Virginia rivers References Rivers of West Virginia Rivers of Lewis County, West Virginia Rivers of Harrison County, West Virginia Rivers of Upshur County, West Virginia
385509
https://en.wikipedia.org/wiki/Pin%20compatibility
Pin compatibility
In electronics, pin-compatible devices are electronic components, generally integrated circuits or expansion cards, sharing a common footprint and with the same functions assigned or usable on the same pins. Pin compatibility is a property desired by systems integrators as it allows a product to be updated without redesigning printed circuit boards, which can reduce costs and decrease time to market. Although devices which are pin-compatible share a common footprint, they are not necessarily electrically or thermally compatible. As a result, manufacturers often specify devices as being either pin-to-pin or drop-in compatible. Pin-compatible devices are generally produced to allow upgrading within a single product line, to allow end-of-life devices to be replaced with newer equivalents, or to compete with the equivalent products of other manufacturers. Pin-to-pin compatibility Pin-to-pin compatible devices share an assignment of functions to pins, but may have differing electrical characteristics (supply voltages, or oscillator frequencies) or thermal characteristics (TDPs, reflow curves, or temperature tolerances). As a result, their use in a system may require that portions of the system, such as its power delivery subsystem, be adapted to fit the new component. A common example of pin-to-pin compatible devices which may not be electrically compatible are the 7400 series integrated circuits. The 7400 series devices have been produced on a number of different manufacturing processes, but have retained the same pinouts throughout. For example, all 7405 devices provide six NOT gates (or inverters) but may have incompatible supply voltage tolerances. 7405 – Standard TTL, 4.75–5.25 V. 74C05 – CMOS, 4–15 V. 74LV05 – Low-voltage CMOS, 2.0–5.5 V. In other cases, particularly with computers, devices may be pin-to-pin compatible but made otherwise incompatible as a result of market segmentation. For example, Intel Skylake desktop-class Core and Xeon E3v5 processors both use the LGA 1151 socket, but motherboards using C230-series chipsets will only be compatible with Xeon-branded processors, and will not work with Core-branded processors. Drop-in compatibility A drop-in compatible device is a device which may be swapped with another without need to make compensating alterations to the system the device was a part of. The device will have the same functions available on the same pins, and will be electrically and thermally compatible. Such devices may not be an exact match to the devices they can replace. For example, they may have a wider range of supply voltage or temperature tolerances. Software compatibility Software-compatible devices are devices which are able to run the same software to produce the same results without the software having to be modified first. Microcontrollers, FPGAs, and other programmable devices may be pin-to-pin compatible from the perspective of the program on the device, but incompatible in terms of hardware. For example, the device may take the signal on pin X, negate it, and output the result on pin Y. If the method of configuring a pin remains the same but the package of the device (such as TSSOP or QFN) changes, the program will continue to function but the physical locations of the pins the program works with may change. A device may also be pin-compatible while being software-incompatible. 
This may occur when the device uses a different instruction set, or if the device has a multiplexer attached to a pin (which, for example, may allow the pin to be switched between being driven as a GPIO or by an A/D converter) and that multiplexer selects, by default, a different input source than is selected on the device being replaced. To ease the use of software-incompatible devices, manufacturers often provide hardware abstraction layers. Examples of these include CMSIS for ARM Cortex-M processors and the now-deprecated HAL subsystem for UNIX-like operating systems. See also 7400 series integrated circuits Programmable logic Logic family Semiconductor packages External links Giant Internet IC Master Database – A list of 74'xx series and other generic chip pinouts. References Integrated circuits Interoperability
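To make the software-compatibility point above concrete, the following minimal Python sketch models the idea of a firmware-style abstraction layer: application code refers only to logical signal names, and a per-package table maps them to physical pins. The part numbers, package names, and pin numbers here are invented purely for illustration and are not taken from any real datasheet or library.

# Hypothetical pin maps for two package variants of the "same" device.
# Package names and pin numbers are made up for illustration only.
PIN_MAPS = {
    "XYZ123-TSSOP": {"LED": 4, "UART_TX": 7, "ADC_IN": 12},
    "XYZ123-QFN":   {"LED": 2, "UART_TX": 9, "ADC_IN": 15},
}

class Board:
    def __init__(self, package: str):
        self.pins = PIN_MAPS[package]

    def set_high(self, signal: str):
        physical = self.pins[signal]  # translate logical name -> physical pin
        print(f"drive pin {physical} ({signal}) high")

# The application below is "software compatible" with both packages:
# it never mentions physical pin numbers, so a package change only
# requires a new entry in PIN_MAPS, not a change to the program.
for pkg in PIN_MAPS:
    Board(pkg).set_high("LED")

Real hardware abstraction layers such as CMSIS are of course far more involved; the sketch only shows where the package-specific knowledge is isolated.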
6246512
https://en.wikipedia.org/wiki/Nemesis%20%28operating%20system%29
Nemesis (operating system)
Nemesis was an operating system that was designed by the University of Cambridge, the University of Glasgow, the Swedish Institute of Computer Science and Citrix Systems. Nemesis was conceived with multimedia uses in mind. In a microkernel environment, an application is typically implemented by a number of processes, most of which are servers performing work on behalf of more than one client. This leads to enormous difficulty in accounting for resource usage. In a monolithic-kernel-based system, multimedia applications spend most of their time in the kernel, leading to similar problems. The guiding principle in the design of Nemesis was to structure the operating system in such a way that the majority of code could execute in the application process itself. Nemesis therefore had an extremely small, lightweight kernel and performed most operating system functions in shared libraries, which executed in the user's process. The ISAs that Nemesis supported include x86 (Intel i486, Pentium, Pentium Pro, and Pentium II), Alpha, and ARM (StrongARM SA-110). Nemesis also ran on evaluation boards (Alpha 21064 and 21164). See also Exokernel Xen Kernel wide design approaches References External links Nemesis At Cambridge Free software operating systems University of Cambridge Computer Laboratory Citrix Systems
1527151
https://en.wikipedia.org/wiki/Speeds%20and%20feeds
Speeds and feeds
The phrase speeds and feeds or feeds and speeds refers to two separate velocities in machine tool practice, cutting speed and feed rate. They are often considered as a pair because of their combined effect on the cutting process. Each, however, can also be considered and analyzed in its own right. Cutting speed (also called surface speed or simply speed) is the speed difference (relative velocity) between the cutting tool and the surface of the workpiece it is operating on. It is expressed in units of distance across the workpiece surface per unit of time, typically surface feet per minute (sfm) or meters per minute (m/min). Feed rate (also often styled as a solid compound, feedrate, or called simply feed) is the relative velocity at which the cutter is advanced along the workpiece; its vector is perpendicular to the vector of cutting speed. Feed rate units depend on the motion of the tool and workpiece; when the workpiece rotates (e.g., in turning and boring), the units are almost always distance per spindle revolution (inches per revolution [in/rev or ipr] or millimeters per revolution [mm/rev]). When the workpiece does not rotate (e.g., in milling), the units are typically distance per time (inches per minute [in/min or ipm] or millimeters per minute [mm/min]), although distance per revolution or per cutter tooth are also sometimes used. If variables such as cutter geometry and the rigidity of the machine tool and its tooling setup could be ideally maximized (and reduced to negligible constants), then only a lack of power (that is, kilowatts or horsepower) available to the spindle would prevent the use of the maximum possible speeds and feeds for any given workpiece material and cutter material. Of course, in reality those other variables are dynamic and not negligible, but there is still a correlation between power available and feeds and speeds employed. In practice, lack of rigidity is usually the limiting constraint. The phrases "speeds and feeds" or "feeds and speeds" have sometimes been used metaphorically to refer to the execution details of a plan, which only skilled technicians (as opposed to designers or managers) would know. Cutting speed Cutting speed may be defined as the rate at the workpiece surface, irrespective of the machining operation used. A cutting speed for mild steel of 100 ft/min is the same whether it is the speed of the cutter passing over the workpiece, such as in a turning operation, or the speed of the cutter moving past a workpiece, such as in a milling operation. The cutting conditions will affect the value of this surface speed for mild steel. Schematically, speed at the workpiece surface can be thought of as the tangential speed at the tool-cutter interface, that is, how fast the material moves past the cutting edge of the tool, although "which surface to focus on" is a topic with several valid answers. In drilling and milling, the outside diameter of the tool is the widely agreed surface. In turning and boring, the surface can be defined on either side of the depth of cut, that is, either the starting surface or the ending surface, with neither definition being "wrong" as long as the people involved understand the difference. An experienced machinist summed this up succinctly as "the diameter I am turning from" versus "the diameter I am turning to." He uses the "from", not the "to", and explains why, while acknowledging that some others do not. 
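Since cutting speed is simply the tangential speed at the tool–workpiece interface, it can be illustrated with a short sketch. The following Python fragment (illustrative only; the diameters and spindle speed are arbitrary example numbers, not recommendations for any material) converts a diameter and spindle speed into surface speed in both unit systems.

import math

def surface_speed_sfm(diameter_in, rpm):
    # Surface feet per minute: circumference in feet times revolutions per minute.
    return math.pi * diameter_in * rpm / 12.0

def surface_speed_m_min(diameter_mm, rpm):
    # Metres per minute: circumference in metres times revolutions per minute.
    return math.pi * diameter_mm * rpm / 1000.0

# At a fixed spindle speed, surface speed scales linearly with diameter,
# so the largest diameter in the cut sees the highest tangential speed.
for d in (1.0, 5.0, 10.0):  # example diameters in inches
    print(f"{d:4.1f} in dia at 500 rpm -> {surface_speed_sfm(d, 500):6.1f} sfm")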
The logic of focusing on the largest diameter involved (OD of drill or end mill, starting diameter of turned workpiece) is that this is where the highest tangential speed is, with the most heat generation, which is the main driver of tool wear. There will be an optimum cutting speed for each material and set of machining conditions, and the spindle speed (RPM) can be calculated from this speed. Factors affecting the calculation of cutting speed are: The material being machined (steel, brass, tool steel, plastic, wood) (see table below) The material the cutter is made from (high-carbon steel, high-speed steel (HSS), carbide, ceramics, and diamond tools) The economical life of the cutter (the cost to regrind or purchase new, compared to the quantity of parts produced) Cutting speeds are calculated on the assumption that optimum cutting conditions exist. These include: Metal removal rate (finishing cuts that remove a small amount of material may be run at increased speeds) Full and constant flow of cutting fluid (adequate cooling and chip flushing) Rigidity of the machine and tooling setup (reduction in vibration or chatter) Continuity of cut (as compared to an interrupted cut, such as machining square-section material in a lathe) Condition of material (mill scale, hard spots due to white cast iron forming in castings) The cutting speed is given as a set of constants that are available from the material manufacturer or supplier. The most common materials are available in reference books or charts, but will always be subject to adjustment depending on the cutting conditions. The following table gives the cutting speeds for a selection of common materials under one set of conditions. The conditions are a tool life of 1 hour, dry cutting (no coolant), and medium feeds, so they may appear to be incorrect depending on circumstances. These cutting speeds may change if, for instance, adequate coolant is available or an improved grade of HSS is used (such as one that includes cobalt). Machinability rating The machinability rating attempts to quantify the machinability of various materials. It is expressed as a percentage or a normalized value. The American Iron and Steel Institute (AISI) determined machinability ratings for a wide variety of materials by running turning tests at 180 surface feet per minute (sfpm). It then arbitrarily assigned 160 Brinell B1112 steel a machinability rating of 100%. The machinability rating is determined by measuring the weighted averages of the normal cutting speed, surface finish, and tool life for each material. Note that a material with a machinability rating of less than 100% is more difficult to machine than B1112, while a material with a rating of more than 100% is easier. Machinability ratings can be used in conjunction with the Taylor tool life equation in order to determine cutting speeds or tool life. It is known that B1112 has a tool life of 60 minutes at a cutting speed of 100 sfpm. If a material has a machinability rating of 70%, it can be determined, with the above knowns, that in order to maintain the same tool life (60 minutes), the cutting speed must be 70 sfpm (assuming the same tooling is used). When calculating for copper alloys, the machinability rating is arrived at by assuming that a 100% rating corresponds to 600 SFM. For example, phosphor bronze (grades A–D) has a machinability rating of 20. This means that phosphor bronze runs at 20% of 600 SFM, or 120 SFM.
However, 165 SFM is generally accepted as the basic 100% rating for "grading steels". Formula Cutting speed V = (π × D × N) / 1000 m/min, where D = diameter of the workpiece (or rotating cutter) in millimeters and N = spindle speed in rpm. Spindle speed The spindle speed is the rotational frequency of the spindle of the machine, measured in revolutions per minute (RPM). The preferred speed is determined by working backward from the desired surface speed (sfm or m/min) and incorporating the diameter (of workpiece or cutter). The spindle may hold the: Material (as in a lathe chuck) Drill bit in a drill Milling cutter in a milling machine Router bit in a wood router Shaper cutter or knife in a wood shaper or spindle moulder Grinding wheel on a grinding machine. Excessive spindle speed will cause premature tool wear and breakages, and can cause tool chatter, all of which can lead to potentially dangerous conditions. Using the correct spindle speed for the material and tools will greatly enhance tool life and the quality of the surface finish. For a given machining operation, the cutting speed will remain constant for most situations; therefore the spindle speed will also remain constant. However, facing, forming, parting off, and recess operations on a lathe or screw machine involve the machining of a constantly changing diameter. Ideally, this means changing the spindle speed as the cut advances across the face of the workpiece, producing constant surface speed (CSS). Mechanical arrangements to effect CSS have existed for centuries, but they were never applied commonly to machine tool control. In the pre-CNC era, the ideal of CSS was ignored for most work. For unusual work that demanded it, special pains were taken to achieve it. The introduction of CNC-controlled lathes has provided a practical, everyday solution via automated CSS machining process monitoring and control. By means of the machine's software and variable-speed electric motors, the lathe can increase the RPM of the spindle as the cutter gets closer to the center of the part. Grinding wheels are designed to be run at a maximum safe speed; the spindle speed of the grinding machine may be variable, but this should only be changed with due attention to the safe working speed of the wheel. As a wheel wears it will decrease in diameter, and its effective cutting speed will be reduced. Some grinders have the provision to increase the spindle speed, which corrects for this loss of cutting ability; however, increasing the speed beyond the wheel's rating will destroy the wheel and create a serious hazard to life and limb. Generally speaking, spindle speeds and feed rates are less critical in woodworking than metalworking. Most woodworking machines, including power saws such as circular saws and band saws, jointers, and thickness planers, rotate at a fixed RPM. In those machines, cutting speed is regulated through the feed rate. The required feed rate can be extremely variable depending on the power of the motor, the hardness of the wood or other material being machined, and the sharpness of the cutting tool. In woodworking, the ideal feed rate is one that is slow enough not to bog down the motor, yet fast enough to avoid burning the material. Certain woods, such as black cherry and maple, are more prone to burning than others. The right feed rate is usually obtained by "feel" if the material is hand fed, or by trial and error if a power feeder is used. In thicknessers (planers), the wood is usually fed automatically through rubber or corrugated steel rollers.
Some of these machines allow varying the feed rate, usually by changing pulleys. A slower feed rate usually results in a finer surface, as more cuts are made for any length of wood. Spindle speed becomes important in the operation of routers, spindle moulders or shapers, and drills. Older and smaller routers often rotate at a fixed spindle speed, usually between 20,000 and 25,000 rpm. While these speeds are fine for small router bits, using larger bits, say more than about 25 millimeters in diameter, can be dangerous and can lead to chatter. Larger routers now have variable speeds, and larger bits require slower speeds. Drilling wood generally uses higher spindle speeds than metal, and the speed is not as critical. However, larger-diameter drill bits do require slower speeds to avoid burning. Cutting feeds and speeds, and the spindle speeds that are derived from them, are the ideal cutting conditions for a tool. If the conditions are less than ideal, adjustments are made to the spindle speed; this adjustment is usually a reduction in RPM to the closest available speed, or to one that is deemed (through knowledge and experience) to be correct. Some materials, such as machinable wax, can be cut at a wide variety of spindle speeds, while others, such as stainless steel, require much more careful control, as the cutting speed is critical, to avoid overheating both the cutter and workpiece. Stainless steel is one material that hardens very easily under cold working; therefore, an insufficient feed rate or incorrect spindle speed can lead to less-than-ideal cutting conditions, as the workpiece will quickly harden and resist the tool's cutting action. The liberal application of cutting fluid can improve these cutting conditions; however, the correct selection of speeds is the critical factor. Spindle speed calculations Most metalworking books have nomograms or tables of spindle speeds and feed rates for different cutters and workpiece materials; similar tables are also likely available from the manufacturer of the cutter used. The spindle speeds may be calculated for all machining operations once the SFM or MPM is known. In most cases, we are dealing with a cylindrical object such as a milling cutter or a workpiece turning in a lathe, so we need to determine the speed at the periphery of this round object. This speed at the periphery (of a point on the circumference, moving past a stationary point) will depend on the rotational speed (RPM) and diameter of the object. One analogy would be a skateboard rider and a bicycle rider travelling side by side along the road. For a given surface speed (the speed of this pair along the road) the rotational speed (RPM) of their wheels (large for the skater and small for the bicycle rider) will be different. This rotational speed (RPM) is what we are calculating, given a fixed surface speed (speed along the road) and known values for their wheel sizes (cutter or workpiece). The following formulae may be used to estimate this value. Approximation The exact RPM is not always needed; a close approximation will work (using 3 as the value of π). For example,
for a cutting speed of 100 ft/min (a plain HSS cutter on mild steel) and a diameter of 10 inches (the cutter or the workpiece), RPM ≈ (12 × 100) / (3 × 10) = 40 revolutions per minute. For an example using metric values, where the cutting speed is 30 m/min and the diameter is 10 mm, RPM ≈ (1000 × 30) / (3 × 10) = 1000 revolutions per minute. Accuracy However, for more accurate calculations, and at the expense of simplicity, the exact formula RPM = (12 × Speed) / (π × Diameter), or in metric units RPM = (1000 × Speed) / (π × Diameter), can be used. Using the same examples as above, this gives approximately 38 RPM for the imperial case and approximately 955 RPM for the metric case, where: RPM is the rotational speed of the cutter or workpiece, Speed is the recommended cutting speed of the material in meters per minute or feet per minute, and Diameter is in millimeters or inches. Feed rate Feed rate is the velocity at which the cutter is fed, that is, advanced against the workpiece. It is expressed in units of distance per revolution for turning and boring (typically inches per revolution [ipr] or millimeters per revolution). It can be expressed thus for milling also, but it is often expressed in units of distance per time for milling (typically inches per minute [ipm] or millimeters per minute), with considerations of how many teeth (or flutes) the cutter has then determining what that means for each tooth. Feed rate is dependent on the: Type of tool (a small drill or a large drill, high speed or carbide, a boxtool or recess, a thin form tool or wide form tool, a slide knurl or a turret straddle knurl). Surface finish desired. Power available at the spindle (to prevent stalling of the cutter or workpiece). Rigidity of the machine and tooling setup (ability to withstand vibration or chatter). Strength of the workpiece (high feed rates will collapse thin-wall tubing). Characteristics of the material being cut: chip flow depends on material type and feed rate. The ideal chip shape is small and breaks free early, carrying heat away from the tool and work. Threads per inch (TPI) for taps, die heads and threading tools. Cut width. Any time the width of cut is less than half the diameter, a geometric phenomenon called chip thinning reduces the actual chip load. Feed rates need to be increased to offset the effects of chip thinning, both for productivity and to avoid rubbing, which reduces tool life. When deciding what feed rate to use for a certain cutting operation, the calculation is fairly straightforward for single-point cutting tools, because all of the cutting work is done at one point (done by "one tooth", as it were). With a milling machine or jointer, where multi-tipped/multi-fluted cutting tools are involved, the desired feed rate becomes dependent on the number of teeth on the cutter, as well as the desired amount of material per tooth to cut (expressed as chip load). The greater the number of cutting edges, the higher the feed rate permissible: for a cutting edge to work efficiently it must remove sufficient material to cut rather than rub; it also must do its fair share of work. The ratio of the spindle speed and the feed rate controls how aggressive the cut is, and the nature of the swarf formed. Formula to determine feed rate The formula FR = RPM × T × CL can be used to figure out the feed rate at which the cutter travels into or around the work. This would apply to cutters on a milling machine, drill press and a number of other machine tools. This is not to be used on the lathe for turning operations, as the feed rate on a lathe is given as feed per revolution. Where: FR = the calculated feed rate in inches per minute or mm per minute. RPM = the calculated spindle speed for the cutter. T = number of teeth on the cutter. CL = the chip load or feed per tooth.
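Putting the spindle-speed and feed-rate formulas above together, here is a short illustrative Python sketch. The cutting speed, diameter, tooth count, and chip load are example numbers only, not recommendations for any real material or tool; the machinability adjustment simply mirrors the 70%-rating example given earlier.

import math

def rpm_from_sfm(cutting_speed_sfm, diameter_in):
    # RPM = (12 x cutting speed) / (pi x diameter), imperial units.
    return 12.0 * cutting_speed_sfm / (math.pi * diameter_in)

def rpm_from_m_min(cutting_speed_m_min, diameter_mm):
    # RPM = (1000 x cutting speed) / (pi x diameter), metric units.
    return 1000.0 * cutting_speed_m_min / (math.pi * diameter_mm)

def feed_rate(rpm, teeth, chip_load):
    # FR = RPM x T x CL (inches or mm per minute, matching the chip-load unit).
    return rpm * teeth * chip_load

def adjusted_speed(base_speed, machinability_rating):
    # A 70% rating means roughly 70% of the baseline cutting speed
    # for comparable tool life.
    return base_speed * machinability_rating

# Worked example: 100 sfm on a 0.5 in, 4-flute end mill at 0.002 in/tooth.
n = rpm_from_sfm(100.0, 0.5)                                  # about 764 rpm
print(f"spindle speed: {n:.0f} rpm")
print(f"feed rate:     {feed_rate(n, 4, 0.002):.1f} in/min")  # about 6.1 in/min
print(f"70% machinability: {adjusted_speed(100.0, 0.70):.0f} sfm")

The decisive input in the feed-rate calculation is the chip load (CL).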
This is the size of chip that each tooth of the cutter takes. Depth of cut Cutting speed and feed rate come together with depth of cut to determine the material removal rate, which is the volume of workpiece material (metal, wood, plastic, etc.) that can be removed per time unit. Interrelationship of theory and practice Speed-and-feed selection is analogous to other examples of applied science, such as meteorology or pharmacology, in that the theoretical modeling is necessary and useful but can never fully predict the reality of specific cases because of the massively multivariate environment. Just as weather forecasts or drug dosages can be modeled with fair accuracy, but never with complete certainty, machinists can predict with charts and formulas the approximate speed and feed values that will work best on a particular job, but cannot know the exact optimal values until running the job. In CNC machining, usually the programmer programs speeds and feedrates that are as maximally tuned as calculations and general guidelines can supply. The operator then fine-tunes the values while running the machine, based on sights, sounds, smells, temperatures, tolerance holding, and tool tip lifespan. Under proper management, the revised values are captured for future use, so that when a program is run again later, this work need not be duplicated. As with meteorology and pharmacology, however, the interrelationship of theory and practice has been developing over decades as the theory part of the balance becomes more advanced thanks to information technology. For example, an effort called the Machine Tool Genome Project is working toward providing the computer modeling (simulation) needed to predict optimal speed-and-feed combinations for particular setups in any internet-connected shop with less local experimentation and testing. Instead of the only option being the measuring and testing of the behavior of its own equipment, it will benefit from others' experience and simulation; in a sense, rather than 'reinventing a wheel', it will be able to 'make better use of existing wheels already developed by others in remote locations'. Academic research examples Speeds and feeds have been studied scientifically since at least the 1890s. The work is typically done in engineering laboratories, with the funding coming from three basic roots: corporations, governments (including their militaries), and universities. All three types of institution have invested large amounts of money in the cause, often in collaborative partnerships. Examples of such work are highlighted below. In the 1890s through 1910s, Frederick Winslow Taylor performed turning experiments that became famous (and seminal). He developed Taylor's Equation for Tool Life Expectancy. Scientific study by Holz and De Leeuw of the Cincinnati Milling Machine Company did for milling cutters what F. W. Taylor had done for single-point cutters. "Following World War II, many new alloys were developed. New standards were needed to increase [U.S.] American productivity. Metcut Research Associates, with technical support from the Air Force Materials Laboratory and the Army Science and Technology Laboratory, published the first Machining Data Handbook in 1966. The recommended speeds and feeds provided in this book were the result of extensive testing to determine optimum tool life under controlled conditions for every material of the day, operation and hardness." 
One study examined the effect of the variation of cutting parameters on the surface integrity in turning of an AISI 304 stainless steel. The authors found that the feed rate has the greatest impairing effect on the quality of the surface, and that besides the achievement of the desired roughness profile, it is necessary to analyze the effect of speed and feed on the creation of micropits and microdefects on the machined surface. Moreover, they found that the conventional empirical relation that relates feed rate to roughness value does not fit adequately for low cutting speeds. References Bibliography Further reading External links Free Advanced Machinist Calculator for Speeds, Feeds and more Basic Speeds and Feeds Calculator Illustrated Speed and feed calculator Cuttingspeed Software Calculator CNC Online Feeds and Speeds Calculator Chip Thinning Tutorial Metalworking terminology Woodworking Velocity
14825476
https://en.wikipedia.org/wiki/HSC%20Caldera%20Vista
HSC Caldera Vista
HSC Caldera Vista is an Incat-built high-speed catamaran owned by Seajets. The vessel was the first fast craft to bear a Manx name. She was also the sixth Isle of Man Steam Packet Company vessel to bear the name Snaefell. History Caldera Vista was launched as Hoverspeed France for Sea Containers, for use with Hoverspeed, in 1991, and operated as the Sardegna Express on charter before returning to Hoverspeed as the SeaCat Boulogne. In 1994, she was again renamed, to SeaCat Isle of Man, and put on charter to the Isle of Man Steam Packet Company. She brought with her high charter fees and operating costs, and endangered the career of the Lady of Mann (II); the latter was given a much needed lifeline when a freak wave encountered by the SeaCat Isle of Man in the River Mersey twisted the ship's bow and tore off her watertight visor. The IoMSPC decided not to continue chartering the ship from Sea Containers, and she was chartered out to ColorSeaCat as the SeaCat Norge. She returned to Hoverspeed as the SeaCat Norge, and when her owners bought out the IoMSPC in 1996, she returned to the Irish Sea as the SeaCat Isle of Man once again. After briefly going back to Hoverspeed from 1997 until 1998, she returned to the Isle of Man service from 1998 until 2005. 2007 accident SeaCat Isle of Man became Sea Express 1, and operated for Irish Sea Express in 2005. The next year, she returned to the Steam Packet fleet. In February 2007, the vessel was involved in a collision with a cargo vessel in fog on the River Mersey. Nobody was injured, but the ship was seriously damaged and took on a large volume of water. By the next day, the ship was stable. The first attempt to tow the ship across the river to dry dock failed, but the second succeeded. In December 2007, the vessel was renamed Snaefell whilst still under repair. Return to service Snaefell moved under her own power for the first time in over a year when she sailed from the West Float in Birkenhead to the Pier Head Landing Stage and then, after a detour, headed out on trials, which were expected to take three days but took two. Snaefell's first passenger sailing since her accident in 2007 was on 12 May 2008, with the 07:30 sailing to Liverpool. During her time as Snaefell, she was conferred the status of Royal Mail Ship. In May 2009, it was reported in the press that the company was continuing to review the future of the veteran fastcraft in the light of the increased capacity offered by Manannan and poor passenger numbers on the seasonal Belfast and Dublin routes. On 9 July 2009, an article was published on the Isle of Man Today website stating that Manx ministers were pleading with the Steam Packet Company after they confirmed they were reviewing the future of Snaefell and the services to Belfast and Dublin. This has to be balanced against the fact that the Steam Packet Company previously requested, and was granted, an extension to its User Agreement with the Manx Government. This requires the company to provide over 60 sailings each year to Ireland until 2026. Any decision to cease operations to Ireland would therefore either nullify the Agreement or require significant non-performance payments from the Steam Packet. Engine problems In July 2010, Snaefell suffered a crankshaft failure, causing one of her engines to fail. This caused major disruption, as she could not carry out her Liverpool, Belfast and Dublin sailings efficiently, as they would take 4 to 5 hours.
She continued to operate on three engines until September 2010, when the timetable allowed Manannan to cover her sailings. Sale to Seajets In January 2011, the Steam Packet issued a statement saying that Snaefell was no longer part of the operational fleet and would not return to service for the 2011 season. The vessel was reflagged from the United Kingdom to Cyprus. In late April, the vessel was chartered to Seajets and renamed Master Jet; she was then sold to the company in 2012. The vessel operated on the Piraeus-Paros-Naxos-Koufonisia-Amorgos route as of 2015. In 2018, she was renamed Caldera Vista. Routes The vessel operates the following routes: 2019 Heraklion-Thira-Ios-Paros-Mykonos-Thira-Heraklion (daily except Saturdays and Tuesdays) Heraklion-Thira-Heraklion (Saturday and Tuesday) 2018 Heraklion-Thira-Paros-Mykonos-Paros-Thira-Heraklion (Sunday, Wednesday and Thursday) Heraklion-Thira-Mykonos-Tinos-Syros-Thira-Heraklion (Monday and Friday) References External links Specifications- http://www.incat.com.au/domino/incat/incatweb.nsf/0/D57C7F8C7F684242CA25730000212024/$File/hull%20026%20mini%20spec.pdf?OpenElement Snaefell Specifications - steam-packet.com MAIB report into the collision of Sea Express 1 and Alaska Rainbow Ferries of the United Kingdom Ferries of the Isle of Man Ferries of Greece Merchant ships of Cyprus Incat high-speed craft 1990 ships Ships built by Incat
1068799
https://en.wikipedia.org/wiki/Incident%20Command%20System
Incident Command System
The Incident Command System (ICS) is a standardized approach to the command, control, and coordination of emergency response, providing a common hierarchy within which responders from multiple agencies can be effective. ICS was initially developed to address problems of inter-agency responses to wildfires in California and Arizona but is now a component of the National Incident Management System (NIMS) in the US, where it has evolved into use in all-hazards situations, ranging from active shootings to hazmat scenes. In addition, ICS has acted as a pattern for similar approaches internationally. Overview ICS consists of a standard management hierarchy and procedures for managing temporary incidents of any size. ICS procedures should be pre-established and sanctioned by participating authorities, and personnel should be well-trained prior to an incident. ICS includes procedures to select and form temporary management hierarchies to control funds, personnel, facilities, equipment, and communications. Personnel are assigned according to established standards and procedures previously sanctioned by participating authorities. ICS is a system designed to be used or applied from the time an incident occurs until the requirement for management and operations no longer exists. ICS is interdisciplinary and organizationally flexible to meet the following management challenges: Meet the needs of a jurisdiction to cope with incidents of any kind or complexity (i.e. it expands or contracts as needed). Allow personnel from a wide variety of agencies to meld rapidly into a common management structure with common terminology. Provide logistical and administrative support to operational staff. Be cost-effective by avoiding duplication of effort and continuing overhead. Provide a unified, centrally authorized emergency organization. History The ICS concept was formed in 1968 at a meeting of fire chiefs in Southern California. The program reflects the management hierarchy of the US Navy, and at first was used mainly to fight California wildfires. During the 1970s, ICS was fully developed during massive wildfire suppression efforts in California (FIRESCOPE) that followed a series of catastrophic wildfires, starting with the massive Laguna fire in 1970. Property damage ran into the millions, and many people died or were injured. Studies determined that response problems often related to communication and management deficiencies rather than lack of resources or failure of tactics. Weaknesses in incident management were often due to: Lack of accountability, including unclear chain of command and supervision. Poor communication, due to both inefficient use of available communications systems and conflicting codes and terminology. Lack of an orderly, systematic planning process. No effective predefined way to integrate inter-agency requirements into the management structure and planning process. "Freelancing" by individuals within the first response team, and by those with specialized skills, acting during an incident without direction from a team leader (IC) and without coordination with other first responders. Lack of familiarity with common terminology during an incident. Emergency managers determined that the existing management structures, frequently unique to each agency, did not scale to dealing with massive mutual aid responses involving dozens of distinct agencies, and that when these various agencies worked together their specific training and procedures clashed.
As a result, a new command and control paradigm was collaboratively developed to provide a consistent, integrated framework for the management of all incidents, from small incidents to large, multi-agency emergencies. At the beginning of this work, despite the recognition that there were incident- or field-level shortfalls in organization and terminology, there was no mention of the need to develop an on-the-ground incident management system like ICS. Most of the efforts were focused on the multiagency coordination challenges above the incident or field level. It was not until 1972, when Firefighting Resources of Southern California Organized for Potential Emergencies (FIRESCOPE) was formed, that this need was recognized and the concept of ICS was first discussed. ICS was originally called the Field Command Operations System. ICS became a national model for command structures at a fire, crime scene or major incident. ICS was used in New York at the first attack on the World Trade Center in 1993. On 1 March 2004, the Department of Homeland Security, in accordance with the passage of Homeland Security Presidential Directive 5 (HSPD-5) calling for a standardized approach to incident management amongst all federal, state, and local agencies, developed the National Incident Management System (NIMS), which integrates ICS. Additionally, it was mandated that NIMS (and thus ICS) must be utilized to manage emergencies in order to receive federal funding. The Superfund Amendments and Reauthorization Act, Title III, mandated that all first responders to a hazardous materials emergency must be properly trained and equipped in accordance with 29 CFR 1910.120(q). This standard represents OSHA's recognition of ICS. HSPD-5, and thus the National Incident Management System, came about as a direct result of the terrorist attacks on 11 September 2001, which created numerous all-hazard, mass-casualty, multi-agency incidents. Jurisdiction and legitimacy In the United States, ICS has been tested by more than 30 years of emergency and non-emergency applications. All levels of government are required to maintain differing levels of ICS training, and private-sector organizations regularly use ICS for the management of events. ICS is widespread in use from law enforcement to everyday business, as the basic goals of clear communication, accountability, and the efficient use of resources are common to incident and emergency management as well as daily operations. ICS is mandated by law for all hazardous materials responses nationally and for many other emergency operations in most states. In practice, virtually all EMS and disaster response agencies utilize ICS, in part because the United States Department of Homeland Security mandated the use of ICS for emergency services throughout the United States as a condition for federal preparedness funding. As part of FEMA's National Response Plan (NRP), the system was expanded and integrated into the National Incident Management System (NIMS). The United Nations recommended the use of ICS as an international standard. ICS is also used by agencies in Canada. New Zealand has implemented a similar system, known as the Coordinated Incident Management System; Australia has the Australasian Inter-Service Incident Management System; and British Columbia, Canada, has BCERMS, developed by Emergency Management BC.
In Brazil, ICS is also used by the Fire Department of the State of Rio de Janeiro (CBMERJ) and by the Civil Defense of the State of Rio de Janeiro in emergencies and large-scale events. Basis Incidents Incidents are defined within ICS as unplanned situations necessitating a response. Examples of incidents may include: Emergency medical situations (ambulance service) Hazardous material spills, releases to the air (toxic chemicals), releases to a drinking water supply Hostage crises Man-made disasters such as vehicle crashes, industrial accidents, train derailments, or structure fires Natural disasters such as wildfires, flooding, earthquakes or tornadoes Public health incidents, such as disease outbreaks Search and rescue operations Technological crises Terrorist attacks Traffic incidents Events Events are defined within ICS as planned situations. Incident command is increasingly applied to events both in emergency management and non-emergency management settings. Examples of events may include: Concerts Parades and other ceremonies Fairs and other gatherings Training exercises Key concepts Unity of command Each individual participating in the operation reports to only one supervisor. This eliminates the potential for individuals to receive conflicting orders from a variety of supervisors, thus increasing accountability, preventing freelancing, improving the flow of information, helping with the coordination of operational efforts, and enhancing operational safety. This concept is fundamental to the ICS chain of command structure. Common terminology Individual response agencies previously developed their protocols separately, and subsequently developed their terminology separately. This can lead to confusion, as a word may have a different meaning for each organization. When different organizations are required to work together, the use of common terminology is an essential element in team cohesion and communications, both internally and with other organizations responding to the incident. An incident command system promotes the use of a common terminology and has an associated glossary of terms that help bring consistency to position titles, the description of resources and how they can be organized, the type and names of incident facilities, and a host of other subjects. The use of common terminology is most evident in the titles of command roles, such as Incident Commander, Safety Officer or Operations Section Chief. Management by objective Incidents are managed by aiming towards specific objectives. Objectives are ranked by priority; they should be as specific as possible, must be attainable, and, if possible, should be given a working time-frame. Objectives are accomplished by first outlining strategies (general plans of action), then determining appropriate tactics (how the strategy will be executed) for the chosen strategy. Flexible and modular organization The incident command structure is organized in such a way as to expand and contract as needed by the incident scope, resources and hazards. Command is established in a top-down fashion, with the most important and authoritative positions established first. For example, Incident Command is established by the first-arriving unit. Only positions that are required at the time should be established. In most cases, very few positions within the command structure will need to be activated. For example, a single fire truck at a dumpster fire will have the officer filling the role of IC, with no other roles required.
As more trucks are added to a larger incident, more roles will be delegated to other officers, and the Incident Commander (IC) role will probably be handed to a more senior officer. Only in the largest and most complex operations would the full ICS organization be staffed. Conversely, as an incident scales down, roles will be merged back up the tree until there is just the IC role remaining. Span of control To limit the number of responsibilities and resources being managed by any individual, the ICS requires that any single person's span of control should be between three and seven individuals, with five being ideal. In other words, one manager should have no more than seven people working under them at any given time. If more than seven resources are being managed by an individual, then that individual is being overloaded and the command structure needs to be expanded by delegating responsibilities (e.g. by defining new sections, divisions, or task forces). If fewer than three, then the position's authority can probably be absorbed by the next-highest rung in the chain of command. Coordination One of the benefits of the ICS is that it provides a way to coordinate a set of organizations that may otherwise work together only sporadically. While much training material emphasizes the hierarchical aspects of the ICS, it can also be seen as an inter-organizational network of responders. These network qualities give the ICS the flexibility and expertise of a range of organizations. But the network aspects of the ICS also create management challenges. One study of ICS after-action reports found that ICS tended to enjoy higher coordination when members had strong pre-existing trust and working relationships, but struggled when the authority of the ICS was contested and when the network of responders was highly diverse. Coordination on any incident or event is facilitated with the implementation of the following concepts: Incident Action Plans Incident action plans (IAPs) ensure cohesion among everyone involved toward clearly set goals. These goals are set for specific operational periods. They provide supervisors with direct action plans to communicate incident objectives to both operational and support personnel. They include measurable, strategic objectives set for achievement within a time frame (also known as an operational period), which is usually 12 hours but can be any length of time. IAPs for hazardous material (hazmat) incidents must be written and are prepared by the planning section; for other incidents the plan can be verbal, written, or both. The consolidated IAP is a very important component of the ICS that reduces freelancing and ensures a coordinated response. At the simplest level, all incident action plans must have four elements: What do we want to do? Who is responsible for doing it? How do we communicate with each other? What is the procedure if someone is injured? The content of the IAP is organized by a number of standardized ICS forms that allow for accurate and precise documentation of an incident.
FEMA ICS forms ICS 201 – Incident Briefing ICS 202 – Incident Objectives ICS 203 – Organization Assignment List ICS 204 – Assignment List ICS 205 – Incident Radio Communications Plan ICS 205A – Communications List ICS 206 – Medical Plan ICS 207 – Incident Organization Chart ICS 208 – Safety Message/Plan ICS 209 – Incident Summary ICS 210 – Resource Status Change ICS 211 – Incident Check-In List ICS 213 – General Message ICS 214 – Activity Log ICS 215 – Operational Planning Worksheet ICS 215A – Incident Action Plan Safety Analysis ICS 218 – Support Vehicle/Equipment Inventory ICS 219 – Resource Status Cards (T-Cards) ICS 220 – Air Operations Summary Worksheet ICS 221 – Demobilization Check-Out ICS 225 – Incident Personnel Performance Rating Comprehensive resource management Comprehensive resource management is a key management principle that implies that all assets and personnel during an event need to be tracked and accounted for. It can also include processes for reimbursement for resources, as appropriate. Resource management includes processes for: Categorizing resources Ordering resources Dispatching resources Tracking resources Recovering resources Comprehensive resource management ensures that visibility is maintained over all resources so they can be moved quickly to support the preparation and response to an incident, and ensuring a graceful demobilization. It also applies to the classification of resources by type and kind, and the categorization of resources by their status. Assigned resources are those that are working on a field assignment under the direction of a supervisor. Available resources are those that are ready for deployment(staged), but have not been assigned to a field assignment. Out-of-service resources are those that are not in either the "available" or "assigned" categories. Resources can be "out-of-service" for a variety of reasons including: resupplying after a sortie (most common), shortfall in staffing, personnel taking a rest, damaged or inoperable. T-Cards (ICS 219, Resource Status Card) are most commonly used to track these resources. The cards are placed in T-Card racks located at an Incident Command Post for easy updating and visual tracking of resource status. Integrated communications Developing an integrated voice and data communications system, including equipment, systems, and protocols, must occur prior to an incident. Effective ICS communications include three elements: Modes: The "hardware" systems that transfer information. Planning: Planning for the use of all available communications resources. Networks: The procedures and processes for transferring information internally and externally. Composition Incident commander Single incident commander – Most incidents involve a single incident commander. In these incidents, a single person commands the incident response and is the decision-making final authority. Unified command – A unified command involves two or more individuals sharing the authority normally held by a single incident commander. Unified command is used on larger incidents usually when multiple agencies or multiple jurisdictions are involved. A Unified command typically includes a command representative from major involved agencies and/or jurisdictions with one from that group to act as the spokesman, though not designated as an Incident Commander. A Unified Command acts as a single entity. It is important to note, that in Unified Command the command representatives will appoint a single operations section chief. 
Area command – During multiple-incident situations, an area command may be established to provide for incident commanders at separate locations. Generally, an area commander will be assigned – a single person – and the area command will operate as a logistical and administrative support. Area commands usually do not include an operations function. Command staff Safety officer – The safety officer monitors safety conditions and develops measures for assuring the safety of all assigned personnel. Public information officer – The public information officer (PIO or IO) serves as the conduit for information to and from internal and external stakeholders, including the media or other organizations seeking information directly from the incident or event. While less often discussed, the public information officer is also responsible for ensuring that an incident's command staff are kept apprised as to what is being said or reported about an incident. This allows public questions to be addressed, rumors to be managed, and ensures that other such public relations issues are not overlooked. Liaison officer – A liaison serves as the primary contact for supporting agencies assisting at an incident. General staff Operations section chief: Tasked with directing all actions to meet the incident objectives. Planning section chief: Tasked with the collection and display of incident information, primarily consisting of the status of all resources and overall status of the incident. Finance/administration section chief: Tasked with tracking incident related costs, personnel records, requisitions, and administrating procurement contracts required by Logistics. Logistics section chief: Tasked with providing all resources, services, and support required by the incident. 200-Level ICS At the ICS 200 level, the function of Information and Intelligence is added to the standard ICS staff as an option. This role is unique in ICS as it can be arranged in multiple ways based on the judgement of the Incident Commander and needs of the incident. The three possible arrangements are: Information & intelligence officer, a position on the command staff. Information & intelligence section, a section headed by an information & intelligence section chief, a general staff position. Information & intelligence branch, headed by an information & intelligence branch director, this branch is a part of the planning section. 300-Level ICS At the ICS 300 level, the focus is on entry-level management of small-scale, all-hazards incidents with emphasis on the scalability of ICS. It acts as an introduction to the utilization of more than one agency and the possibility of numerous operational periods. It also involves an introduction to the emergency operations center. 400-Level ICS At the ICS 400 level, the focus is on large, complex incidents. Topics covered include the characteristics of incident complexity, the approaches to dividing an incident into manageable components, the establishment of an "area command", and the multi-agency coordination system (MACS). Design Personnel ICS is organized by levels, with the supervisor of each level holding a unique title (e.g. only a person in charge of a section is labeled "chief"; a "director" is exclusively the person in charge of a branch). 
Levels (supervising person's title) are: Incident commander Command staff member (officer) – command staff Section (chief) – general staff Branch (director) Division (supervisor) – A division is a unit arranged by geography, along jurisdictional lines if necessary, and not based on the makeup of the resources within the division. Group (supervisor) – A group is a unit arranged for a purpose, along agency lines if necessary, or based on the makeup of the resources within the group. Unit, team, or force (leader) – Such as "communications unit," "medical strike team," or a "reconnaissance task force." A strike team is composed of resources of the same kind (four ambulances, for instance), while a task force is composed of different types of resources (one ambulance, two fire trucks, and a police car, for instance). Individual resource. This is the smallest level within ICS and usually refers to a single person or piece of equipment. It can refer to a piece of equipment and operator, and less often to multiple people working together. Facilities ICS uses a standard facility nomenclature. Response operations can form a complex structure that must be held together by response personnel working at different and often widely separated incident facilities. These pre-designated incident facilities can include: Incident command post (ICP): The ICP is the location where the incident commander operates during response operations. There is only one ICP for each incident or event, but it may change locations during the event. Every incident or event must have some form of an incident command post. The ICP may be located in a vehicle, trailer, tent, or within a building. The ICP will be positioned outside of the present and potential hazard zone but close enough to the incident to maintain command. The ICP will be designated by the name of the incident, e.g., Trail Creek ICP. Staging area: A location at or near an incident scene where tactical response resources are held while they await assignment. Resources in a staging area are in available status. Staging areas should be located close enough to the incident for a timely response, but far enough away to be out of the immediate impact zone. There may be more than one staging area at an incident. Staging areas can be collocated with the ICP, bases, camps, helibases, or helispots. A base is the location from which primary logistics and administrative functions are coordinated and administered. The base may be collocated with the incident command post. There is only one base per incident, and it is designated by the incident name. The base is established and managed by the logistics section. The resources in the base are always out-of-service. Camps: Locations, often temporary, within the general incident area that are equipped and staffed to provide sleeping, food, water, sanitation, and other services to response personnel who are too far away to use base facilities. Other resources may also be kept at a camp to support incident operations if a base is not accessible to all resources. Camps are designated by geographic location or number. Multiple camps may be used, but not all incidents will have camps. A helibase is the location from which helicopter-centered air operations are conducted. Helibases are generally used on a longer-term basis and include such services as fueling and maintenance. The helibase is usually designated by the name of the incident, e.g. Trail Creek helibase.
Helispots are more temporary locations at the incident, where helicopters can safely land and take off. Multiple helispots may be used. Each facility has unique location, space, equipment, materials, and supplies requirements that are often difficult to address, particularly at the outset of response operations. For this reason, responders should identify, pre-designate and pre-plan the layout of these facilities, whenever possible. On large or multi-level incidents, higher-level support facilities may be activated. These could include: Emergency operations center (EOC): An emergency operations center is a central command and control facility responsible for carrying out the principles of emergency preparedness and emergency management, or disaster management functions at a strategic level during an emergency, and ensuring the continuity of operation of a company, political subdivision or other organization. An EOC is responsible for the strategic overview, or "big picture", of the disaster, and does not normally directly control field assets, instead making operational decisions and leaving tactical decisions to lower commands. The common functions of all EOC's is to collect, gather and analyze data; make decisions that protect life and property, maintain continuity of the organization, within the scope of applicable laws; and disseminate those decisions to all concerned agencies and individuals. In most EOC's there is one individual in charge, and that is the Emergency Manager. Joint information center (JIC): A JIC is the facility whereby an incident, agency, or jurisdiction can support media representatives. Often co-located – even permanently designated – in a community or state EOC the JIC provides the location for interface between the media and the PIO. Most often the JIC also provides both space and technical assets (Internet, telephone, power) necessary for the media to perform their duties. A JIC very often becomes the "face" of an incident as it is where press releases are made available as well as where many broadcast media outlets interview incident staff. It is not uncommon for a permanently established JIC to have a window overlooking an EOC and/or a dedicated background showing agency logos or other symbols for televised interviews. The National Response Coordination Center (NRCC) at FEMA has both, for example, allowing televised interviews to show action in the NRCC behind the interviewer/interviewee while an illuminated "Department of Homeland Security" sign, prominently placed on the far wall of the NRCC, is thus visible during such interviews. Joint operations center (JOC): A JOC is usually pre-established, often operated 24/7/365, and allows multiple agencies to have a dedicated facility for assigning staff to interface and interact with their counterparts from other agencies. Although frequently called something other than a JOC, many locations and jurisdictions have such centers, often where Federal, state, and/or local agencies (often law enforcement) meet to exchange strategic information and develop and implement tactical plans. Large mass gathering events, such as a presidential inauguration, will also utilize JOC-type facilities although they are often not identified as such or their existence even publicized. Multiple agency coordination center (MACC): The MACC is a central command and control facility responsible for the strategic, or "big picture" of a disaster. 
A MACC is often used when multiple incidents are occurring in one area or are particularly complex for various reasons, such as when scarce resources must be allocated across multiple requests. Personnel within the MACC use multi-agency coordination to guide their operations. The MACC coordinates activities between multiple agencies and incidents and does not normally directly control field assets, but makes strategic decisions and leaves tactical decisions to individual agencies. The common functions of all MACCs are to gather and analyze data; to make decisions that protect life and property and maintain continuity of the government or corporation, within the scope of applicable laws; and to disseminate those decisions to all concerned agencies and individuals. While often similar to an EOC, the MACC is a separate entity with a defined area or mission and lifespan, whereas an EOC is a permanently established facility and operation for a political jurisdiction or agency. EOCs often, but not always, follow general ICS principles, but may utilize other management structures (such as an emergency support function (ESF) or hybrid ESF/ICS model). For many jurisdictions the EOC is where elected officials will be located during an emergency and, like a MACC, it supports but does not command an incident. Equipment ICS uses a standard set of equipment nomenclature. ICS equipment includes: Tanker – an aircraft that carries fuel (fuel tanker) or water (water tanker). Tender – like a tanker, but a ground vehicle, also carrying fuel (fuel tender), water (water tender), or even fire-fighting foam (foam tender). Type and kind The "kind" of a resource describes what the resource is – for instance, a generator or a truck. The "type" of a resource describes the size or performance capability of a given kind of resource – for instance, 50 kW for a generator or 3-ton for a truck. Types are formally categorized as "Type 1" through "Type 5", but in live incidents more specific information may be used. In both type and kind, the objective must be included in the resource request. This is done to widen the potential resource response. As an example, a resource request for a small aircraft for aerial reconnaissance of a search and rescue scene may be satisfied by a National Guard OH-58 Kiowa helicopter (type and kind: rotary-wing aircraft, Type II/III) or by a Civil Air Patrol Cessna 182 (type and kind: fixed-wing aircraft, Type I). In this example, requesting only a fixed-wing or only a rotary-wing aircraft, or requesting by type alone, may prevent the other resource's availability from being known (a minimal sketch of this matching logic follows this section). Command transfer A role of responsibility can be transferred during an incident for several reasons. As the incident grows, a more qualified person may be required to take over as incident commander to handle the ever-growing needs of the incident; in reverse, as an incident reduces in size, command can be passed down to a less qualified person (but one still qualified to run the now-smaller incident) to free up highly qualified resources for other tasks or incidents. Other reasons to transfer command include a jurisdictional change, if the incident moves location or area of responsibility, or the normal turnover of personnel during extended incidents. The transfer of command process always includes a transfer of command briefing, which may be oral, written, or a combination of both.
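The type-and-kind scheme above is, in effect, a small data model: each resource has a kind (what it is) and a type (how capable it is), and a request framed around the objective lets any suitable kind/type combination be offered. The following Python sketch is purely illustrative – the class, field, and function names are hypothetical and are not part of any ICS standard or resource-ordering software – and simply restates the aerial-reconnaissance example from the text.

from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str                 # what the resource is, e.g. "rotary-wing aircraft"
    type_: str                # capability category, formally "Type 1" through "Type 5"
    capabilities: set = field(default_factory=set)  # objectives it can satisfy

# Hypothetical inventory restating the example above.
inventory = [
    Resource("National Guard OH-58 Kiowa", "rotary-wing aircraft", "Type II/III",
             {"aerial reconnaissance"}),
    Resource("Civil Air Patrol Cessna 182", "fixed-wing aircraft", "Type I",
             {"aerial reconnaissance"}),
]

def match_request(objective, resources, kind=None, type_=None):
    """Return resources able to meet the objective; kind and type filters are optional."""
    hits = [r for r in resources if objective in r.capabilities]
    if kind is not None:
        hits = [r for r in hits if r.kind == kind]
    if type_ is not None:
        hits = [r for r in hits if r.type_ == type_]
    return hits

# Requesting by objective alone surfaces both aircraft...
print([r.name for r in match_request("aerial reconnaissance", inventory)])
# ...while over-specifying the kind hides the helicopter's availability.
print([r.name for r in match_request("aerial reconnaissance", inventory,
                                     kind="fixed-wing aircraft")])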
See also Community Emergency Response Team Federal Emergency Management Agency Gold–silver–bronze command structure Incident Management Team National Incident Management System National Response Framework Search and rescue References External links Federal Emergency Management National Incident Management System UN Wildfire Working Group report recommending use of ICS FEMA Incident Command Resource Center Embracing the Incident Command System Above and Beyond Theory, FBI Incident management Disaster preparedness in the United States Firefighting in the United States Management systems
41759128
https://en.wikipedia.org/wiki/PSSC%20Labs
PSSC Labs
PSSC Labs is a California-based company that provides supercomputing solutions in the United States and internationally. Its products include high-performance servers, clusters, workstations, and RAID storage systems for scientific research, government and military, entertainment content creators, developers, and private clouds. The company has implemented clustering software from NASA Goddard's Beowulf project in its supercomputers designed for bioinformatics, medical imaging, computational chemistry and other scientific applications. Timeline PSSC Labs was founded in 1984 by Larry Lesser. In 1998, it manufactured the Aeneas Supercomputer for Dr. Herbert Hamber of the University of California, Irvine physics and astronomy department; it was based on Linux and had a maximum speed of 20.1 gigaflops. In 2001, the company developed CBeST, a collection of software packages, utilities and custom scripts used to ease the cluster administration process. In 2003 the company released the third version of its cluster management software, with support for 32-bit and 64-bit AMD and Intel processors, the Linux kernel and other open-source tools. In 2005, PSSC Labs demonstrated its new water-cooling technology for high-performance computers at the ACM/IEEE Supercomputing Conference in Seattle, Washington. In 2007 the company focused on supercomputer development for life sciences researchers and announced its technological solution for full-genome data analysis, including assembly, read mapping, and analysis of large amounts of high-throughput DNA and RNA sequencing data. In 2008 PSSC Labs designed the Powerserve Quattro I/A 4000 supercomputer for genome sequencing. In 2013 it released the CloudOOP Server Platform, a big data analytics / Hadoop server offering up to 50 TB of storage space in a single rack unit (1RU). The company joined the Cloudera Partner Program the following year, certifying the CloudOOP 12000 as compatible with Cloudera Enterprise 5 in 2014. In the same year, MapR used the CloudOOP 12000 platform to set a record time-series database ingestion rate, and the company joined the Hortonworks Partner Program. In 2015 the CloudOOP 12000 was certified as compatible with Hortonworks HDP 2.2. References External links Computer companies of the United States Software companies based in California Development software companies Companies established in 1984 Computer hardware companies Cloud computing providers Privately held companies based in California Companies based in Lake Forest, California 1984 establishments in California Networking hardware companies Software companies of the United States
8185776
https://en.wikipedia.org/wiki/List%20of%20Unix%20commands
List of Unix commands
This is a list of Unix commands as specified by IEEE Std 1003.1-2008, which is part of the Single UNIX Specification (SUS). These commands can be found on Unix operating systems and most Unix-like operating systems. List See also List of GNU Core Utilities commands List of GNOME applications List of GNU packages List of KDE applications List of Unix daemons List of web browsers for Unix and Unix-like operating systems Unix philosophy Footnotes External links IEEE Std 1003.1-2004 specifications IEEE Std 1003.1-2008 specifications Rosetta Stone For *Nix – configurable list of equivalent programs for *nix systems. The Unix Acronym List: Unix Commands – explains the names of many Unix commands. Unix programs System administration
22869138
https://en.wikipedia.org/wiki/OpenedHand
OpenedHand
OpenedHand was an embedded Linux software start-up that was acquired by Intel in the third quarter of 2008. The firm developed an OpenEmbedded distribution called Poky Linux (now part of the Yocto Project) and the Clutter library. The latter is heavily used in the customized UIs of Maemo and Moblin, embedded Linux distributions from Nokia and Intel, respectively. References Intel software
74588
https://en.wikipedia.org/wiki/Caria
Caria
Caria (from Greek: Καρία, Karia) was a region of western Anatolia extending along the coast from mid-Ionia (Mycale) south to Lycia and east to Phrygia. The Ionian and Dorian Greeks colonized the west of it and joined the Carian population in forming Greek-dominated states there. Carians were described by Herodotus as being of Minoan descent, while he reports that the Carians themselves maintained that they were Anatolian mainlanders intensely engaged in seafaring and were akin to the Mysians and the Lydians. The Carians spoke Carian, a native Anatolian language closely related to Luwian. Also closely associated with the Carians were the Leleges, which could be an earlier name for the Carians. Municipalities of Caria Cramer's detailed catalog of Carian towns in classical Greece is based entirely on ancient sources. The multiple names of towns and geomorphic features, such as bays and headlands, reveal an ethnic layering consistent with the known colonization. Coastal Caria Coastal Caria begins with Didyma south of Miletus, but Miletus itself had been placed in pre-Greek Caria. South of it is the Iassicus Sinus (Güllük Körfezi) and the towns of Iassus and Bargylia, giving an alternative name of Bargyleticus Sinus to Güllük Körfezi, and nearby Cindye, which the Carians called Andanus. After Bargylia is Caryanda or Caryinda, and then on the Bodrum Peninsula Myndus (Mentecha or Muntecha), from Miletus. In the vicinity is Naziandus, exact location unknown. On the tip of the Bodrum Peninsula (Cape Termerium) is Termera (Telmera, Termerea), and on the other side Ceramicus Sinus (Gökova Körfezi). It "was formerly crowded with numerous towns." Halicarnassus, a Dorian Greek city, was planted there among six Carian towns: Theangela, Sibde, Medmasa, Euranium, Pedasa or Pedasum, and Telmissus. These with Myndus and Synagela (or Syagela or Souagela) constitute the eight Lelege towns. Also on the north coast of the Ceramicus Sinus are Ceramus and Bargasus. On the south of the Ceramicus Sinus is the Carian Chersonnese, or Triopium Promontory (Cape Krio), also called Doris after the Dorian colony of Cnidus. At the base of the peninsula (Datça Peninsula) is Bybassus or Bybastus, from which an earlier name, the Bybassia Chersonnese, had been derived. By then it was known as Acanthus and Doulopolis ("slave city"). South of the Carian Chersonnese is Doridis Sinus, the "Gulf of Doris" (Gulf of Symi), the locale of the Dorian Confederacy. There are three bays in it: Bubassius, Thymnias and Schoenus, the last enclosing the town of Hyda. In the gulf somewhere are Euthene or Eutane, Pitaeum, and an island: Elaeus or Elaeussa near Loryma. On the south shore is the Cynossema, or Onugnathos Promontory, opposite Symi. South of there is the Rhodian Peraea, a section of the coast under Rhodes. It includes Loryma or Larymna in Oedimus Bay, Gelos, Tisanusa, the headland of Paridion, Panydon or Pandion (Cape Marmorice) with Physicus, Amos, Physca or Physcus, also called Cressa (Marmaris). Beyond Cressa is the Calbis River (Dalyan River). On the other side is Caunus (near Dalyan), with Pisilis or Pilisis and Pyrnos between. Then follow some cities that some assign to Lydia and some to Caria: Calynda on the Indus River, Crya, Carya, Carysis or Cari and Alina in the Gulf of Glaucus (Katranci Bay or the Gulf of Makri), the Glaucus River being the border. Other Carian towns in the gulf are Clydae or Lydae and Aenus.
Inland Caria At the base of the east end of Latmus near Euromus, and near Milas where the current village Selimiye is, was the district of Euromus or Eurome, possibly Europus, formerly Idrieus and Chrysaoris (Stratonicea). The name Chrysaoris once applied to all of Caria; moreover, Euromus was originally settled from Lycia. Its towns are Tauropolis, Plarasa and Chrysaoris. These were all incorporated later into Mylasa. Connected to the latter by a sacred way is Labranda. Around Stratonicea is also Lagina or Lakena as well as Tendeba and Astragon. Further inland towards Aydin is Alabanda, noted for its marble and its scorpions, Orthosia, Coscinia or Coscinus on the upper Maeander and Halydienses, Alinda or Alina. At the confluence of the Maeander and the Harpasus is Harpasa (Arpaz). At the confluence of the Maeander and the Orsinus (also called the Corsymus or Corsynus) is Antioch on the Maeander, and on the Orsinus in the mountains, near Geyre, is Gordiutichos ("Gordius' Fort"), a border town with Phrygia. Founded by the Leleges and called Ninoe, it became Megalopolis ("Big City") and Aphrodisias, sometime capital of Caria. Other towns on the Orsinus are Timeles and Plarasa. Tabae was at various times attributed to Phrygia, Lydia and Caria and seems to have been occupied by mixed nationals. Caria also comprises the headwaters of the Indus and Eriya or Eriyus and Thabusion on the border with the small state of Cibyra. History Pre-Classical Greek states and people The name of Caria also appears in a number of early languages: Hittite Karkija (a member state of the Assuwa league, c. 1250 BC), Babylonian Karsa, Elamite and Old Persian Kurka. According to Herodotos, the legendary King Kar, son of Zeus and Creta, founded Caria and named it after himself, and his brothers Lydos and Mysos founded Lydia and Mysia, respectively. It is suggested that the mythological link between Caria and Minos' Crete was for the purpose of proving the Hellenic lineage of the Carians, who disputed such an association by maintaining that they were autochthonous inhabitants of the mainland. The Carians pointed to the shrine of Zeus at Mylasa, which they shared with the Mysians and Lydians, as proof that they were brother races. Sovereign state hosting the Greeks Caria arose as a Neo-Hittite kingdom around the 11th century BC. The coast of Caria was part of the Doric hexapolis ("six-cities") when the Dorians arrived after the Trojan War, in about the 13th century BC, in the last and southernmost waves of Greek migration to western Anatolia's coastline, and occupied former Mycenaean settlements such as Knidos and Halicarnassos (near present-day Bodrum). Herodotus, the famous historian, was born in Halicarnassus during the 5th century BC. Greek apoikism (a form of colonization) in Caria took place mostly on the coast, as well as in the interior in great numbers, and groups of cities and towns were organized in local federations. Homer's Iliad records that at the time of the Trojan War, the city of Miletus belonged to the Carians, and was allied to the Trojan cause. Lemprière notes that "As Caria probably abounded in figs, a particular sort has been called Carica, and the words In Care periculum facere, have been proverbially used to signify the encountering of danger in the pursuit of a thing of trifling value." The region of Caria continues to be an important fig-producing area to this day, accounting for most fig production in Turkey, which is the world's largest producer of figs.
One account also cites Aristotle's claim that Caria, as a naval empire, occupied Epidaurus and Hermione, and that this was confirmed when the Athenians uncovered the graves of the dead on Delos. Half of them were identified as Carians based on the characteristics of the weapons they were buried with. Lydian province The expansionism of Lydia under Croesus (560–546 BC) incorporated Caria briefly into Lydia before it fell to the Achaemenid advance. Persian satrapy Caria was then incorporated into the Persian Achaemenid Empire as a satrapy (province) in 545 BC. The most important town was Halicarnassus, from where its sovereigns, the tyrants of the Lygdamid dynasty (c. 520–450 BC), reigned. Other major towns were Latmus, refounded as Heracleia under Latmus, Antiochia, Myndus, Laodicea, Alinda and Alabanda. Caria participated in the Ionian Revolt (499–493 BC) against Persian rule. During the Second Persian invasion of Greece (480–479 BC), the cities of Caria were allies of Xerxes I, and they fought at the Battle of Artemisium and the Battle of Salamis, where Artemisia, the queen of Halicarnassus, commanded a contingent of 70 Carian ships. Themistocles, before the battles of Artemisium and Salamis, tried to split the Ionians and Carians from the Persian coalition. He urged them to come over to his side, or else not to take part in the battles; and if they were bound by too strong a compulsion to be able to revolt, to be purposely slack once the battles began. In the Life of Themistocles in his Parallel Lives, Plutarch wrote: "Phanias writes that the mother of Themistocles was not a Thracian, but a Carian woman and her name was Euterpe, and Neanthes adds that she was from Halicarnassus in Caria." After the unsuccessful Persian invasion of Greece in 479 BC, the cities of Caria became members of the Athenian-led Delian League, but then returned to Achaemenid rule for about one century, from around 428 BC. Under Achaemenid rule, the Carian dynast Mausolus took control of neighbouring Lycia, a territory which was still held by Pixodarus, as shown by the Xanthos trilingual inscription. The Carians were incorporated into the Macedonian Empire following the conquests of Alexander the Great and the Siege of Halicarnassus in 334 BC. Halicarnassus was the location of the famed Mausoleum dedicated to Mausolus, a satrap of Caria from 377 to 353 BC, by his wife, Artemisia II of Caria. The monument became one of the Seven Wonders of the Ancient World, and from it the Romans came to call any grand tomb a mausoleum. Macedonian empire Caria was conquered by Alexander III of Macedon in 334 BC with the help of Ada of Caria, the former queen of the land, who had been dethroned by the Persian Empire and actively helped Alexander in his conquest of Caria on condition of being reinstated as queen. After their capture of Caria, she declared Alexander her heir. Roman-Byzantine province As part of the Roman Empire, the name of Caria was still used for the geographic region, but the territory administratively belonged to the province of Asia. During the administrative reforms of the 4th century this province was abolished and divided into smaller units. Caria became a separate province as part of the Diocese of Asia. Christianity was on the whole slow to take hold in Caria. The region was not visited by St. Paul, and the only early churches seem to be those of Laodicea and Colossae (Chonae) on the extreme inland fringe of the country, which itself pursued its pagan customs.
It appears that it was not until Christianity was officially adopted in Constantinople that the new religion made any real headway in Caria. Dissolution under the Byzantine Empire and passage to Turkish rule In the 7th century, Byzantine provinces were abolished and the new military theme system was introduced. The region corresponding to ancient Caria was captured by the Turks under the Menteşe Dynasty in the early 13th century. There are only indirect clues regarding the population structure under the Menteşe and the parts played in it by Turkish migration from inland regions and by local conversions, but the first Ottoman Empire census records indicate, in a situation not atypical for the region as a whole, a large Muslim (practically exclusively Turkish) majority reaching as high as 99% and a non-Muslim minority (practically exclusively Greek, supplemented with a small Jewish community in Milas) as low as one per cent. One of the first acts of the Ottomans after their takeover was to transfer the administrative center of the region from its millenary seat in Milas to the then much smaller Muğla, which was nevertheless better suited for controlling the southern fringes of the province. Still named Menteşe until the early decades of the 20th century, the kazas corresponding to ancient Caria are recorded by sources such as G. Sotiriadis (1918) and S. Anagiostopoulou (1997) as having a Greek population averaging around ten per cent of the total, ranging somewhere between twelve and eighteen thousand, many of them reportedly recent immigrants from the islands. Most chose to leave in 1919, before the population exchange. Archaeology In July 2021, archaeologists led by Abuzer Kızıl announced the discovery of two 2,500-year-old marble statues and an inscription during excavations at the Temple of Zeus Lepsynos in Euromus. According to Abuzer Kızıl, one of the statues was naked, while the other was wearing armor made of leather and a short skirt. Both statues were depicted holding a lion. See also Ancient regions of Anatolia Carians Carian language Aphrodisias Notes Further reading Olivier Henry and Koray Konuk (eds.), KARIA ARKHAIA; La Carie, des origines à la période pré-hékatomnide (Istanbul, 2019). 604 pages. Riet van Bremen and Jan-Mathieu Carbon (eds.), Hellenistic Karia: Proceedings of the First International Conference on Hellenistic Karia, Oxford, 29 June-2 July 2006 (Talence: Ausonius Editions, 2010). (Etudes, 28). Lars Karlsson and Susanne Carlsson, Labraunda and Karia (Uppsala, 2011). External links Livius.org: History and Culture of Ancient Caria Historia Numorum Online, Caria: ancient Greek coins from Caria Asia Minor Coins: ancient Greek and Roman coins from Caria Ancient Caria: In the garden of the sun, CANAN KÜÇÜKEREN, Hürriyet Daily News, 28 March 2011 States and territories disestablished in the 6th century BC Ancient Greek geography Ancient Greek archaeological sites in Turkey Carian people Historical regions of Anatolia History of Aydın Province History of Muğla Province Ionian colonies Praetorian prefecture of the East States and territories established in the 11th century BC Asia (Roman province)
42927292
https://en.wikipedia.org/wiki/List%20of%20U.S.%20Department%20of%20Defense%20code%20names
List of U.S. Department of Defense code names
This is an incomplete list of U.S. Department of Defense code names, primarily of the two-word variety. According to Arkin (2005), there are officially three types of code name: Nicknames – a combination of two separate unassociated and unclassified words (e.g. Polo and Step) assigned to represent a specific program, special access program, exercise, or activity. Code words – a single classified word (e.g. BYEMAN) which identifies a specific special access program or portion. A list of several such code words can be seen at Byeman Control System. Exercise terms – a combination of two words, normally unclassified, used exclusively to designate an exercise or test. In 1975, the Joint Chiefs of Staff introduced the Code Word, Nickname, and Exercise Term System (NICKA), which automated the assignment of names. NICKA gives each DOD organisation a series of two-letter alphabetic sequences, requiring the 'first word' of a nickname to begin with an assigned letter pair. For example, AG through AL were assigned to United States Joint Forces Command. List of code names A Able – NATO Allied Command Europe and U.S. European Command nuclear weapons exercise first word. First gained prominence after the Able Archer 83 nuclear command and control exercise. Able Ally – annual command post exercise involving escalation to nuclear use. Held in November/December. Able Archer 83 Able Crystal – nuclear weapons-related exercise Able Gain – annual United States Air Forces in Europe field training exercise involving NATO nuclear sharing forces Able Staff – command post exercise, April–September 1997, practicing SACEUR's nuclear warning system. Able – Coast Guard first word Able Manner – Windward Passage patrols to interdict Haitian migrants, January 1993 – November 1993. Able Response, Able Vigil Operation Able Sentry/Sabre 1993–1999 – U.S. Army task force attached to United Nations Preventive Deployment Force (UNPREDEP) in Macedonia to monitor border activity. Ace Guard was a NATO deployment of the ACE Mobile Force (Air) and surface-to-air missiles to Turkey from 3 January to 8 March 1991. Turkey had requested greater NATO forces to be deployed to meet any Iraqi threat in the lead-up to the first Gulf Crisis/War. Operation Acid Gambit: operation undertaken by U.S. Army Delta Force and the 160th SOAR to rescue Kurt Muse, a U.S. citizen involved in the broadcast of anti-Noriega material, during the United States invasion of Panama, 1989. Active Edge was a routine no-notice NATO Allied Forces Central Europe readiness exercise held twice yearly. "The most recent such exercise took place, on the date and in the format planned, on 12th June 1989. It did not include the exercise deployment of forces outside their garrisons." (House of Lords Debate 27 June 1989) Operation Active Endeavour – NATO Allied Forces Southern Europe Mediterranean patrols Adventure – ACE Mobile Force first word Adventure Exchange – command post exercise Adventure Express – winter exercise series, dating to at least 1983. African – U.S.-Moroccan EUCOM (now Africa Command) first word African Eagle – U.S.-Moroccan biennial exercise practicing deployment of USAF units to Morocco. Dates to at least 1984. African Falcon '85, African Fox '85. African Lion – in 2009 described as "Train forces capable of conducting joint and combined U.S., air, and land combat interoperability operations." Exercise Agile Spirit 19 began with dual opening ceremonies at Senaki Air Base and Vaziani Training Area in the country of Georgia on July 27, 2019.
Approximately 3,300 military personnel from 14 allied and partner nations will participate in the exercise. Exercise Alam Halfa: U.S.-New Zealand, NZ-sponsored land forces exercise, Linton and Napier, central North Island, April 26-May 6, 2012. The new exercise series, according to the New Zealand Herald, was made possible by the "Wellington Declaration" signed by the two countries in November 2010. Continued probably yearly after that point; Alam Halfa 2013. Named for the Battle of Alam Halfa during World War II. Allied – NATO Allied Command Europe first word Allied Action, Allied Effort – CJTF exercises Operation Allied Force 1999 – Air war over Serbia to withdraw forces from Kosovo. Operations Allied Goodwill I & II, 4–9 February & 27 February-24 March 1992. After the collapse of the Soviet Union in December 1991, NATO flew teams of humanitarian assistance experts and medical advisors to Russia and other former Soviet states using NATO Airborne Early Warning Force trainer cargo aircraft. Operation Amber Star – Delta Force and Intelligence Support Activity anti-Persons Indicted for War Crimes (PIFWC) reconnaissance and surveillance, Bosnia-Herzegovina Operation Anchor Guard – 10 August 1990 – 9 March 1991. Following the Iraqi invasion of Kuwait of 2 August 1990, NATO Airborne Early Warning Force Boeing E-3 Sentry aircraft were moved to Konya, Turkey to monitor the situation. The aircraft remained based at Konya to maintain surveillance of south-eastern Turkey throughout the crisis, which led to the Gulf War of January–March 1991. Anatolian Eagle – an air force exercise hosted by the Turkish Air Force and held in Konya, Turkey. There are both national and international exercises held, the international exercises usually involving air arms of the United States, other NATO forces, and Asian countries. The first exercise, Anatolian Eagle 01, was held by TAF Operations Command on 18–29 June 2001. As well as Turkey, the air forces of USA and Israel also participated. Operation Arc Light B-52 operations in Southeast Asia. Operation Arid Farmer – 1983 Support to the crisis in Chad Ardent Sentry – annual U.S. Northern Command homeland security/defense exercise. Armada Sweep – U.S. Navy electronic surveillance from ships off the coast of East Africa to support drone operations in the region. Exercises Atlantic Guard (May 2002, Land Forces Atlantic Area); Atlantic Spear (18-22 November, 2002, hosted by LFAA); Atlantic Shield (12 May 2003, hosted by Halifax Port Authority - Canadian interagency homeland security exercises. Atlas – U.S. European Command/Africa Command African and sometimes European operation first word Atlas Drop – from 1997 to 2003, U.S.-Tunisian exercise Central Accord 14 was started by U.S. European Command in 1996, at which time it was called Atlas Drop. AFRICOM took over the exercise in 2008, and renamed it Atlas Accord in 2012. This put it in line with AFRICOM's other “Accord series” exercises, which focus on training African ground forces. Atlas Accord 12 was an AFRICOM Mali-based medical exercise conducted in Mopti, Mali, on 7–15 February 2012 despite the cancellation of Flintlock 12. The joint-aerial-delivery exercise, hosted by U.S. Army Africa, brought together Army personnel with African armed forces to enhance air drop capabilities and ensure effective delivery of military resupply materials and humanitarian aid. Atlas Eagle – in 2009 described as "Train forces capable of conducting joint and combined U.S., air, and land combat interoperability operations." 
Atlas Response – response to Mozambique floods of 2001 Atlas Vision – peacekeeping exercise with Russia. Atlas Vision 2012 appears to have been the first of a series, according to commentators at Small Wars Journal. Atlas Vision 2013 took place in Germany. U.S. European Command had been in the planning stages for Atlas Vision 2014, which was to take place in July in Chelyabinsk (Chelyabinsk Oblast), and focus on joint peace-keeping operations. Because of the 2014 pro-Russian conflict in Ukraine, “all planning for this exercise has been suspended.” Attain Document – in 1986, the US Navy began several "Freedom of Navigation" operations in the area around Libya, the first two parts of the operation being held from January 26–30, and February 12–15 without incident. The third part began on 23 March 1986 and led to the Action in the Gulf of Sidra (1986). Operation Assured Delivery – DOD logistical support to humanitarian aid efforts in Georgia following the Russo-Georgian War in 2008. Assured Lift – a Joint Task Force carried out move of Economic Community of West African States Monitoring Group cease-fire monitoring troops into Liberia, March–April 1997, from Abidjan. See also European Command documentation. Assured Response – a Joint Task Force carried out Non-combatant evacuation operation from Monrovia, Liberia, 8 April-12 August 1996. Run by Special Operations Command, Europe. Operation Auburn Endeavor 1998 – relocation of uranium fuel from Tbilisi, Georgia. Exercise Austere Challenge – October 2012 US-Israel military exercise (missile defense). Austere Challenge '15 was a warfighting exercise conducted across several locations in the U.S. European Command area, which involved participation by the 1 (German/Netherlands) Corps. Austere Strike – U.S. Air Force system utilizing an electro-optical seeker and tracker for acquisition and tracking missions flown by McDonnell Douglas F-4 Phantom II aircraft. Autumn Forge – A series of NATO exercises conducted each year in Allied Command Europe. It began in 1975 linking a number of training exercises under a common scenario, to present a more potent public image. Autumn Forge 83. Operation Autumn Return – non-combatant evacuation operation (NEO) in Côte d'Ivoire, September–October 2002. Operation Avid Recovery – U.S. European Command activities with Nigerian and British service personnel in clearing unexploded ordnance left over after the 2002 Lagos armoury explosion at Ikeja Cantonment, Lagos, on 27 January 2002. U.S. Explosive Ordnance Disposal soldiers helped to "stabilize" the cantonment area, as well as "providing safety training to the public and special ordnance handling training" for Nigerian Armed Forces personnel. Joint Task Force Aztec Silence – European Command "established Joint Task Force AZTEC SILENCE under the Commander of the U.S. Sixth Fleet in December 2003 to counter transnational terrorism in the under-governed areas of Northern Africa and to build closer alliances with those governments. In support of this, U.S. Navy intelligence, surveillance and reconnaissance assets Lockheed P-3 Orions based in Sigonella, Sicily were used to collect and share information with partner nations and their militaries. This robust cooperative ISR effort was augmented by the release of intelligence collected by national assets." 
B Banner – First word for withdrawal of USAF units from Thailand in extension of Keystone operations Banner Star – Inactivation of 43d Tactical Electronic Warfare Squadron, 556th Civil Engineering Squadron (Heavy Repair), 609th Special Operations Squadron, discontinuance of F-102 detachment at Udorn and movement of planes to Clark Air Base, consolidating F-105s at Takhli, reduction of C-121s of 553d Reconnaissance Wing by one third. Banner Sun – Ended USAF activities at Takhli Royal Thai Air Force Base; inactivated 355th Tactical Fighter Wing, moved F-105s to Kadena Air Base, moved one squadron of Wild Weasel aircraft to Korat, reduced 553d Reconnaissance Wing to a squadron, moved 11th Tactical Reconnaissance Squadron to United States, discontinued F-102 detachment at Don Muang and movement of planes to Clark Air Base. Bar None - Strategic Air Command exercise to test operational effectiveness of a wing. Name replaced by Buy None. Exercise Battle Griffin – amphibious exercise practising reception, staging, and operation of a MAGTF in defense of Northern Norway. Also involved UK, Netherlands. In 1991 Exercise Battle Griffin took place in February–March. That year the 2nd MEB made the first test of the Norway Air-Landed Marine Expeditionary Brigade, composed completely of Marine Corps Reserve units as Operation Desert Storm was getting under way. The force comprised HQ Company 25th Marines, 3/25 Marines, Co E, 4th Reconnaissance Battalion, and 1st Battalion, 14th Marines (artillery, composed of HQ, Alpha, and Bravo Batteries). Battle Griffin 93; Battle Griffin 96. Beacon Flash – U.S.-Oman dissimilar air combat exercise going back to the 1970s. Carrier Air Wing 1 flying from carried out at least two 'Beacon Flash' exercises in the first half of 1983 (Command History 1983). Operation Big Buzz – a U.S. military entomological warfare field test conducted in the U.S. state of Georgia in 1955. Operation Big Star – Minuteman Mobility Test Train rail-mobile test of deployment of Minuteman ICBMs, 1960. Big Safari – a United States Air Force program begun in 1952 which provides management, direction, and control of the acquisition, modification, and logistics support for special purpose weapons systems derived from existing aircraft and systems. Operation Blade Jewel: the return of military dependents to the U.S. at the time of the United States invasion of Panama. Operation Blue Bat Deployment of Composite Air Strike Force to Lebanon in 1958 Bold Alligator – post Cold War Pacific Fleet amphibious exercise, with foreign participation. Bold Quest – Nearly 1,800 military personnel from U.S. and partner nations participated in Bold Quest 17.2 in Savannah, Georgia, the latest in a series of coalition capability demonstration and assessment events sponsored by the Joint Staff. Over the course of 18 days in October–November, members of the U.S. armed services, National Guard, U.S. Special Operations Command, NATO Headquarters and 16 partner states participated in the demonstration, which collected technical data on systems and subjective judgments from the warfighters using them. Bounty Hunter - counter-space electronic warfare system located at Peterson Air Force Base, tested by 17th Test Squadron on behalf of United States Space Force during February 2020. Exercise Bright Star – U.S./Egypt, downsizing Buckskin Rider: one of numerous exercises 40th Air Division, USAF, took part in 1951–89 time period. 
Operation Buffalo Hunter – Drone reconnaissance operations over North Vietnam Operation Bullet Shot – temporary duty assignment of US-based technicians to Andersen Air Force Base, Guam, during the Vietnam War. Known as "the herd shot 'round the world".) Bumpy Action – Drone reconnaissance missions over Southeast Asia 1968–1969 Operation Burnt Frost – interception and destruction of a non-functioning U.S. National Reconnaissance Office (NRO) satellite named USA-193. A launch from the cruiser Lake Erie took place on February 20, 2008. Busy Sentry: Strategic Air Command exercise for intercontinental ballistic missile units. Busy Sentry II: Strategic Air Command Single Integrated Operational Plan (SIOP) 4D missile training assistance program Busy Player: Exercise which included participation of 40th Air Division (in 1951–89 period). Busy Usher: Strategic Air Command launch of No. 13 LF-02 missile MK-1 Minuteman-II Button Up: Strategic Air Command security system reset procedures used during Minuteman facility wind down Buy None: Strategic Air Command exercise to test operational effectiveness of wings. Name replaced Bar None. Included participation of 40th Air Division in 1951–89 period. C Operation Calm Support 1998–1999 – Support to Kosovo Diplomatic Observer Mission mission. Exercise Carte Blanche - NATO atomic warfare exercise, circa 1955, which the 21st Fighter-Bomber Group took part in, vicinity of Central Europe. Celestial Balance – 2009 Baraawe raid in Somalia that killed Saleh Ali Saleh Nabhan Exercise Central Enterprise – NATO Allied Forces Baltic Approaches/Allied Forces Central Europe exercise, "designed to test the integrated air defense system throughout Western Europe. Regular exercises which incorporate a major military low flying element over the United Kingdom include Exercises Elder Forest (once every two years), Elder Joust (once a year), Central Enterprise (once a year), Mallet Blow (twice a year), OSEX (once a year) and Salty Hammer (once a year). Some of these exercises test and practice the United Kingdom air defences while others primarily provide aircrews with training in tactical low flying techniques. The June 1982 Central Enterprise exercise marked the first practical test of the new NATO airborne early warning system." 1997 included deployment of 301st Fighter Wing, Air Force Reserve. Circuit Gold – On 7 November 1973 CINCPACFLT announced the deployment of a CIRCUIT GOLD aircraft to monitor units of the Soviet Navy. CIRCUIT GOLD was the name for Navy special multi-sensor Lockheed P-3A aircraft. Two such aircraft were assigned to the United States Pacific Fleet, operated by VP-4. (CINCPAC Command History 1973, 254, 281/818 at Nautilus) Operation Chrome Dome – Strategic Air Command airborne alert indoctrination training. Cobra Ball – Boeing RC-135 reconnaissance aircraft Cobra Dane – AN/FPS-108 Cobra Dane passive electronically scanned array (PESA) phased array radar installation operated by Raytheon for the United States Air Force at Eareckson Air Station, Shemya Island, Aleutian Islands, Alaska. It was built in 1976 and brought on-line in 1977 to verify Soviet compliance with the SALT II arms limitation treaty. Cobra Eye – Boeing RC-135X reconnaissance aircraft with mission of tracking ICBM reentry vehicles. In 1993, it was converted into an additional RC-135S Cobra Ball. The sole aircraft was converted during the mid-to-late-1980s from a C-135B Telemetry/Range Instrumented Aircraft, serial number 62–4128. 
Cobra Jaw, Cobra King: radar/intelligence programs Cobra Judy – AN/SPQ-11 passive electronically scanned array (PESA) radar mounted aboard the missile range instrumentation ship up until 2014. Cobra Mist – Anglo-American experimental over-the-horizon radar station at Orford Ness, Suffolk, England. It was known technically as AN/FPS-95 and sometimes referred to as System 441a; a reference to the project as a whole. Combat Sent - Boeing RC-135U "Combat Sent" electronic listening aircraft are designed to collect intelligence from adversary radar emissions. The data helps develop new or upgraded radar warning receivers, radar jammers, decoys, anti-radiation missiles, and training simulators. Cobra Shoe – reported Over The Horizon (Backscatter) (OTH-B) radar designed by RCA Corporation, designed to monitor ballistic missile tests in the interior of the Soviet Union, installed in the Western Sovereign Base Area (Akrotiri), Cyprus. Source is "U.S. declassified documents". Installed since around 1964; no details on when/whether it left service. Combat Tree – AN/APX-80 equipment installed on F-4D Phantom II aircraft to enable them to locate and identify Vietnam People's Air Force aircraft by interrogating their Identification Friend or Foe (IFF) equipment. Commando Bearcat, Commando Jade, and Commando Night – regional exercises supported by 314th Air Division, Fifth Air Force, in South Korea, 1955–84. Commando Club - US operation of the Vietnam War which used command guidance of aircraft by radar at Lima Site 85 in Laos for ground-directed bombing (GDB) of targets in North Vietnam and clandestine targets in Laos. Exercise Commando Sling – Approximately three deployments of USAF F-15s and F-16s from both Active Duty and National Guard units from around the world are made each year to Singapore under this title. The 497th Combat Training Flight takes part in regional exercise and global contingencies, and provides housing; morale, recreation and welfare facilities and programs: medical services; force protection to resources and personnel; and legal, financial, communications, and contracting support to assigned and deployed personnel. Constant – Arkin lists this prefix as an 'Air force operations first word, often referring to Air Force Technical Applications Center and other reconnaissance missions. Constant programs in the 1980s included Constant Bore, Constant Dome, Constant Fish, Constant Globe, Constant Seek, and Constant Take.' Sublisted Constant programs in Arkin, 310, included Constant Blue (Presidential successor helicopter evacuation plan), Constant Gate, Constant Help, Constant Phoenix (55th Wing nuclear monitoring) Constant Pisces, Constant Shotgun, Constant Source, Constant Spur, Constant Star, Constant Stare (an Air Intelligence Agency organisation). Constant Peg – evaluation of clandestinely-acquired Soviet fighter aircraft at Nellis Air Force Base, Nevada, by 4477th Test and Evaluation Squadron. Operation Continuing Promise is a periodic series of US military exercises conducted under the direction of United States Southern Command. Designated by Roman numeral (“Continuing Promise I” was in 2007), or by year (“Continuing Promise 2009”); they provide medical, dental and veterinary aid to people in Latin America. Operation Cool Shoot – live missile firing exercise, held at Tyndall AFB, Florida, with participation of 21st Composite Wing, Alaskan Air Command. Exercise Cope North is an annual multinational military exercise taking place in and around Guam. 
The first exercise took place in 1978. Exercise Cope Thunder – A Pacific Air Forces-sponsored exercise initiated in 1976, Cope Thunder was devised as a way to give aircrews their first taste of warfare and quickly grew into PACAF's "premier simulated combat airpower employment exercise." Moved from Clark Air Base to Eielson Air Force Base in Alaska in 1992, permanently, after the eruption of Mount Pinatubo. Exercise Cope Tiger – USAF exercise in Thailand Corona South - the 72nd Bombardment Wing at Ramey Air Force Base in Puerto Rico hosted the annual United States Air Force Commander's Conferences, code named Corona South, which began on an irregular basis in 1955. By the 1960s, Corona South had become a regular annual event at Ramey. It continued until the wing was inactivated. Military Airlift Command then continued them until Ramey closed and they were transferred to Homestead Air Force Base, Florida. Coronet Nighthawk – Operation Coronet Nighthawk was a Caribbean deployment of Air Force fighters. Coronet Oak – the continuing operation in which Air Force Reserve Command (AFRC) and Air National Guard (ANG) C-130 aircraft, aircrews and related support personnel deploy from the United States to Muñiz Air National Guard Base, Puerto Rico, to provide theater airlift support for the U.S. Southern Command. The mission moved from Howard Air Force Base, Panama, as a result of the U.S. military withdrawal from Panama, from April 1999. Units rotate in and out of Muñiz ANGB every two weeks. Forces assigned to Coronet Oak provide United States Southern Command with logistic and contingency support throughout Central and South America. The mission typically covers embassy resupply, medical evacuations, and support of U.S. troops and/or the Drug Enforcement Administration. Creek – USAFE first word Creek Caste, Creek Claw – intelligence programs/projects Creek Action – Command-wide effort by HQ USAFE to realign functions and streamline operations, 1973 Creek Klaxon - In 1986, the 119th Fighter Group assumed the USAF Zulu alert mission at Ramstein Air Base, West Germany. The 119th and other Reserve Component Air Defense units rotated to Ramstein and stood continuous air sovereignty alert for one year, provided for NATO. Creek Party – Deployment of Air National Guard Boeing KC-97 tankers to Europe to support United States Air Forces Europe operations. Operation Crescent Wind - initial air attack against Taliban/Al Qaeda in Afghanistan after the September 11 terrorist attacks, from 7 October 2001. D Operation Dawn Blitz – Post 2010 amphibious exercise with foreign participation Operation Deep Freeze Annual resupply operations for American scientific sites in Antarctica. Exercise Deep Furrow – 1960s-1970s Allied Forces Southern Europe exercise practicing the defense of Greece and Turkey. Deep Siren – Raytheon/RRK/Ultra Electronics Maritime Systems expendable "long-range acoustic tactical pager", launched via sub/surface/air-launched buoy (JDW 21 Nov 2007). Operation Deliberate Force 1995 – NATO air strikes on Bosnian Serb military forces. Operation Deny Flight 1993–1995 – U.S./NATO enforcement of no-fly zone over Bosnia-Herzegovina. Operation Desert – various Desert Crossing 1999 – tested response to possible fall of Iraqi President Saddam Hussein. Operation Desert Fox 1998 – air strikes on Iraq WMD sites. Operation Desert Lion began on 27 March 2003, during the War in Afghanistan (2001-present). U.S. 
Army soldiers from the 505th Parachute Infantry Regiment launched an operation in the Kohe Safi Mountains and surrounding areas in the Kandahar Province of Afghanistan. Their mission was to hunt for supplies and members of the Taliban and Al-Qaida. Operation Desert Shield Operation Desert Storm – War to remove Iraq from Kuwait, 1991. Operation Desert Strike – 1996 missile strikes on Iraq. Operation Desert Thunder Destined Glory – Cold War NATO naval exercise, Mediterranean Operation Determined Falcon 1998 – 80-aircraft NATO show of force over Albania near Kosovo. Operation Determined Forge - maritime component of Operation Joint Force (SFOR II). Operation Determined Guard - the first naval activity associated with Operation Joint Guard (the Stabilization Force (invariably known as "S-For") in Bosnia-Herzegovina). Determined Promise-03 (DP-03) was a two-week, multi-level exercise which started on August 18, 2002, with a simulated outbreak of pneumonic plague in Nevada, adding a hurricane, an air threat in Alaska, and a train wreck in Kentucky to the list of 1,700 'injects' that would crop up during the exercise. DP-03 was intended as the final testing event before the declaration of Full Operational Capability for U.S. Northern Command, with DHS and a total of 34 federal agencies represented. Operation Dragoon Ride Operation Dragon Rouge – Airlift of Belgian troops to evacuate civilians during rebellion in the Congo, 1964. Dust Hardness – A modification improvement to Minuteman-III approved for service use in 1972 Operation Eager Glacier was a secret United States effort to spy on Iran with aircraft in 1987 and 1988. The information gathered became part of an intelligence exchange between U.S. military intelligence services and Iraq during the Iran–Iraq War. E Exercise Eager Light – In October 2012, more than 70 U.S. 1st Armored Division personnel deployed to Jordan to conduct Exercise Eager Light, a 30-day command post exercise that focuses on brigade-level warfighting tactics and procedures. This exercise dates back to the mid-1980s. Exercise Eager Lion – Eager Lion 12 took place in Jordan. Now the largest U.S. military exercise in the Middle East, surpassing Bright Star. The exercise "amounts to an outgrowth of the annual bilateral "Infinite Moonlight" US-Jordan exercise that stretches back to the 1990s." Now possibly involves Syrian Civil War contingencies. Operation Eagle Claw – Unsuccessful attempt to rescue hostages held by Iran in the American Embassy in Tehran. Operation Eagle Eye 1998–1999 – Monitoring compliance with United Nations Security Council Resolution 1199 in Kosovo. CONPLAN Eagle Guardian Operation Eagle Pull – Evacuation of Americans from Phnom Penh in April 1975. Operation Eagle's Summit (Oqab Tsuka in Pashto) was a military operation conducted by ISAF and Afghan National Army troops, with the objective of transporting a 220-tonne turbine to the Kajaki Dam in Helmand Province through territory controlled by Taliban insurgents. 2008. Operation Earnest Will – 1987–1988 protection of tankers in the Persian Gulf from Iranian attack. Operation Eastern Exit - evacuation of the United States Ambassador to Somalia and Embassy in Mogadishu, Somalia, in 1991. Exercise Eastern Wind - exercises with Somali National Army, 1980s. Held 1983 as amphibious component of Bright Star, including the deployment of VMFA(AW)-242 flying Grumman A-6 Intruders to Berbera. The exercise "failed dismally"; "The Somali army did not perform up to any standard," one diplomat said. 
..The inefficiency of the Somali armed forces is legendary among foreign military men." The 24th Marine Expeditionary Unit participated in Eastern Wind in August 1987 in the area of Geesalay. At sea , , and took part as Amphibious Squadron 32/Commander Task Unit 76.8.2 from 2–9 August 1987. Eastern Venture - reported Warning Order issued for airlift support to famine relief operations in Sudan, covered by CENTCOM Command History 1985, page 30 (via www3.centcom.mil/FOIALibrary). Operation El Dorado Canyon 1986 – USAF and USN air strikes on Libya in retaliation for terrorist bombing of La Belle Disco in West Berlin. Echo Casemate – Support of French and African peacekeeping forces in the Central African Republic. Operation Enduring Freedom 2001–present – Anti-Al Qaeda operations in Afghanistan and subsequent anti-terrorist operations worldwide. Operation Essential Harvest 2001 – Successful NATO program to disarm NLA in Macedonia. Fervent Archer – EUCOM directed Joint Special Operations Command task force in Sarajevo from 2001. Believed to be a continuation of 'Amber Star' (see above) (Arkin, 364). Exercise Fearless Guardian 2015 – U.S./Ukrainian training exercise. (total 2,200 participants, including 1,000 U.S. military). Initial personnel and equipment of the 173rd Airborne Brigade arrived in Yavoriv, Lviv Oblast, on 10 April 2015. Fearless Guardian will train Ukraine's newly formed National Guard under the Congress-approved Global Contingency Security Fund. Under the program, the United States will begin training three battalions of Ukrainian troops over a six-month period beginning in April 2015. Exile Hunter – Training of Ethiopian forces for operations in Somalia F Fincastle Trophy an anti-submarine warfare contest between the air forces of the United Kingdom, Australia, Canada and New Zealand. During the competition, crews compete in anti-submarine warfare, anti-surface warfare, and intelligence gathering, and surveillance. Flexible Anvil/Sky Anvil 1998 – Planning for Balkan/Kosovo operations. "JTF Flexible Anvil [ComSixthFleet, to plan and be prepared to execute a limited strike option using YTLAM and CALCM missiles]; and JTF Sky Anvil [COMAFFOR/Sixteenth Air Force, to plan a more extensive strike option using fixed wing aircraft] developed concrete military plans which were approved and ready for execution." Operation Fluid Drive - evacuation of non-combatants from Lebanon, 1980s. Operation Focus Relief – the movement and support of West African troops intended for dispatch to the United Nations Mission in Sierra Leone (UNAMSIL). Formidable Shield – seaborne ABM exercise using NATO Military Command Structure to direct ships. Formidable Shield 2019 utilized STRIKFORNATO in northern UK waters. Fox Able – Transatlantic deployment of jet fighter aircraft. Fox Able One: Deployment of a squadron of F-80 aircraft from Selfridge Air Force Base, Michigan to RAF Odiham, England in July 1948. Fox Peter—Transpacific deployment of jet fighter aircraft. Fox Peter One – Deployment of a wing of Republic F-84G Thunderjets from California to Japan, using air refueling in July 1952. Operation Fracture Cross Alpha – Operation to prevent North Vietnamese interference with air operations supporting Operation Lam Son 719. Operation Fracture Deep – Plan to strike Vietnam People's Air Force bases south of 20th parallel. Combined with Operation Proud Bunch as Operation Proud Deep. Exercise Freedom Banner Operation Freedom Deal – Air interdiction and close air support strikes in Cambodia, 1970–1973. 
Operation Freedom Eagle – Part of Operation Enduring Freedom conducted in the Philippines Operation Freedom Falcon – 2011 military intervention in Libya Operation Freedom Sentinel – (or Freedom's Sentinel) Post 2015 operations in Afghanistan Operation Freedom Train – Original name for Operation Linebacker I Operation Frequent Wind – Evacuation of civilians from Saigon in April 1975. G Gallant Hand – A large scale joint warfare training exercise held in 1972 at Fort Hood in which 23,000 soldiers and airmen participated. Gallant Journey 05: Arkin write that this exercise was a "..Classified intelligence or special operations" activity held in March 2005, with DIA, NAS and CIA/OMA involvement. Gallant Knight – A command post exercise of the Rapid Deployment Force. Gallant Shield – Joint Chiefs of Staff directed and coordinated exercise. Giant Plow: a United States Air Force Minuteman launcher closure test program Giant Profit: A Minuteman modified operational missile test plan GIANT PATRIOT - operational base launch program of test flights of Minuteman-II missiles. The program was terminated by Congress in July 1974 Giant Scale – SR-71 reconnaissance missions over Southeast Asia 1969 GIGANTIC CHARGE: Program to notify NORAD of all or part of Single Integrated Operational Plan (SIOP) targeting for Minuteman GIN PLAYER: Strategic Air Command tests of Minuteman missile for identification and execution Glory Trip – United States Air Force Follow-on Test and Evaluation (FOT&E) program for intercontinental ballistic missiles. Many launches from Vandenberg Launch Facility 2, Vandenberg Air Force Base. Golden Spear Gorgon Stare is video capture technology. It is a spherical array of nine cameras attached to an Uninhabited aerial vehicle. The United States Air Force calls it "wide-area surveillance sensor system". Exercise Grand Slam was a 1952 major naval exercise of the newly formed North Atlantic Treaty Organization in the Mediterranean Sea. Granite Sentry was a NORAD Cheyenne Mountain nuclear bunker improvement program during the 1990s. H Have Blue – first nickname for Lockheed F-117 Nighthawk special access program development, later Senior Trend (Arkin, 496). Have Doughnut – Defense Intelligence Agency project whose purpose was to evaluate and exploit a Mikoyan-Gurevich MiG-21 "Fishbed-E" (YF-110) that the United States Air Force acquired in 1967 from Israel. HAVE DRILL – Defense Intelligence Agency project whose purpose was to evaluate and exploit a Mikoyan-Gurevich MiG-17 "Fresco" (YF-113A) fighter aircraft. Have Ferry – Defense Intelligence Agency project whose purpose was to evaluate and exploit a Mikoyan-Gurevich MiG-17 "Fresco-C" (YF-114C) fighter aircraft. Have Privilege – Defense Intelligence Agency project whose purpose was to evaluate and exploit a Shenyang J-5 "Fresco" (YF-113C) fighter aircraft. High Flight – during the period 15 September 1997 – 17 October 1997, the search and rescue activities carried out from Windhoek, Namibia following the mid-air collision of a U.S. Lockheed C-141 Starlifter and a German Tupolev Tu-154 transport aircraft. High Tide – Modification to Republic F-84 Thunderjets of the 136th Fighter-Bomber Wing to extend their range by equipping them for air refueling during the Korean War. Hula Hoop, Nice Dog, Dial Flower, Pock Mark (USNS Wheeling), Pot Luck – code names concerned with the monitoring of French nuclear tests at Mururoa Atoll, French Polynesia, 1972 and 1973. I Infinite Moonlight – U.S.–Jordanian exercise, 1990s. 
Operation Infinite Reach - cruise missile strikes on al-Qaeda bases in Khost, Afghanistan, and the Al-Shifa pharmaceutical factory in Khartoum, Sudan, on August 20, 1998. Missiles launched by United States Navy Exercise Internal Look – one of U.S. Central Command's primary planning events from the 1980s. It had frequently been used to train CENTCOM to be ready to defend the Zagros Mountains from a Soviet attack and was held annually. Operation Instant Thunder was the name given to air strike planning options by the United States Air Force in late 1990 during Operation Desert Shield. Designed by Colonel John A. Warden III, it was planned to be an overwhelming strike which would devastate the Iraq Armed Forces with minimum loss of civilian as well as American life. Iris Gold – On 3 October 1994, Company C, Second Battalion, 5th Special Forces Group (Airborne) was deployed on IRIS GOLD 95-1 for presences forward and pre-mission training with selected elements of the Kuwait Ministry of Defense (MOD). The training mission rapidly transitioned to defense of Kuwait operation establishing a Combat Air Support (CAS) umbrella, which became part of Operation Vigilant Warrior. Iron Clad - second designation for specially equipped Lockheed P-3 Orion long range maritime patrol aircraft, operated by VPU-1 and VPU-2 (Patrol Squadron, Special Projects), U.S. Navy. Operation Iron Hand – Suppression of Enemy Air Defenses missions in Southeast Asia, 1965–1973 Island Thunder – U.S.-Italian "non-combatant evacuation exercise", 1996, 1997 (Arkin, Code Names). Island Thunder 12 was a DOE NNSA-FBI sponsored weapons of mass destruction domestic crisis management table top exercise, part of the Silent Thunder series, held in Hawaii, 29 March 2012. Operation Ivory Coast – On 21 November 1970, a joint United States Air Force/United States Army force commanded by Air Force Brigadier General LeRoy J. Manor and Army Colonel Arthur D. "Bull" Simons landed 56 U.S. Army Special Forces soldiers by helicopter at the Sơn Tây prisoner-of-war camp located only west of Hanoi, North Vietnam. The raid was intended to free U.S. prisoners of war, but failed because the POWs had already been moved. Operation Ivy Bells was a joint United States Navy, CIA, and National Security Agency (NSA) mission whose objective was to place wire taps on Soviet underwater communication lines from 1971. J Exercise Jack Frost (later known as BRIM FROST) – exercise by U.S. forces in Alaska. Jagged Thorn – British exercise in Sudan, 1978, with 1st Battalion Grenadier Guards and elements Life Guards (Acorn, magazine of Life Guards, 1979). Joint Anvil – unknown special operation, 1999-2001 Operation Joint Endeavor – NATO operation to enforce the Treaty of Paris ending the war in Bosnia-Herzegovina. Began when the Allied Rapid Reaction Corps entered Bosnia on 20 December 1995. Operation Joint Forge – NATO support for SFOR 1998-c.2005 Operation Joint Guard – NATO follow-on force to Joint Endeavor, SFOR, Bosnia-Herzegovina, 1996–1998 Joint Guardian – NATO-led Kosovo Force Joint Spirit – NATO Combined Joint Task Force CPX/computer-aided exercise, planned as a building block for Strong Resolve, 1–30 September 2001. Cut short after the September 11, 2001, terrorist attacks. Operation Joint Venture Exercise Joint Winter – NATO exercise in Norway, 5–16 March 2001. Jolly Roger – UK national submarine exercise, 1995 Jukebox Lotus – Operations in Libya after attack on Benghazi Consulate. 
Operation Jump Start
Operation Junction City – 1967 Vietnam War airborne operation in War Zone C, South Vietnam
Exercise Judicious Response – U.S. Africa Command CJCS-directed warfighting TTX. JR 15 included certification of 2nd Marine Expeditionary Brigade.
Junction Rain – Maritime security operations in the Gulf of Guinea.
Junction Serpent – Surveillance operations of ISIS forces near Sirte, Libya
Juniper – EUCOM/Israeli first word.
Juniper Cobra
Juniper Falconry – On 29 March 1992, Vice Admiral W. A. Owens, Commander, United States Sixth Fleet, embarked aboard with a 28-man Army, Navy, and Air Force staff including Brigadier General James Mathers (Commanding General, Provide Comfort) at Haifa for Exercise Juniper Falconry II. From 1–7 April, Monterey was underway for Juniper Falconry II, with a two-day port visit in Haifa on 3–4 April. From 7–9 April, Monterey visited Haifa again for exercise debriefs and to disembark the Joint Task Force.
Juniper Fox, Juniper Hawk, Juniper Stallion - See the Command History for the year 2000, via http://history.navy.mil.
Juniper Micron – Airlift of French forces to combat Islamic extremists in Mali
Juniper Nimbus – Support for Nigerian forces against Boko Haram
Operation Juniper Shield – Counterterrorism operations in the northwest African Sahara/Sahel; formerly known as Operation Enduring Freedom – Trans Sahara (OEF-TS). The name change occurred in 2012–13, though OEF-TS was still being used at times in 2014. Closely linked with the 'Flintlock' exercise.
Jupiter Garrett – Joint Special Operations Command operation against high-value targets in Somalia
Operation Just Cause – 1989 incursion into Panama to oust Manuel Noriega.
Operation Justice Reach
Justified Seamount – Counter-piracy operation off the east African coast
K
Exercise Keen Edge/Keen Sword – U.S./Japan defense of Japan exercise. Every two years, the US and Japan hold the Keen Sword exercise, the biggest military exercise around Japan. Japan and the United States participate, with Canada playing a smaller role.
Keystone – Overall name for the withdrawal of US forces from Vietnam (see Banner)
Keystone Bluejay (Increment III) – Withdrawal of 50,000 troops by 15 April 1970
Keystone Cardinal (Increment II) – Reduction of troop ceiling to 484,000 by 15 December 1969
Keystone Eagle (Increment I) – Reduction of troop ceiling to 534,500 in August 1969
Keystone Oriole Alpha (Increment VII) – Reduction of 100,000 by 1 December 1970
Keystone Robin Alpha, Bravo, Charlie (Increments IV, V, VI) – Three reductions of 50,000/40,000/60,000 by 15 April 1971
Kodiak Hunter – Training of Kenyan forces for operations in Somalia
Keystone – Prefix for withdrawal of USAF units from Vietnam as part of "Vietnamization". See also Banner.
Keystone Bluejay – Movement of 16th Tactical Reconnaissance Squadron to Misawa Air Base and inactivation of 557th, 558th and 559th Tactical Fighter Squadrons.
Keystone Cardinal – Movement of U-10 and C-47 aircraft of 5th Special Operations Squadron to Korea.
Keystone Robin Alfa – 31st Tactical Fighter Wing moved to the United States, 531st Tactical Fighter Squadron inactivated and planes returned to the United States, A-37s of the 8th and 90th Special Operations Squadrons turned over to the Vietnamese Air Force.
Keystone Robin Bravo – Return of 45th Tactical Reconnaissance Squadron planes to the United States.
L
Long Skip – Support for India in border dispute with China in Kashmir, 1962–1963
Long Life: launch of LGM-30 Minuteman from a 'live' launch facility with 7 seconds of fuel
Exercise Long Look - Long-established individual exchange programme between Commonwealth armies. For example, Captain Katie Hildred, Queen Alexandra's Royal Army Nursing Corps, was dispatched on Exercise Long Look to New Zealand in 2017, a four-month exchange programme that saw her deployed on various exercises and training packages with the New Zealand Army.
Operation Looking Glass – SAC/Strategic Command survivable airborne command post. The name came from the aircraft's ability to "mirror" the command and control functions of the underground command post at Strategic Air Command headquarters. Began in 1961.
Operation Louisville Slugger – 1971 RF-4C Phantom II reconnaissance north of the DMZ to locate North Vietnamese Fan Song radar sites
M
Operation Mango Ramp - Best known as the initial airlift entry to Mogadishu for the African Union Mission in Somalia, 2007. In a larger sense, a Department of State [DoS] contract with the African Union Mission in Somalia [AMISOM] worth $208m.
Exercise Mavi Balina – Turkish anti-submarine exercise
Millennium Challenge 2002
Mongoose Hunter – Training of Somali forces (Danab Brigade) for operations against Al Shabab
Operation Mount Hope III - Retrieval of an abandoned Libyan Mil Mi-25 attack helicopter from Wadi Doum in the Aozou Strip, Chad, 1988.
Operation Mountain Resolve – Launched by the United States and coalition allies on 7 November 2003 in the Nuristan and Kunar provinces in Afghanistan. It involved an airdrop into the Hindu Kush mountains by the U.S. 10th Mountain Division and resulted in the killing of Hezbi commander Ghulam Sakhee, a few clashes, and the finding of some minor weapon caches.
Mountain Shield I and II – Rehearsal exercises for withdrawing UNPROFOR from the former Yugoslavia, 1990s. Mountain Shield I was held at Grafenwöhr, Germany, from 7–15 July 1995, by United States Army Europe.
Operation Mountain Storm began on or about 12 March 2004, following the completion of Operation Mountain Blizzard. Part of spring fighting against the Taliban in Afghanistan (Operation Enduring Freedom).
N
Exercise Natural Fire – East Africa
Neon – U.S./Bahrain first word
Neon Response
Neon Spark – U.S./Bahrain naval exercise series, including the UK. Neon Spark 98.
Neon Spear – Disaster response symposium with Eastern African countries
New Normal – Development of rapid response capability in Africa
New Tape – Airlift support for UN operations and humanitarian airlift in the Republic of the Congo (Léopoldville), 1960–1964
Operation Nickel Grass 1973 – Support of Israel during the 1973 October War.
Exercise Nifty Nugget, a 1978 transportation plans exercise, exposed great gaps in understanding between military and civilian participants: mobilization and deployment plans fell apart, and as a result, the United States and its NATO allies "lost the war". Estimated "400,000 troop 'casualties,' and thousands of tons of supplies and 200,000 to 500,000 trained combat troops would not have arrived at the identified conflict scene on time." "Two major recommendations came out of Nifty Nugget: a direct line of command between the transportation agencies and the Joint Chiefs of Staff; and the creation of an agency responsible for deployments. This agency was to be established as the Joint Deployment Agency, a forerunner to United States Transportation Command."
Operation Nifty Package was a United States Delta and Navy SEAL-operated plan conducted in December 1989 designed to capture Panamanian leader Manuel Noriega. It unfolded as part of the wider Operation Just Cause. When Noriega took refuge in the Apostolic Nunciature of the Holy See (diplomatic quarter), deafening music and other psychological warfare tactics were used to convince him to exit and surrender himself. Operation Night Harvest – investigation of abandoned military aircraft in Iraq Operation Night Reach – Transported Second United Nations Emergency Force (UNEF II) peacekeepers to Middle East at end of Yom Kippur War, 6-24 October 1973. Night Train was part of a series of chemical and biological warfare tests overseen by the DOD Deseret Test Center as part of Project 112. The test was conducted near Fort Greely, Alaska from November 1963 to January 1964. The primary purpose of Night Train was to study the penetration of an arctic inversion by a biological aerosol cloud. Nimble Shield – Operation against Boko Haram and ISIL West Africa. Operation Nimbus Moon – Cleared the Suez Canal Operation Nimbus Stream – Cleared the Suez Canal Operation Nimbus Spar 1974–1975 – Cleared the Suez Canal Joint Task Force Noble Anvil 1999 – Operation Allied Force, air war plannnig and execution against Serbia. Operation Noble Eagle – Air defense, mobilization of reserve forces after the September 11 terrorist attacks. Exercise Noble Jump (:de:Noble Jump) – is a NATO maneuver that took place in the summer of 2015 in Żagań, Poland. A second edition of the maneuver (Noble Jump II) took place in the summer of 2017 in Bulgaria and Romania. In May and June 2019 the exercise will take place again in Żagań, Poland. Operation Noble Response – U.S. delivery of over 900,000 kg of food after unseasonable rains and flooding in the northeastern part of Kenya in 1998. Included formation of Joint Task Force Kenya. Exercise Noble Suzanne - exercise with Israel in the first half of 2000, involving , , and . Noble Resolve – a United States Joint Forces Command (USJFCOM) experimentation campaign plan to enhance homeland defense and improve military support to civil authorities in advance of and following natural and man-made disasters. Operation Nomad Shadow is the name of a classified military operation that may have begun in November 2007 to share intelligence information between the U.S. and the Republic of Turkey. Appears to involve UAV patrols, potentially in connection with the Syrian Civil War. Operation Nomad Vigil – deployment to Gjadër Air Base, Albania of General Atomics MQ-1 Predator unmanned aerial vehicles, April 1995 – 1996. Operation Nordic Shield II was held in the summer of 1992. As they did five years before, units of the 94th Army Reserve Command, principally the 187th Infantry Brigade (Separate), the 167th Support Group (Corps) and their subordinate battalions and companies, deployed to Canadian Forces Base Gagetown in southern New Brunswick, to simulate the defense of Iceland against Warsaw Pact forces. Iceland defense was the CAPSTONE mission of both the 187th IB and 167th Support Group. Part of the 1992 exercise included lanes training as part of the United States Army Forces Command's "Bold Shift" initiative to reinforce unit war-fighting task proficiency. Operation Northern Delay occurred on 26 March 2003 as part of the 2003 invasion of Iraq. It involved dropping paratroopers of the 173rd Airborne Brigade into Northern Iraq. 
It was the last large-scale combat parachute operation conducted by the U.S. military.
Exercise Northern Edge – Exercise in Alaska
Exercise Northern Entry – 1 New Zealand Special Air Service Group special forces training in Canada. Solely a NZ exercise.
Exercise Northern Light – 1 NZ SAS Group extreme cold weather training in Norway.
Exercise Northern Safari – Conducted on Great Barrier Island from 5–28 March (1983 or 1984). The aim was to mobilize the New Zealand Army Ready Reaction Force and practice selected elements in air/sea deployment and the conduct of operations. The exercise was supported by , a company of the Gurkha Regiment from British Forces, Hong Kong, which acted as the enemy for the exercise, and an Australian Army engineer squadron.
Exercise Northern Strike is an annual readiness exercise hosted by the Michigan National Guard at Camp Grayling and Alpena Combat Readiness Training Center each August. Beginning in 2012, the exercise has grown to become the largest joint reserve component exercise in the United States.
Operation Northern Watch – 1997–2003 enforcement of the no-fly zone over northern Iraq.
Northern Wind 2019 – Swedish, Norwegian and Finnish main-defense-style exercise conducted on the Swedish/Finnish border in March 2019. Live exercise March 20–27. Fifteen hundred Finnish troops were incorporated into the Swedish 3rd Brigade; Norway contributed the entirety of Brigade North, including elements of No. 339 Squadron RNoAF, a United States Marine Corps infantry battalion and a Royal Marines company group. The exercise area stretched from Boden (SWE) to Haparanda on the Finnish border, and north to the vicinity of Övertorneå.
O
Operation Oaken Sonnet
Oaken Sonnet I – 2013 rescue of United States personnel from South Sudan during its civil war
Oaken Sonnet II – 2014 operation in South Sudan
Oaken Sonnet III – 2016 operation in South Sudan
Oaken Steel – July 2016 to January 2017 deployment to Uganda and reinforcement of security forces at the US embassy in South Sudan.
Objective Voice – Information operations and psychological warfare in Africa
Oblique Pillar – Private contractor helicopter support to U.S. Navy SEAL-advised units of the Somali National Army fighting al-Shabaab in Somalia. The operation was in existence as of February 2018. Bases used included Camp Lemonnier, Djibouti; Mombasa and Wajir, Kenya; Baidoa, Baledogle, Kismayo and Mogadishu, Somalia; Entebbe, Uganda.
Operation Observant Compass – Initially an effort to kill Joseph Kony and eradicate the Lord's Resistance Army. In 2017, with around $780 million spent on the operation, and Kony still in the field, the United States wound down Observant Compass and shifted its forces elsewhere. But the operation didn't completely disband, according to the Defense Department: "U.S. military forces supporting Operation Observant Compass transitioned to broader scope security and stability activities that continue the success of our African partners."
Obsidian Lotus – Training Libyan special operations units
Obsidian Mosaic – Operation in Mali.
Obsidian Nomad I – Counterterrorism operation in Diffa, Niger
Obsidian Nomad II – Counterterrorism operation in Arlit, Niger
Octave Anchor – Psychological warfare operations focused on Somalia.
Octave Shield – Operation by Combined Joint Task Force – Horn of Africa.
Octave Soundstage – Psychological warfare operations focused on Somalia.
Octave Stingray – Psychological warfare operations focused on Somalia.
Octave Summit – Psychological warfare operations focused on Somalia.
Operation Odyssey Dawn – Air campaign against Libya, 2011.
Odyssey Lightning – Airstrikes on Sirte, Libya, in 2016.
Odyssey Resolve – Intelligence, surveillance and reconnaissance operations in the area of Sirte, Libya.
Olympic Defender – "U.S. space war plan", to be first shared with unspecified allies after a new version of the plan was promulgated in December 2018.
Exercise Orient Shield – United States Army/JGSDF annual exercise
Oil Burner – Strategic Air Command low-level bomber training. Replaced by Olive Branch.
Olive Branch – Strategic Air Command low-level bomber training. Replaced Oil Burner. The name was later dropped and the training areas were called Instrument Routes or Visual Routes.
Exercise Open Gate - NATO air/naval exercise in the Mediterranean, late 1970s. The 1979 iteration included a No. 12 Squadron RAF deployment from Honington to RAF Gibraltar, carrying out the low-level anti-shipping mission.
P
Operation Pacer Goose – Annual resupply of Thule Air Base, Greenland, by a heavy supply ship each summer. made the trip in 2010.
PACEX (Pacific Exercise) – United States Pacific Fleet exercise series. PACEX '89 was the biggest peacetime exercise since the end of World War II. It was designed by Seventh Fleet "to determine the ability of US and allied naval forces to sustain high tempo combat operations for an extended period." Three aircraft carrier battle groups, and two different battleship battle groups, gathered off the U.S. West Coast, proceeded through the Gulf of Alaska and the Pacific Ocean to Japan, and merged to conduct Battle Force operations against "opposing" USAF and JASDF and Navy as ANNUALEX 01G. served as the "...Anti-Air Warfare Coordinator for her successive battle groups and as alternate AAWC for the entire Battle Force. Steaming into the Sea of Japan, Antietam was also the AAWC for the Amphibious Task Force as they made their assault on the South Korean beach as Exercise VALIANT BLITZ 90." Test of the Maritime Strategy. See also PACEX 02.
Pacific Bond – U.S.-Australian army reserve exchange
Pacific Castle – Pacific naval exercise
Pacific Haven – Emergency evacuation of pro-U.S. Kurds to Andersen AFB, Guam, September 1996 – April 1997
Pacific Horizon – WMD exercise
Pacific Kukri – UK–NZ exercise, 2000–2001
Pacific Look – U.S.–Australian army reserve exchange, 1997
Pacific Nightingale – PACAF exercise, South Korea
Exercise Pacific Partnership
Pacific Protector – Proliferation Security Initiative exercise involving a Japanese-flagged merchant ship simulating the carriage of WMD.
Pacific Reserve
Pacific Spectrum
Pacific Warrior – SPAWAR telemedicine exercise connected to South Korea
Pacific Wind
Palace Lightning - USAF withdrawal of its aircraft and personnel from Thailand.
Paladin Hunter – Counterterrorism operation in Puntland
PANAMAX – Exercise to defend the Panama Canal. Held in 2005 and 2006, under the leadership of Commander, United States Naval Forces Southern Command.
Peace Atlas II
Peace Crown - Air defence automation study for the Imperial Iranian Air Force (FMS), effective 3 August 1972, LGFX.
Peace Hawk - Foreign Military Sales case for Northrop F-5B/E aircraft, effective date 8 September 1971, USAF implementing organisation SMS/AC.
Peace Hercules - Foreign Military Sales of Lockheed C-130H aircraft for the Congo, effective date 11 September 1973, implementing organisation SMSAC.
Peace Icarus - Foreign Military Sales of McDonnell Douglas F-4E aircraft for Greece, effective date 3 April 1972, USAF implementing organisation LGFXR.
Peace Inca - Northrop F-5s for Peru, effective date 8 February 1973, LGFXR.
Phoenix Banner – "Special Air Mission", air transportation of the president of the United States, aircraft usually codenamed Air Force One. The basic procedures for such flights are stipulated in Air Force Instruction #11-289.
Phoenix Duke I and II - Involved NATO efforts to resettle ethnic Albanians into a secure environment as part of the peace agreement with Serbia, 1999, with participation of the 433rd Airlift Wing.
Phoenix Copper – Flights flown in support of the Secret Service for VIPs other than the president and vice president.
Operation Phoenix Jackal – Support for Saudi Arabia and Kuwait against Iraq in 1994
Phoenix Oak – See Coronet Oak. Operation name when directed by Air Combat Command 1992–?
Phoenix Raven – Program involving specially trained United States Air Force Security Forces airmen flying with and protecting Air Mobility Command aircraft around the world, in areas where there is "inadequate security."
Operations Phoenix Scorpion I & II, 1997–1998; also phases III and IV. Deployment of additional troops and equipment to Kuwait, Saudi Arabia, and the Middle East during the 'Desert Thunder' confrontation with Iraq. In 1998 the 433rd Airlift Wing participated in Phoenix Scorpions I–III. Phoenix Scorpion IV involved David Grant USAF Medical Center.
Phoenix Silver designates a Special Air Mission flight involving the Vice President of the United States.
Exercise Pitch Black – Air exercise held in northern Australia
Polo Hat – Nuclear command and control exercise
Polo Step (code name) – Classified as Top Secret, Polo Step was a United States Department of Defense code name or 'compartment' that was initially created in the late 1990s to designate closely held planning information on covert operations against Al Qaeda in Afghanistan. A person could have a Top Secret clearance, but if they did not also have a need to know about the planning, they did not have a 'Polo Step' authorization. Following the September 11, 2001 attacks, 'Polo Step' started to be used by United States Central Command as the planning compartment for the 2003 invasion of Iraq.
Operation Pony Express – The covert transportation of, and the provision of aerial support for, indigenous soldiers and materiel operating across the Laotian and North Vietnamese borders during the Vietnam War.
Exercise Port Call 86 was a Joint Chiefs of Staff-sponsored command post exercise carried out 12–22 November 1985 (CENTCOM Command History 1985 via https://www3.centcom.mil/FOIALibrary/Search, p.100).
Operation Power Flite – A United States Air Force mission in which three Boeing B-52 Stratofortresses became the first jet aircraft to circle the world nonstop, when they made the journey in January 1957 in 45 hours and 19 minutes, using in-flight refueling to stay aloft. The mission was intended to demonstrate that the United States had the ability to drop a hydrogen bomb anywhere in the world.
Operation Power Pack – Intervention in the Dominican Republic following the 1965 military coup.
Proud Phantom - Unprogrammed tactical deployment ordered by the Secretary of Defense/JCS, not part of the regular exercise program, in which 12 F-4E Phantom IIs and at least 400 personnel were dispatched to Cairo West Air Base, Egypt, during FY 80.
Operation Prime Chance – Special operations forces operating off U.S. Navy vessels in the Persian Gulf, mid-1980s.
Operation Prize Bull – September 1971 strikes against North Vietnamese POL storage sites
Promise Kept – International Committee of the Red Cross-facilitated visit to the crash site of Scott Speicher, Iraq, 1995.
Operation Proud Bunch – Plan to strike hard logistics sites in North Vietnam within 35 miles of the DMZ. Combined with Operation Fracture Deep as Operation Proud Deep.
Operation Proud Deep – Combined Operation Fracture Deep and Operation Proud Bunch to strike Vietnam People's Air Force bases and logistics sites south of the 18th parallel.
Operation Proud Deep Alpha – Extension of Operation Proud Deep to targets south of the 20th parallel.
Proven Force – Northern air campaign from Turkey over Iraq in 1991. General Jamerson activated JTF Proven Force at Ramstein Air Base, Germany. The task force had three component organizations: Commander Air Force Forces (later to be mostly the 7440th Composite Wing (Prov)), Commander Army Forces, and Commander Joint Special Operations Task Force, which would seek and rescue downed allied pilots.
Provide – EUCOM humanitarian assistance operations first word
Provide Assistance
Operation Provide Comfort – Provide Comfort II – Kurdish security zone in northern Iraq, 1991.
Provide Hope I/II/III/IV/V – Airlift of humanitarian relief to the Commonwealth of Independent States
Provide Promise – Airlift of humanitarian relief to Bosnia-Herzegovina
Provide Refuge
Operation Provide Relief – 1992 humanitarian relief missions to Somalia. See Operation Restore Hope.
Provide Transition
Purple – British joint exercise prefix
Purple Dragon – Joint forced entry operations. Purple Dragon 00/Roving Sands 00, Fort Bragg and Puerto Rico; Purple Dragon 98/JTFEX 98–1, Fort Bragg and Puerto Rico, Jan–Feb 1998.
Exercise Purple Star/Royal Dragon – Held in April–May 1996, the exercise brought together the XVIII Airborne Corps and the 82nd Airborne Division (both from the United States), 5th Airborne Brigade (British Army), the U.S. Air Force, the Royal Air Force, the U.S. Marines, 3 Commando Brigade Royal Marines and the Royal Navy. It saw the deployment of 5th Airborne Brigade to North Carolina in the largest Anglo-American exercise for twenty-three years. Relieved from the back-to-back commitment of aircraft carriers to the Adriatic in support of UNPROFOR, the Royal Navy sent a large force, headed by a Carrier Task Group with HMS Invincible flying the flag of Rear Admiral Alan West, Commander UK Task Group; HMS Fearless and an amphibious group; and a mine countermeasures group headed by . U.S. Atlantic Command, headquartered at Norfolk, Virginia, directed the exercise. The aim of the operation was to practise a joint UK force in combined manoeuvre in an overseas theatre. The exercise provided the first opportunity to test the new UK Permanent Joint Headquarters, which provided the core of the British Joint Headquarters in support of the exercise Joint Commander. The exercise was also designed to test the new UK Joint Rapid Deployment Force, which was established on 1 August 1996. A description of 1st Brigade, 82nd Airborne Division's experience during Royal Dragon can be found in Tom Clancy, Airborne: A Guided Tour of an Airborne Task Force, Berkley Books, New York, 1997, 222–228.
Purple Guardian – U.S. homeland defense exercise
Purple Horizon – Cyprus, 2005.
Purple Solace – On 4–6 June 2013, three officers from the Combined Joint Operations from the Sea Center of Excellence (CJOS COE) supported the U.S. Joint Forces Staff College's Exercise "Purple Solace" as mentors.
This exercise is a 3-day faculty-guided planning exercise which reinforces the initial steps necessary to derive a mission statement, a commander's intent (end state) and a limited concept of operations in response to a series of natural disasters.
Exercise Purple Sound is a high-level computer-assisted exercise designed to support the training of the command and staff of the Permanent Joint Headquarters, which deploys and commands the Joint Rapid Deployment Force.
Operation Purple Storm
Exercise Purple Warrior
Q
Operation Quick Lift 1995 – Support of NATO Rapid Reaction Force and Croatian forces' deployment to Bosnia-Herzegovina.
R
Rainmaker – Turse and Naylor write that this United States Africa Command codename refers to "A highly sensitive classified signals intelligence effort. Bases used: Chebelley, Djibouti; Baidoa, Baledogle, Kismayo and Mogadishu, Somalia."
Rapid Trident 14 – The exercise, in Lviv, Ukraine, near the border with Poland, was intended to "promote regional stability and security, strengthen partnership capacity, and foster trust while improving interoperability between USAREUR, the land forces of Ukraine, and other (NATO and partner) nations," according to the USAREUR website.
Operation Ready Swap – Use of reserve units to transport aircraft engines between Air Materiel Command's depots.
Exercise Real Thaw – An annual exercise run by the Portuguese Air Force with the participation of the Army and Navy and foreign military forces. The exercise has the objective of creating as realistic an operational environment as possible in which Portuguese forces might participate, providing joint training with land, air and naval forces, and providing interoperability between different countries.
Operation Red Hat – The publicly acknowledged part of this operation involved the relocation of chemical and biological weapons stored on Okinawa to Johnston Atoll for destruction. Most of the operation took place at night, to avoid observation of the operation by the Okinawans, who resented the presence of chemical munitions on the island. The chemical weapons were brought from Okinawa under Operation Red Hat with the re-deployment of the 267th Chemical Company and consisted of rockets, mines, artillery projectiles, and bulk 1-ton containers filled with sarin, Agent VX, vomiting agent, and blister agents such as mustard gas. Chemical agents were stored in the high-security Red Hat Storage Area (RHSA), which included hardened igloos in the weapon storage area, the Red Hat building (#850), two Red Hat hazardous waste warehouses (#851 and #852), an open storage area, and security entrances and guard towers. There are indications that the codename was also used to designate storage and/or testing of chemical and biological agents on Okinawa in the 1960s, connected with Project 112.
Reef Point - First designation for specially equipped Lockheed P-3 Orion long-range maritime patrol aircraft, operated by VPU-1 and VPU-2 (Patrol Squadron, Special Projects), U.S. Navy.
Exercise Reforger – Return of Forces to Germany (Cold War).
Operation Resolute Response – DOD support to recovery from the 1998 U.S. embassy bombings in Nairobi, Kenya, and Dar es Salaam, Tanzania.
Operation Resolute Support – NATO non-combat advisory and training mission to support the Government of the Islamic Republic of Afghanistan (GiROA) from 2015 onwards.
Operation Restore Hope – U.S. participation in UNOSOM II, 1992–1994, Somalian humanitarian aid and security efforts.
Resultant Fury – DoD activity in November 2004 which included the weapons testing of free-fall bombs against decommissioned USN vessels off Hawaii.
Exercise RIMPAC – Rim of the Pacific Exercise, large-scale U.S. Pacific Fleet activity with allied involvement.
Rivet Amber – One-of-a-kind Boeing RC-135E reconnaissance aircraft equipped with a large 7 MW Hughes Aircraft phased-array radar system. Originally designated C-135B-II, project name Lisa Ann.
Rivet Cap – 1981–1984 decommissioning of Titan II intercontinental ballistic missiles
Rivet Switch – A 1970s program to upgrade VHF and UHF air/ground communications to solid-state devices.
Operation Rolling Thunder – Air strikes on North Vietnam.
Rugged Nautilus '96 – A joint service exercise aimed at discouraging any possible terrorist challenges through a show of force in the Gulf while the 1996 Atlanta Olympics were underway. Also described as "a USAF-Navy exercise to test US Central Command's ability to gather and organize forces quickly in theater."
S
Exercise Saber Guardian – July 2016 exercise involving the 116th Cavalry Brigade Combat Team (ARNG) and troop elements from Armenia, Azerbaijan, Bulgaria, Canada, Georgia, Moldova, Poland, Romania, Ukraine and the U.S.
Safari Hunter - 2017 operation in Somalia with SNA/Jubaland striking north from Kismayo against Al-Shabaab centered in Middle Juba. The "Hunter" series indicates Somali National Army Danab participation.
Exercise Safe Skies – 2011 Ukrainian, Polish and American air forces fly-together to help prepare the Polish and Ukrainian air forces for enhanced air supremacy and air sovereignty operations. Envisaged as helping lead up to Ukraine hosting Euro 2012. The California Air National Guard began preparing the event in 2009 via the State Partnership Program.
Exercise Sage Brush – November–December 1955 joint U.S. Army/Air Force exercise at Fort Polk, Louisiana, lasting forty-five days. Involved 110,000 Army and 30,500 Air Force personnel to trial Army airmobility concepts in an attempt to settle a dispute over the matter between the Army and Air Force. Some helicopter lift was provided by the special 516th Troop Carrier Group, Assault, Rotary Wing, flying Piasecki H-21s as part of the 20th Combat Airlift Division (Provisional).
Saharan Express: AFRICOM Naval Forces Africa scheduled and conducted multilateral combined maritime exercises with West and North African nations, supported by European partners, focusing on maritime security and domain awareness. Saharan Express 2012 was to be held 23–30 April 2012.
Exercise Salty Hammer – NATO air defense exercise, including sorties flown over the UK.
Operation Secure Tomorrow was a multinational peace operation that took place from February 2004 to July 2004 in Haiti.
Seed Hawk X-Ray – 1971 program to modify Wild Weasel aircraft to operate the AGM-78B Standard ARM
Senior Ball – Shipment of material directed by USAF.
Senior Blade – Senior Year ground station (a van capable of exploiting U-2R digital imagery).
Senior Blue – Air-to-Air Anti-Radiation Missile (?)
Senior Book – U-2R COMINT system, used on flights from OL-20, 1970s–1980s
Senior Bowl – Two B-52Hs, serials 60-21 and 60-36, modified to carry two Lockheed D-21B "Tagboard" reconnaissance drones
Senior Cejay – Northrop B-2A stealth bomber; continuation of Senior Ice (name changed when the development contract was awarded to Northrop on 4 November 1981). Sometimes quoted as Senior CJ.
Senior Chevron – Senior Year-related program.
Senior Citizen – Classified program; probably a projected Special Operations stealth and/or STOL transport aircraft. Arkin writes that this was an Aurora reconnaissance aircraft or similar low observable system (Arkin, 495).
Senior Class – Shipment of material directed by Headquarters USAF.
Senior Club – Low-observable anti-tamper advanced technology systems assessment.
Senior Crown – Lockheed SR-71 reconnaissance aircraft, based on CIA-sponsored A-12 "Oxcart"
Senior Dagger – A test and evaluation exercise performed by Control Data Corp. for Air Force Rome Air Development Center for purposes of reconnaissance. It may involve flights of Lockheed SR-71 reconnaissance aircraft in Southeast Asia.
Senior Dance – ELINT/SIGINT program, possibly U-2 related.
Senior Game – A military item shipping designation.
Senior Glass – U-2 SIGINT sensor package upgrade combining Senior Spear and Senior Ruby
Senior Guardian – Grob/E-Systems D-500 Egrett, high-altitude surveillance/reconnaissance aircraft, German-US cooperation, 1980s
Senior Trend – Lockheed F-117 Nighthawk special access program development, previously Have Blue (Arkin, 496).
Sentinel Alloy: Land gravity surveys in support of the Minuteman system, cancelled
Sentinel Lock – Development of raster annotated photography by Aeronautical Charting and Information Service for mapping in Southeast Asia.
Shadow Express – A non-combatant evacuation operation in Liberia, September–October 1998, to ensure the evacuation of Liberian faction leader Roosevelt Johnson (Krahn). Run by Special Operations Command, Europe, involving a 12-man survey and assessment team (ESAT), the , dispatched from NSWU-10 at Rota, Spain, and a Hercules-delivered detachment of NSWU-2, which was moved to Freetown. USS Firebolt also arrived. (Arkin, 500).
Operation Sharp Edge – A non-combatant evacuation operation (NEO) carried out by the 22nd and 26th Marine Expeditionary Units of the United States Marine Corps in Liberia from 5–21 August 1990 (and 1991?). Involved . (Arkin, 503)
Operation Sharp Guard was a multi-year joint naval blockade in the Adriatic Sea by NATO and the Western European Union on shipments to the former Yugoslavia. Succeeding NATO's Maritime Guard and the WEU's Sharp Fence, it ran from 1993 to 1996.
Shining Express – NEO evacuation in Liberia, 2003, coordinated aboard .
Silent Warrior – Exercise Silent Warrior 16, held in Garmisch, Germany, from Nov. 9–13, 2016, brought together U.S. Special Operations Forces and representatives from 19 African states to discuss cooperative methods to combat Violent Extremist Organizations.
Operation Sixteen Ton – Use of reserve troop carrier units to move United States Coast Guard equipment from Floyd Bennett Naval Air Station to Isla Grande Airport in Puerto Rico and San Salvador in the Bahamas.
Sky Shield was a series of three large-scale military exercises conducted in the United States in 1960, 1961, and 1962 by the North American Aerospace Defense Command (NORAD) and the Strategic Air Command (SAC) to test defenses against a Soviet Air Forces attack.
Operation Southern Watch – Enforcement of the no-fly zone in southern Iraq
Exercise Space Flag is a United States Space Force exercise dedicated to providing tactical space units with advanced training under contested, degraded, and operationally-limited ("CDO") conditions. First held circa 2017.
Exercise Spring Train - An annual Royal Navy-led exercise.
Operation Steel Box/Golden Python – DOD-supported withdrawal of chemical munitions from Germany and coordination of delivery/transport to Johnston Atoll.
Operation Steep Hill (I through XV) – Planning and intelligence operations for the use of military force to prevent violence in association with civil rights demonstrations in the early 1960s. Steep Hill XIII called elements of the Alabama National Guard and regular Army into service to protect marches from Selma to Montgomery, Alabama.
Stellar Wind was the code name of a National Security Agency (NSA) warrantless surveillance program begun under the George W. Bush administration's President's Surveillance Program (PSP). The program was approved by President Bush shortly after the September 11, 2001 attacks.
Sunset Lily - A project to conduct a test launch of a Martin CGM-13B Mace missile from Kadena Air Base, Okinawa, to a target island in Japan after testing of the Mace/Matador family had ended at Cape Canaveral Air Force Station. Cancelled because of political implications.
Operation Swift Lift – Use of reserve units to transport high-priority cargo for the Air Force during their inactive duty training periods.
T
Exercise Talisman Saber
Tamale Pete – Vietnam War air refueling operations planning. See Young Tiger.
Tandem Thrust – In 2005, Exercise Tandem Thrust, along with Exercises Crocodile and Kingfisher, was combined to form Exercise Talisman Saber.
Teal Ruby – STS-62-A was a planned Space Shuttle mission to deliver a reconnaissance payload (Teal Ruby) into polar orbit
Exercise Teamwork was a major NATO biennial exercise in defense of Norway against a Soviet land and maritime threat. It was established by Norway, Denmark, the UK and the U.S. in 1982 and grew considerably up until the early 1990s. Teamwork '88 allowed NATO to evaluate its ability to conduct a maritime campaign in the Norwegian Sea and project forces ashore in northern Norway. Teamwork '92 was the largest NATO exercise for more than a decade. Held in the northern spring of 1992, it included a total of over 200 ships and 300 aircraft in the North Atlantic. Vice Admiral Nicholas Hill-Norton, Flag Officer, Surface Flotilla, led the RN contingent as Commander, Anti-Submarine Warfare Striking Force (CASWF), with Commodore Amphibious Warfare (COMAW) embarked in .
Tempest Express – United States Pacific Command computer-assisted exercise to train the HQ USPACOM staff to function as a Joint Task Force headquarters. The exercise is held as often as needed, three to seven times a year. Tempest Express 2013 involved elements of the PACOM command post traveling to New Zealand to carry out a disaster relief exercise.
Tempest Rapid – Employment of DOD resources in natural disaster emergencies in the continental United States.
U
Ulchi-Freedom Guardian – Previously Ulchi Focus Lens. Command post/computerised exercise simulating the defense of South Korea.
Ultimate Hunter – Counterterrorism operation by a U.S.-trained Kenyan force in Somalia
Union Flash – Simulation exercise, annual, 1998: WPC, Einsiedlerhof AFS, Germany, 05/1998.
Operation United Shield 1995 – Support of the US withdrawal from Somalia.
Operation Unified Resolve
Upgrade Silo: A modification improvement program for Minuteman III.
Upgun Cobras – A Bell Huey Cobra with a Sperry helmet-mounted optical gunsight.
Operation Uphold Democracy – Removal of the junta in Haiti
Upper Hand – A joint U.S.-Norwegian exercise designed to promote proficiency in Anti-Submarine Warfare (ASW), underway logistic support, and communications procedures.
Operation Urgent Fury – 1983 replacement of the revolutionary government in Grenada
V
Valiant Blitz - The 1990 iteration was an amphibious exercise landing in South Korea, part of the larger PACEX 89.
Exercise Valiant Shield – United States Pacific Command large-scale warfighting exercise
Exercise Valiant Usher 86 – A declassified U.S. Central Command historical document said that: 'Valiant Usher 86 was conducted in Somalia from 1 to 7 November 1985. Initially planned to be an amphibious, combined/joint exercise including the Mediterranean ARG/MAU and [Somali] forces, the exercise was completely restructured when the ARG was retained in the Mediterranean and replaced with a battalion (-) of the 101st Airborne Division. In spite of limited planning time, the exercise was described as a "total success", highlighting both the rapid capability... to substitute forces, as well as the flexibility of the forces to accomplish assigned objectives.'
Victory Scrimmage - V Corps multi-divisional exercise of January–February 2003 to prepare for Operation Iraqi Freedom
Exercise Vigilant Eagle – NORAD/Russian Armed Forces exercise, repeated several times, involving response to a simulated hijacked airliner over Canadian/U.S./Russian airspace.
Operation Vigilant Warrior 1994 – Response to the Iraqi buildup along the Kuwait border.
Volant Oak – See Coronet Oak. Operation name when directed by Military Airlift Command from 1977 to 1992
Exercise Vortex Warrior – RAF Chinook exercise for desert operations in preparation for Afghan deployments at the U.S. Naval Air Facility El Centro, in Imperial County, Southern California. Held in 2014; planned for 2018.
W
Wanda Belle – "was Nancy Rae, 1 RC-135S '59-1491', named after Wanda Leigh O'Rear, daughter of Big Safari program director F. E. O'Rear, 01/1964-01/1967, to Rivet Ball, modified under Big Safari."
White Alice - The White Alice Communications System (WACS), a United States Air Force telecommunication network with 80 radio stations located in Alaska during the Cold War.
White Cloud – (1) A Navy program for liquid propellant guns; (2) USN ocean reconnaissance/surveillance Naval Ocean Surveillance System, first generation of satellites.
Operation White Star, also known as Project White Star, was a United States military advisory mission to Laos in 1959–62.
Wild Weasel – General codename for U.S. Air Force Suppression of Enemy Air Defenses fighter-bomber aircraft: air-launched anti-radar missile firing aircraft guided by radar emissions.
Y
Yankee Team – Reconnaissance over Laos, 1964; Vietnam War Tanker Task Force.
Yellow Tag – An electronics project administered by Naval Sea Systems Command.
Young Tiger – Tanker Task Force. Planned under Tamale Pete; Operations Order for the Boeing KC-135 Stratotanker force operating from Kadena Air Base and later Ching Chuan Kang Air Base (1967 onwards), refueling tactical air operations over Vietnam, Laos, etc., after 1965.
LGM-30 Minuteman-related code names
HAVE LEAP: A Space and Missile Test Center support of the Minuteman-III program
MIDDLE GUST: An Air Force test conducted at Crowley, CO, involving a simulated nuclear overblast of a Minuteman silo
OLD FOX: Minuteman-III flight tests
OLYMPIC ARENA III: Strategic Air Command missile competition of all nine operational missile units
OLYMPIC EVENT: A Minuteman III nuclear operational systems test
OLYMPIC PLAY: A Strategic Air Command missiles and operational ground equipment program for EWO missions
OLYMPIC TRIALS: A program to represent a series of launches having common objectives
PACER GALAXY: Support of the Minuteman force modification program
PAVE PEPPER: An Air Force SAMSO (Space & Missile Systems Organization) project to decrease the size of the Minuteman III warheads and allow for more to be launched by one Minuteman.
RIVET ADD: Modification of Minuteman-II launch facilities to hold MM III missiles
RIVET MILE: Minuteman Integrated Life Extension. Included IMPSS security system upgrade.
RIVET SAVE: A Minuteman crew sleep program modification to reduce personnel numbers
SABER SAFE: Minuteman pre-launch survivability program
SABER SECURE: A Minuteman rebasing program
See also
Lists of allied military operations of the Vietnam War
Notes
References
Arkin, Code Names (link to Google Books partial text)
Andreas Parsch, Code Names for U.S. Military Projects and Operations
External links
Electrospaces.net, National Security Agency nicknames and codewords, last updated December 22, 2018.
https://www.thedrive.com/the-war-zone/29353/how-the-pentagon-comes-up-with-all-those-secret-project-nicknames-and-crazy-code-words (and the CJCSM referred to within it).
https://en.wikipedia.org/wiki/Marine%20Air-Ground%20Task%20Force
Marine Air-Ground Task Force
Marine Air-Ground Task Force (MAGTF, pronounced MAG-TAF) is a term used by the United States Marine Corps to describe the principal organization for all missions across the range of military operations. MAGTFs are a balanced air-ground, combined arms task organization of Marine Corps forces under a single commander that is structured to accomplish a specific mission. The MAGTF was formalized by the publication of Marine Corps Order 3120.3 in December 1963 ("The Marine Corps in the National Defense," MCDP 1-0). It stated:

A Marine air-ground task force with separate air ground headquarters is normally formed for combat operations and training exercises in which substantial combat forces of both Marine aviation and Marine ground units are included in the task organization of participating Marine forces.

Since World War II, the United States Marine Corps has deployed projection forces in many crises, with the ability to move ashore with sufficient sustainability for prolonged operations. MAGTFs have long provided the United States with a broad spectrum of response options when U.S. and allied interests have been threatened and in non-combat situations which require critical response. Selective, timely and credible commitment of air-ground units has, on many occasions, helped bring stability to a region and sent signals worldwide that the United States is willing to defend its interests, and is able to do so with a powerful force on short notice.

Composition
The four core elements of a Marine air–ground task force are:

The command element (CE), a headquarters unit organized into a MAGTF (MEU, MEB, MEF) headquarters (HQ) group, that exercises command and control (management and planning for manpower, intelligence, operations and training, and logistics functions) over the other elements of the MAGTF. The HQ group consists of communications, intelligence, surveillance, and law enforcement (i.e., military police) detachments, companies, and battalions, and reconnaissance (Force Reconnaissance), and liaison (ANGLICO) platoons, detachments, and companies.

The ground combat element (GCE), composed primarily of infantry units (infantry battalions organized into battalion landing teams, regimental combat teams, and Marine divisions). These organizations contain a headquarters unit that provides command and control (management and planning for manpower, intelligence, operations and training, and logistics functions) as well as scout/sniper, aviation liaison/forward air controller, NBC defense, communications, service (supply, motor transport, weapons maintenance, and dining facility), and Navy combat medical and chaplain's corps personnel. The GCE also contains combat support units, including artillery, armor (tank, assault amphibian, and light armored reconnaissance), combat engineer (including EOD), and reconnaissance units. At the division level, the GCE also contains limited organic combat service support, including a truck company, a military police/law enforcement company, and the division band.

The aviation combat element (ACE), which contributes the air power to the MAGTF, includes all aircraft (fixed wing, helicopters, tiltrotor, and UAV) and aviation support units. The units are organized into detachments, squadrons, groups, and wings, except for low altitude air defense units, which are organized into platoons, detachments, batteries, and battalions.
These units include pilots, flight officers, enlisted aircrewmen, aviation logistics (aircraft maintenance, aviation electronics, aviation ordnance, and aviation supply) and Navy aviation medical and chaplain's corps personnel, as well as ground-based air defense units, and those units necessary for command and control (management and planning for manpower, intelligence, operations and training, and logistics functions), aviation command and control (tactical air command, air defense control, air support control, and air traffic control), communications, and aviation ground support (e.g., airfield services, bulk fuels/aircraft refueling, crash rescue, engineer construction and utilities support, EOD, motor transport, ground equipment supply and maintenance, local security/law enforcement, and the wing band).

The logistics combat element (LCE), organized into battalions, regiments, and groups, has its own headquarters element for command and control (management and planning for manpower, intelligence, operations and training, and logistics functions) of its subordinate units and contains the majority of the combat service support units for the MAGTF, including: heavy motor transport, ground supply, heavy engineer support, ground equipment maintenance, and advanced medical and dental units, along with certain specialized groups such as air delivery, EOD, and landing support teams. Navy Seabees are also part of the MAGTF; see https://www.globalsecurity.org/military/library/policy/usmc/mcwp/4-11-5/mcwp4-11-5.pdf

The four core elements describe types of forces needed and not actual military units or commands. The basic structure of the MAGTF never varies, though the number, size, and type of Marine Corps units composing each of its four elements will always be mission dependent. The flexibility of the organizational structure allows for one or more subordinate MAGTFs to be assigned.

Types
Marine Expeditionary Force (MEF)
A Marine Expeditionary Force (MEF), commanded by a lieutenant general, is composed of a MEF headquarters group (MEF HQG), a Marine division (MARDIV), a Marine aircraft wing (MAW), and a Marine logistics group (MLG). For comparison purposes, in relation to other U.S. ground and air combat forces, the MEF HQG may be considered as roughly analogous to a notional U.S. Army (USA) corps headquarters that also contains a combined battlefield surveillance brigade (BFSB)/maneuver enhancement brigade (Army MEB). This comparison is based on the fact that the MEF HQG contains several of the key components of the BFSB and Army MEB (viz., network support, military intelligence, military police, and long-range surveillance) resident in its organic communications, intelligence, law enforcement, and radio battalions and attached force reconnaissance company. The MARDIV, containing two or three infantry regiments, an artillery regiment, and several separate armored vehicle battalions (i.e., tank, assault amphibian, and light armored reconnaissance) and other combat support battalions (i.e., reconnaissance, combat engineer, and headquarters), is approximately equivalent to a notional U.S. Army light infantry division organized with two or three brigade combat teams, division artillery (DIVARTY), a division sustainment brigade, a division headquarters and headquarters battalion and others, and is reinforced with an armored brigade combat team (ABCT).
(While the tank battalion of a MARDIV has fewer tanks than an ABCT, with 58 vice 90, respectively, the MARDIV assault amphibian vehicle (AAV) battalion has four companies of 42 AAVs each and is capable of transforming an entire Marine infantry regiment into an amphibious mechanized infantry force.) The MAW, with its aircraft groups (MAGs) and air control groups (MACGs), is comparable to a notional U.S. Air Force (USAF) numbered air force consisting of a mix of several USAF wings and USA combat aviation brigades (nominally at least two of each). Lastly, the MLG and its organic logistics regiments are the USMC organizational and functional equivalents of a USA Sustainment Command (Expeditionary) and its constituent sustainment brigades.

The MEF, which varies in size, is capable of conducting missions across the full range of military operations and of supporting and sustaining itself for up to 60 days in an austere expeditionary environment. For example, the I Marine Expeditionary Force (I MEF) is composed of the I MEF Headquarters Group, the 1st Marine Division, the 3rd Marine Aircraft Wing and the 1st Marine Logistics Group, all based on the West Coast. Two notable deployments of an entire MEF were when I Marine Expeditionary Force deployed in support of Operations Desert Shield and Desert Storm. I MEF ultimately consisted of the 1st and 2nd Marine Divisions as well as considerable Marine air and support units. I MEF also deployed to Somalia in December 1992 for the humanitarian relief effort there as well as deploying to Kuwait beginning in 2002 and taking part in the 2003 Invasion of Iraq.

The three Marine Expeditionary Forces are:
I Marine Expeditionary Force, located at Camp Pendleton, California
II Marine Expeditionary Force, located at Camp Lejeune, North Carolina
III Marine Expeditionary Force, located at Camp Courtney, Okinawa, Japan

Marine Expeditionary Brigade (MEB)
A Marine Expeditionary Brigade (MEB) is larger than a Marine Expeditionary Unit (MEU) but smaller than a MEF. The MEB, which varies in size, is capable of conducting missions across the full range of military operations and of supporting and sustaining itself for up to 30 days in an austere expeditionary environment. It is constructed around a reinforced infantry regiment designated as a regimental combat team (RCT), a composite Marine aircraft group, and a combat logistics regiment (CLR), formerly known as a brigade service support group, all commanded by a battalion-sized command element designated as a MEB headquarters group. The MEB, commanded by a general officer (either a major general or a brigadier general), is task-organized to meet the requirements of a specific situation. It can function as part of a joint task force, as the lead echelon of the MEF, or alone.
1st Marine Expeditionary Brigade
2nd Marine Expeditionary Brigade
3rd Marine Expeditionary Brigade
4th Marine Expeditionary Brigade (Anti-Terrorism)
5th Marine Expeditionary Brigade
9th Marine Expeditionary Brigade

Marine Expeditionary Unit (MEU)
The smallest type of MAGTF is the Marine expeditionary unit (MEU) Special Operations Capable (SOC), designated as a MEU (SOC) and commanded by a colonel. The MEU is capable of conducting limited, specialized, and selected special operations missions and of supporting and sustaining itself for up to 15 days in an austere expeditionary environment.
The MEU is based on a reinforced Marine infantry battalion, designated as a battalion landing team (BLT), supported by a medium tiltrotor squadron (VMM) (reinforced), containing both fixed-wing and rotary-wing aircraft and aviation support detachments, and a combat logistics battalion (CLB), all commanded by a company-sized MEU headquarters group. There are usually three MEUs assigned to each of the U.S. Navy Atlantic and Pacific Fleets, with another MEU based on Okinawa. While one MEU is on deployment, one MEU is training to deploy and one is standing down, resting its Marines, and refitting. Each MEU is rated as capable of performing special operations, though the USMC's definition of this is not consistent with that of SOCOM. They are not considered special operations units by the Department of Defense.
11th Marine Expeditionary Unit
13th Marine Expeditionary Unit
15th Marine Expeditionary Unit
22nd Marine Expeditionary Unit
24th Marine Expeditionary Unit
26th Marine Expeditionary Unit
31st Marine Expeditionary Unit

See also
Fleet Marine Force (FMF)
Organization of the United States Marine Corps
Special Purpose Marine Air-Ground Task Force – Crisis Response – Africa
Special Purpose Marine Air-Ground Task Force – Crisis Response – Central Command
United States Army's Brigade Combat Team, for comparison
United States Marine Corps Aviation

References

Bibliography

External links
Additional info from Globalsecurity.com
https://en.wikipedia.org/wiki/Electroencephalography
Electroencephalography
Electroencephalography (EEG) is a method to record an electrogram of the electrical activity on the scalp that has been shown to represent the macroscopic activity of the surface layer of the brain underneath. It is typically non-invasive, with the electrodes placed along the scalp. Electrocorticography, involving invasive electrodes, is sometimes called "intracranial EEG".

EEG measures voltage fluctuations resulting from ionic current within the neurons of the brain. Clinically, EEG refers to the recording of the brain's spontaneous electrical activity over a period of time, as recorded from multiple electrodes placed on the scalp. Diagnostic applications generally focus either on event-related potentials or on the spectral content of EEG. The former investigates potential fluctuations time-locked to an event, such as 'stimulus onset' or 'button press'. The latter analyses the type of neural oscillations (popularly called "brain waves") that can be observed in EEG signals in the frequency domain.

EEG is most often used to diagnose epilepsy, which causes abnormalities in EEG readings. It is also used to diagnose sleep disorders, depth of anesthesia, coma, encephalopathies, and brain death. EEG used to be a first-line method of diagnosis for tumors, stroke and other focal brain disorders, but this use has decreased with the advent of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Despite limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. It is one of the few mobile techniques available and offers millisecond-range temporal resolution, which is not possible with CT, PET or MRI.

Derivatives of the EEG technique include evoked potentials (EP), which involve averaging the EEG activity time-locked to the presentation of a stimulus of some sort (visual, somatosensory, or auditory). Event-related potentials (ERPs) refer to averaged EEG responses that are time-locked to more complex processing of stimuli; this technique is used in cognitive science, cognitive psychology, and psychophysiological research.
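The two analysis approaches described above, time-locked averaging and frequency-domain analysis, can be illustrated with a short signal-processing sketch. The following Python example is a minimal illustration on synthetic data and is not the method of any particular clinical system; the sampling rate, stimulus times, epoch window, and frequency-band limits are assumptions made only for the example.

```python
import numpy as np
from scipy.signal import welch

fs = 250                      # assumed sampling rate in Hz
rng = np.random.default_rng(0)

# Synthetic "EEG": 60 s of noise with a 10 Hz (alpha-range) rhythm mixed in.
t = np.arange(0, 60, 1 / fs)
eeg = rng.normal(0, 10, t.size) + 5 * np.sin(2 * np.pi * 10 * t)  # microvolts

# Event-related potential: average epochs time-locked to assumed stimulus onsets.
events = np.arange(2 * fs, t.size - fs, 2 * fs)    # one assumed stimulus every 2 s
pre, post = int(0.2 * fs), int(0.8 * fs)           # epoch from -200 ms to +800 ms
epochs = np.stack([eeg[e - pre:e + post] for e in events])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
erp = epochs.mean(axis=0)                              # average across trials

# Spectral content: power in conventional frequency bands ("brain waves").
freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])           # integrate PSD over the band
    print(f"{name}: {power:.1f} uV^2")
```

On this synthetic signal the alpha band dominates simply because a 10 Hz rhythm was injected; with real recordings, the same epoch-averaging and band-power computations underlie ERP studies and the "brain wave" terminology used above.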
Matthews in 1934 and developed by them. In 1934, Fisher and Lowenbach first demonstrated epileptiform spikes. In 1935, Gibbs, Davis and Lennox described interictal spike waves and the three cycles/s pattern of clinical absence seizures, which began the field of clinical electroencephalography. Subsequently, in 1936 Gibbs and Jasper reported the interictal spike as the focal signature of epilepsy. The same year, the first EEG laboratory opened at Massachusetts General Hospital. Franklin Offner (1911–1999), professor of biophysics at Northwestern University, developed a prototype of the EEG that incorporated a piezoelectric inkwriter called a Crystograph (the whole device was typically known as the Offner Dynograph). In 1947, the American EEG Society was founded and the first International EEG Congress was held. In 1953, Aserinsky and Kleitman described REM sleep. In the 1950s, William Grey Walter developed an adjunct to EEG called EEG topography, which allowed for the mapping of electrical activity across the surface of the brain. This enjoyed a brief period of popularity in the 1980s and seemed especially promising for psychiatry, but it was never accepted by neurologists and remains primarily a research tool. An electroencephalograph system manufactured by Beckman Instruments was used on at least one of the Project Gemini manned spaceflights (1965–1966) to monitor the brain waves of astronauts on the flight; it was one of many specialized Beckman instruments used by NASA. In 1988, Stevo Bozinovski, Mihail Sestakov, and Liljana Bozinovska reported on EEG control of a physical object, a robot. In October 2018, scientists connected the brains of three people to experiment with the process of thought sharing. Five groups of three people participated in the experiment using EEG, and the success rate of the experiment was 81%.
Medical use
EEG is one of the main diagnostic tests for epilepsy. A routine clinical EEG recording typically lasts 20–30 minutes (plus preparation time). It is a test that detects electrical activity in the brain using small metal discs (electrodes) attached to the scalp. Routinely, EEG is used in clinical circumstances to determine changes in brain activity that might be useful in diagnosing brain disorders, especially epilepsy or another seizure disorder. An EEG might also be helpful for diagnosing or treating the following disorders:
Brain tumor
Brain damage from head injury
Brain dysfunction that can have a variety of causes (encephalopathy)
Inflammation of the brain (encephalitis)
Stroke
Sleep disorders
It can also:
distinguish epileptic seizures from other types of spells, such as psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders and migraine variants
differentiate "organic" encephalopathy or delirium from primary psychiatric syndromes such as catatonia
serve as an adjunct test of brain death in comatose patients
prognosticate in comatose patients (in certain instances)
determine whether to wean anti-epileptic medications.
At times, a routine EEG is not sufficient to establish the diagnosis or to determine the best course of treatment. In this case, attempts may be made to record an EEG while a seizure is occurring. This is known as an ictal recording, as opposed to an inter-ictal recording, which refers to EEG recorded between seizures. To obtain an ictal recording, a prolonged EEG is typically performed, accompanied by a time-synchronized video and audio recording.
This can be done either as an outpatient (at home) or during a hospital admission, preferably to an Epilepsy Monitoring Unit (EMU) with nurses and other personnel trained in the care of patients with seizures. Outpatient ambulatory video EEGs typically last one to three days. An admission to an Epilepsy Monitoring Unit typically lasts several days but may last for a week or longer. While in the hospital, seizure medications are usually withdrawn to increase the odds that a seizure will occur during admission. For reasons of safety, medications are not withdrawn during an EEG outside of the hospital. Ambulatory video EEGs, therefore, have the advantage of convenience and are less expensive than a hospital admission, but the disadvantage of a decreased probability of recording a clinical event. Epilepsy monitoring is typically done to distinguish epileptic seizures from other types of spells, such as psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders and migraine variants, to characterize seizures for the purposes of treatment, and to localize the region of brain from which a seizure originates for work-up of possible seizure surgery. Hospitals use EEG monitoring to help diagnose seizures and to inform treatment decisions and risk assessment; clinicians widely regard EEG as important for the diagnosis and evaluation of suspected seizures. As technology advances, newer monitoring systems are becoming more accurate at detecting seizures: advanced continuous EEG techniques and the simplified technique of amplitude-integrated electroencephalography (aEEG) allow clinicians to detect more seizures at the bedside. An aEEG detects electrical brain activity just as a conventional EEG does, but it can monitor brain function over much longer periods, whereas a conventional EEG recording typically covers only a few hours to days. Detecting more seizures, and detecting them sooner, means that, for example, preterm babies suffering from seizures can be treated earlier and have fewer long-term effects. Additionally, EEG may be used to monitor the depth of anesthesia, as an indirect indicator of cerebral perfusion in carotid endarterectomy, or to monitor amobarbital effect during the Wada test. EEG can also be used in intensive care units for brain function monitoring: to monitor for non-convulsive seizures/non-convulsive status epilepticus, to monitor the effect of sedation or anesthesia in patients in a medically induced coma (for treatment of refractory seizures or increased intracranial pressure), and to monitor for secondary brain damage in conditions such as subarachnoid hemorrhage (currently a research method). If a patient with epilepsy is being considered for resective surgery, it is often necessary to localize the focus (source) of the epileptic brain activity with a resolution greater than what is provided by scalp EEG. This is because the cerebrospinal fluid, skull and scalp smear the electrical potentials recorded by scalp EEG. In these cases, neurosurgeons typically implant strips and grids of electrodes (or penetrating depth electrodes) under the dura mater, through either a craniotomy or a burr hole.
The recording of these signals is referred to as electrocorticography (ECoG), subdural EEG (sdEEG) or intracranial EEG (icEEG) – all terms for the same thing. The signal recorded from ECoG is on a different scale of activity than the brain activity recorded from scalp EEG. Low-voltage, high-frequency components that cannot be seen easily (or at all) in scalp EEG can be seen clearly in ECoG. Further, smaller electrodes (which cover a smaller parcel of brain surface) allow even lower-voltage, faster components of brain activity to be seen. Some clinical sites record from penetrating microelectrodes. EEG is not indicated for diagnosing headache. Recurring headache is a common pain problem, and this procedure is sometimes used in a search for a diagnosis, but it has no advantage over routine clinical evaluation.
Research use
EEG, and the related study of ERPs, are used extensively in neuroscience, cognitive science, cognitive psychology, neurolinguistics and psychophysiological research, but also to study human functions such as swallowing. Many EEG techniques used in research are not standardised sufficiently for clinical use, and many ERP studies fail to report all of the necessary processing steps for data collection and reduction, limiting the reproducibility and replicability of many studies. Research on mental disabilities such as auditory processing disorder (APD), ADD, and ADHD is becoming more widely known, and EEG is used in both research on and treatment of these conditions.
Advantages
Several other methods to study brain function exist, including functional magnetic resonance imaging (fMRI), positron emission tomography (PET), magnetoencephalography (MEG), nuclear magnetic resonance spectroscopy (NMR or MRS), electrocorticography (ECoG), single-photon emission computed tomography (SPECT), near-infrared spectroscopy (NIRS), and event-related optical signal (EROS). Despite the relatively poor spatial sensitivity of EEG, the "one-dimensional signals from localised peripheral regions on the head make it attractive for its simplistic fidelity and has allowed high clinical and basic research throughput". Thus, EEG possesses some advantages over some of those other techniques:
Hardware costs are significantly lower than those of most other techniques.
EEG is less affected by the limited availability of technologists to provide immediate care in high-traffic hospitals.
EEG requires only a quiet room and briefcase-size equipment, whereas fMRI, SPECT, PET, MRS, or MEG require bulky and immobile equipment. For example, MEG requires equipment consisting of liquid helium-cooled detectors that can be used only in magnetically shielded rooms, altogether costing upwards of several million dollars; and fMRI requires the use of a 1-ton magnet in, again, a shielded room.
EEG can readily have a high temporal resolution (although sub-millisecond resolution generates less meaningful data), because the two to 32 data streams generated by that number of electrodes are easily stored and processed, whereas 3D spatial technologies provide thousands or millions of times as many input data streams and are thus limited by hardware and software. EEG is commonly recorded at sampling rates between 250 and 2000 Hz in clinical and research settings.
EEG is relatively tolerant of subject movement, unlike most other neuroimaging techniques. There are even methods for minimizing, or eliminating, movement artifacts in EEG data.
EEG is silent, which allows for better study of the responses to auditory stimuli.
EEG does not aggravate claustrophobia, unlike fMRI, PET, MRS, SPECT, and sometimes MEG.
EEG does not involve exposure to high-intensity (>1 Tesla) magnetic fields, as in some of the other techniques, especially MRI and MRS. These can cause a variety of undesirable issues with the data, and also prohibit use of these techniques with participants who have metal implants in their body, such as metal-containing pacemakers.
EEG does not involve exposure to radioligands, unlike positron emission tomography.
ERP studies can be conducted with relatively simple paradigms, compared with, for example, block-design fMRI studies.
EEG is relatively non-invasive, in contrast to electrocorticography, which requires electrodes to be placed on the actual surface of the brain.
EEG also has some characteristics that compare favorably with behavioral testing:
EEG can detect covert processing (i.e., processing that does not require a response).
EEG can be used in subjects who are incapable of making a motor response.
Some ERP components can be detected even when the subject is not attending to the stimuli.
Unlike other means of studying reaction time, ERPs can elucidate stages of processing (rather than just the final end result).
The simplicity of EEG readily provides for tracking of brain changes during different phases of life. EEG sleep analysis can indicate significant aspects of the timing of brain development, including evaluating adolescent brain maturation.
In EEG there is a better understanding of what signal is measured as compared to other research techniques, e.g. the BOLD response in MRI.
Disadvantages
Low spatial resolution on the scalp. fMRI, for example, can directly display areas of the brain that are active, while EEG requires intense interpretation just to hypothesize what areas are activated by a particular response.
EEG poorly measures neural activity that occurs below the upper layers of the brain (the cortex).
Unlike PET and MRS, EEG cannot identify specific locations in the brain at which various neurotransmitters, drugs, etc. can be found.
It often takes a long time to connect a subject to EEG, as this requires precise placement of dozens of electrodes around the head and the use of various gels, saline solutions, and/or pastes to maintain good conductivity, with a cap used to keep them in place. While the length of time differs depending on the specific EEG device used, as a general rule it takes considerably less time to prepare a subject for MEG, fMRI, MRS, or SPECT.
Signal-to-noise ratio is poor, so sophisticated data analysis and relatively large numbers of subjects are needed to extract useful information from EEG.
With other neuroimaging techniques
Simultaneous EEG recordings and fMRI scans have been obtained successfully, though recording both at the same time effectively requires that several technical difficulties be overcome, such as the presence of ballistocardiographic artifact, MRI pulse artifact and the induction of electrical currents in EEG wires that move within the strong magnetic fields of the MRI. While challenging, these have been successfully overcome in a number of studies. MRI scanners produce detailed images by generating strong magnetic fields that may induce potentially harmful displacement force and torque; these fields can also produce potentially harmful radio-frequency heating and create image artifacts that render images useless. Due to these potential risks, only certain medical devices can be used in an MR environment.
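The repetitive, scanner-locked artifacts mentioned above are often handled by building a template of the artifact and subtracting it from the EEG, an approach generally known as average artifact subtraction. The sketch below illustrates only the basic idea on synthetic single-channel data; the sampling rate, artifact period, and variable names are assumptions made for the example, not details from the studies referred to here.

```python
import numpy as np

# Minimal illustration of template-based artifact subtraction for a repetitive
# artifact (e.g., the MRI gradient artifact in simultaneous EEG-fMRI).
# Synthetic single-channel example; all parameters are illustrative only.

fs = 1000                      # sampling rate in Hz (assumed)
n_samples = 20_000
rng = np.random.default_rng(0)

eeg = rng.normal(0.0, 10.0, n_samples)                   # "brain" signal, microvolts
artifact_period = 2000                                   # one artifact per simulated volume
template_true = 200.0 * np.sin(np.linspace(0, 8 * np.pi, artifact_period))

onsets = np.arange(0, n_samples - artifact_period, artifact_period)
contaminated = eeg.copy()
for t in onsets:
    contaminated[t:t + artifact_period] += template_true  # add the repetitive artifact

# Average artifact subtraction: average the epochs around each onset to build
# a template, then subtract that template from every epoch.
epochs = np.stack([contaminated[t:t + artifact_period] for t in onsets])
template_est = epochs.mean(axis=0)

cleaned = contaminated.copy()
for t in onsets:
    cleaned[t:t + artifact_period] -= template_est

print("residual error RMS relative to the true EEG:",
      np.sqrt(np.mean((cleaned - eeg) ** 2)))
```

In practice, correction of the ballistocardiogram and residual gradient artifact typically involves further steps, such as precise alignment to scanner triggers and adaptive or component-based filtering, which this sketch does not attempt to cover.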
Similarly, simultaneous recordings with MEG and EEG have also been conducted, which has several advantages over using either technique alone: EEG requires accurate information about certain aspects of the skull that can only be estimated, such as skull radius, and conductivities of various skull locations. MEG does not have this issue, and a simultaneous analysis allows this to be corrected for. MEG and EEG both detect activity below the surface of the cortex very poorly, and like EEG, the level of error increases with the depth below the surface of the cortex one attempts to examine. However, the errors are very different between the techniques, and combining them thus allows for correction of some of this noise. MEG has access to virtually no sources of brain activity below a few centimetres under the cortex. EEG, on the other hand, can receive signals from greater depth, albeit with a high degree of noise. Combining the two makes it easier to determine what in the EEG signal comes from the surface (since MEG is very accurate in examining signals from the surface of the brain), and what comes from deeper in the brain, thus allowing for analysis of deeper brain signals than either EEG or MEG on its own. Recently, a combined EEG/MEG (EMEG) approach has been investigated for the purpose of source reconstruction in epilepsy diagnosis. EEG has also been combined with positron emission tomography. This provides the advantage of allowing researchers to see what EEG signals are associated with different drug actions in the brain. Recent studies using machine learning techniques such as neural networks with statistical temporal features extracted from frontal lobe EEG brainwave data has shown high levels of success in classifying mental states (Relaxed, Neutral, Concentrating), mental emotional states (Negative, Neutral, Positive) and thalamocortical dysrhythmia. Mechanisms The brain's electrical charge is maintained by billions of neurons. Neurons are electrically charged (or "polarized") by membrane transport proteins that pump ions across their membranes. Neurons are constantly exchanging ions with the extracellular milieu, for example to maintain resting potential and to propagate action potentials. Ions of similar charge repel each other, and when many ions are pushed out of many neurons at the same time, they can push their neighbours, who push their neighbours, and so on, in a wave. This process is known as volume conduction. When the wave of ions reaches the electrodes on the scalp, they can push or pull electrons on the metal in the electrodes. Since metal conducts the push and pull of electrons easily, the difference in push or pull voltages between any two electrodes can be measured by a voltmeter. Recording these voltages over time gives us the EEG. The electric potential generated by an individual neuron is far too small to be picked up by EEG or MEG. EEG activity therefore always reflects the summation of the synchronous activity of thousands or millions of neurons that have similar spatial orientation. If the cells do not have similar spatial orientation, their ions do not line up and create waves to be detected. Pyramidal neurons of the cortex are thought to produce the most EEG signal because they are well-aligned and fire together. Because voltage field gradients fall off with the square of distance, activity from deep sources is more difficult to detect than currents near the skull. Scalp EEG activity shows oscillations at a variety of frequencies. 
Several of these oscillations have characteristic frequency ranges, spatial distributions and are associated with different states of brain functioning (e.g., waking and the various sleep stages). These oscillations represent synchronized activity over a network of neurons. The neuronal networks underlying some of these oscillations are understood (e.g., the thalamocortical resonance underlying sleep spindles), while many others are not (e.g., the system that generates the posterior basic rhythm). Research that measures both EEG and neuron spiking finds the relationship between the two is complex, with a combination of EEG power in the gamma band and phase in the delta band relating most strongly to neuron spike activity. Method In conventional scalp EEG, the recording is obtained by placing electrodes on the scalp with a conductive gel or paste, usually after preparing the scalp area by light abrasion to reduce impedance due to dead skin cells. Many systems typically use electrodes, each of which is attached to an individual wire. Some systems use caps or nets into which electrodes are embedded; this is particularly common when high-density arrays of electrodes are needed. Electrode locations and names are specified by the International 10–20 system for most clinical and research applications (except when high-density arrays are used). This system ensures that the naming of electrodes is consistent across laboratories. In most clinical applications, 19 recording electrodes (plus ground and system reference) are used. A smaller number of electrodes are typically used when recording EEG from neonates. Additional electrodes can be added to the standard set-up when a clinical or research application demands increased spatial resolution for a particular area of the brain. High-density arrays (typically via cap or net) can contain up to 256 electrodes more-or-less evenly spaced around the scalp. Each electrode is connected to one input of a differential amplifier (one amplifier per pair of electrodes); a common system reference electrode is connected to the other input of each differential amplifier. These amplifiers amplify the voltage between the active electrode and the reference (typically 1,000–100,000 times, or 60–100 dB of voltage gain). In analog EEG, the signal is then filtered (next paragraph), and the EEG signal is output as the deflection of pens as paper passes underneath. Most EEG systems these days, however, are digital, and the amplified signal is digitized via an analog-to-digital converter, after being passed through an anti-aliasing filter. Analog-to-digital sampling typically occurs at 256–512 Hz in clinical scalp EEG; sampling rates of up to 20 kHz are used in some research applications. During the recording, a series of activation procedures may be used. These procedures may induce normal or abnormal EEG activity that might not otherwise be seen. These procedures include hyperventilation, photic stimulation (with a strobe light), eye closure, mental activity, sleep and sleep deprivation. During (inpatient) epilepsy monitoring, a patient's typical seizure medications may be withdrawn. The digital EEG signal is stored electronically and can be filtered for display. Typical settings for the high-pass filter and a low-pass filter are 0.5–1 Hz and 35–70 Hz respectively. 
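Digitally, these display filters correspond to standard band-pass and notch operations. The sketch below shows one way such filtering might be implemented with SciPy on a synthetic trace, using cutoffs within the ranges quoted above together with the power-line notch filter discussed in the next paragraph; the sampling rate, signal, and names are illustrative assumptions rather than settings from any particular EEG system.

```python
import numpy as np
from scipy import signal

# Sketch of clinical-style EEG display filtering: a 0.5-70 Hz band-pass plus a
# 60 Hz notch for power-line interference. Synthetic data, illustrative values.

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic trace: 10 Hz "alpha", slow drift, 60 Hz mains noise, broadband noise.
eeg = (20 * np.sin(2 * np.pi * 10 * t)
       + 50 * np.sin(2 * np.pi * 0.1 * t)
       + 15 * np.sin(2 * np.pi * 60 * t)
       + rng.normal(0, 5, t.size))

# Band-pass (high-pass 0.5 Hz, low-pass 70 Hz), applied forward and backward
# to avoid phase distortion.
sos = signal.butter(4, [0.5, 70], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, eeg)

# Notch filter at the power-line frequency (60 Hz in the US, 50 Hz elsewhere).
b_notch, a_notch = signal.iirnotch(60.0, 30.0, fs=fs)
filtered = signal.filtfilt(b_notch, a_notch, filtered)

print("standard deviation before/after filtering:", eeg.std(), filtered.std())
```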
The high-pass filter typically filters out slow artifact, such as electrogalvanic signals and movement artifact, whereas the low-pass filter filters out high-frequency artifacts, such as electromyographic signals. An additional notch filter is typically used to remove artifact caused by electrical power lines (60 Hz in the United States and 50 Hz in many other countries). The EEG signals can be captured with open-source hardware such as OpenBCI, and the signal can be processed by freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox. As part of an evaluation for epilepsy surgery, it may be necessary to insert electrodes near the surface of the brain, under the surface of the dura mater. This is accomplished via burr hole or craniotomy. This is referred to variously as "electrocorticography (ECoG)", "intracranial EEG (I-EEG)" or "subdural EEG (SD-EEG)". Depth electrodes may also be placed into brain structures such as the amygdala or hippocampus, structures which are common epileptic foci and may not be "seen" clearly by scalp EEG. The electrocorticographic signal is processed in the same manner as digital scalp EEG (above), with a couple of caveats. ECoG is typically recorded at higher sampling rates than scalp EEG because of the requirements of the Nyquist theorem – the subdural signal is composed of a higher predominance of higher-frequency components. Also, many of the artifacts that affect scalp EEG do not impact ECoG, and therefore display filtering is often not needed. A typical adult human EEG signal is about 10 µV to 100 µV in amplitude when measured from the scalp. Since an EEG voltage signal represents a difference between the voltages at two electrodes, the display of the EEG for the reading encephalographer may be set up in one of several ways. The representation of the EEG channels is referred to as a montage. Sequential montage Each channel (i.e., waveform) represents the difference between two adjacent electrodes. The entire montage consists of a series of these channels. For example, the channel "Fp1-F3" represents the difference in voltage between the Fp1 electrode and the F3 electrode. The next channel in the montage, "F3-C3", represents the voltage difference between F3 and C3, and so on through the entire array of electrodes. Referential montage Each channel represents the difference between a certain electrode and a designated reference electrode. There is no standard position for this reference; it is, however, at a different position than the "recording" electrodes. Midline positions such as Cz, Oz, or Pz are often used as the online reference because they do not amplify the signal in one hemisphere versus the other. The other popular offline references are: REST reference: an offline computational reference at a point at infinity where the potential is zero. REST (reference electrode standardization technique) uses the equivalent sources inside the brain, estimated from a set of scalp recordings, as a springboard to link recordings made against any online or offline non-zero reference (average, linked ears, etc.) to new recordings referenced to this standardized zero at infinity. Free software can be found at (Dong L, Li F, Liu Q, Wen X, Lai Y, Xu P and Yao D (2017) MATLAB Toolboxes for Reference Electrode Standardization Technique (REST) of Scalp EEG. Front. Neurosci. 11:601. ), and for more details and its performance, please refer to the original paper (Yao, D. (2001).
A method to standardize a reference of scalp EEG recordings to a point at infinity. Physiol. Meas. 22, 693–711. ) "linked ears": which is a physical or mathematical average of electrodes attached to both earlobes or mastoids. Average reference montage The outputs of all of the amplifiers are summed and averaged, and this averaged signal is used as the common reference for each channel. Laplacian montage Each channel represents the difference between an electrode and a weighted average of the surrounding electrodes. When analog (paper) EEGs are used, the technologist switches between montages during the recording in order to highlight or better characterize certain features of the EEG. With digital EEG, all signals are typically digitized and stored in a particular (usually referential) montage; since any montage can be constructed mathematically from any other, the EEG can be viewed by the electroencephalographer in any display montage that is desired. The EEG is read by a clinical neurophysiologist or neurologist (depending on local custom and law regarding medical specialities), optimally one who has specific training in the interpretation of EEGs for clinical purposes. This is done by visual inspection of the waveforms, called graphoelements. The use of computer signal processing of the EEG—so-called quantitative electroencephalography—is somewhat controversial when used for clinical purposes (although there are many research uses). Dry EEG electrodes In the early 1990s Babak Taheri, at University of California, Davis demonstrated the first single and also multichannel dry active electrode arrays using micro-machining. The single channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode. The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio. In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: Above average activity was set to on, below average off. As well as enabling Jatich to control a computer cursor the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement. 
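The switch logic described above, in which above-average band activity turns an output on and below-average activity turns it off, can be sketched in a few lines. The following is only a schematic reconstruction on synthetic data and is not the software used in that work; the band limits, window length, calibration baseline, and names are assumptions (a fixed calibration baseline with a margin is used here for stability, rather than a continuously updated average).

```python
import numpy as np
from scipy import signal

# Schematic "brain switch": estimate beta-band (13-30 Hz) power in one-second
# windows and turn the switch on when power rises well above a baseline.
# Synthetic data; this is not the system described in the text.

fs = 256
rng = np.random.default_rng(2)
eeg = rng.normal(0, 10, fs * 60)                  # one minute of noise-like EEG
t = np.arange(eeg.size) / fs
# Inject stronger beta activity in the second half to simulate "intent".
eeg[eeg.size // 2:] += 8 * np.sin(2 * np.pi * 20 * t[eeg.size // 2:])

win = fs                                          # one-second analysis windows
band_power = []
for start in range(0, eeg.size - win + 1, win):
    freqs, psd = signal.welch(eeg[start:start + win], fs=fs, nperseg=win)
    band = (freqs >= 13) & (freqs <= 30)
    band_power.append(psd[band].sum())            # relative beta power per window
band_power = np.array(band_power)

baseline = band_power[:10].mean()                 # calibrate on the first 10 seconds
switch = np.where(band_power > 1.5 * baseline, "on", "off")
print(switch)   # mostly "off" at first, mostly "on" once beta power increases
```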
In 2018, a functional dry electrode composed of a polydimethylsiloxane elastomer filled with conductive carbon nanofibers was reported. This research was conducted at the U.S. Army Research Laboratory. EEG technology often involves applying a gel to the scalp which facilitates strong signal-to-noise ratio. This results in more reproducible and reliable experimental results. Since patients dislike having their hair filled with gel, and the lengthy setup requires trained staff on hand, utilizing EEG outside the laboratory setting can be difficult. Additionally, it has been observed that wet electrode sensors’ performance reduces after a span of hours. Therefore, research has been directed to developing dry and semi-dry EEG bioelectronic interfaces. Dry electrode signals depend upon mechanical contact. Therefore, it can be difficult getting a usable signal because of impedance between the skin and the electrode. Some EEG systems attempt to circumvent this issue by applying a saline solution. Others have a semi dry nature and release small amounts of the gel upon contact with the scalp. Another solution uses spring loaded pin setups. These may be uncomfortable. They may also be dangerous if they were used in a situation where a patient could bump their head since they could become lodged after an impact trauma incident. ARL also developed a visualization tool, Customizable Lighting Interface for the Visualization of EEGs or CLIVE, which showed how well two brains are synchronized. Currently, headsets are available incorporating dry electrodes with up to 30 channels. Such designs are able to compensate for some of the signal quality degradation related to high impedances by optimizing pre-amplification, shielding and supporting mechanics. Limitations EEG has several limitations. Most important is its poor spatial resolution. EEG is most sensitive to a particular set of post-synaptic potentials: those generated in superficial layers of the cortex, on the crests of gyri directly abutting the skull and radial to the skull. Dendrites, which are deeper in the cortex, inside sulci, in midline or deep structures (such as the cingulate gyrus or hippocampus), or producing currents that are tangential to the skull, have far less contribution to the EEG signal. EEG recordings do not directly capture axonal action potentials. An action potential can be accurately represented as a current quadrupole, meaning that the resulting field decreases more rapidly than the ones produced by the current dipole of post-synaptic potentials. In addition, since EEGs represent averages of thousands of neurons, a large population of cells in synchronous activity is necessary to cause a significant deflection on the recordings. Action potentials are very fast and, as a consequence, the chances of field summation are slim. However, neural backpropagation, as a typically longer dendritic current dipole, can be picked up by EEG electrodes and is a reliable indication of the occurrence of neural output. Not only do EEGs capture dendritic currents almost exclusively as opposed to axonal currents, they also show a preference for activity on populations of parallel dendrites and transmitting current in the same direction at the same time. Pyramidal neurons of cortical layers II/III and V extend apical dendrites to layer I. Currents moving up or down these processes underlie most of the signals produced by electroencephalography. 
Therefore, EEG provides information with a large bias to select neuron types, and generally should not be used to make claims about global brain activity. The meninges, cerebrospinal fluid and skull "smear" the EEG signal, obscuring its intracranial source. It is mathematically impossible to reconstruct a unique intracranial current source for a given EEG signal, as some currents produce potentials that cancel each other out. This is referred to as the inverse problem. However, much work has been done to produce remarkably good estimates of, at least, a localized electric dipole that represents the recorded currents. EEG vs fMRI, fNIRS, fUS and PET EEG has several strong points as a tool for exploring brain activity. EEGs can detect changes over milliseconds, which is excellent considering an action potential takes approximately 0.5–130 milliseconds to propagate across a single neuron, depending on the type of neuron. Other methods of looking at brain activity, such as PET, fMRI or fUS have time resolution between seconds and minutes. EEG measures the brain's electrical activity directly, while other methods record changes in blood flow (e.g., SPECT, fMRI, fUS ) or metabolic activity (e.g., PET, NIRS), which are indirect markers of brain electrical activity. EEG can be used simultaneously with fMRI or fUS so that high-temporal-resolution data can be recorded at the same time as high-spatial-resolution data, however, since the data derived from each occurs over a different time course, the data sets do not necessarily represent exactly the same brain activity. There are technical difficulties associated with combining EEG and fMRI including the need to remove the MRI gradient artifact present during MRI acquisition. Furthermore, currents can be induced in moving EEG electrode wires due to the magnetic field of the MRI. EEG can be used simultaneously with NIRS or fUS without major technical difficulties. There is no influence of these modalities on each other and a combined measurement can give useful information about electrical activity as well as hemodynamics at medium spatial resolution. EEG vs MEG EEG reflects correlated synaptic activity caused by post-synaptic potentials of cortical neurons. The ionic currents involved in the generation of fast action potentials may not contribute greatly to the averaged field potentials representing the EEG. More specifically, the scalp electrical potentials that produce EEG are generally thought to be caused by the extracellular ionic currents caused by dendritic electrical activity, whereas the fields producing magnetoencephalographic signals are associated with intracellular ionic currents. EEG can be recorded at the same time as MEG so that data from these complementary high-time-resolution techniques can be combined. Studies on numerical modeling of EEG and MEG have also been done. Normal activity The EEG is typically described in terms of (1) rhythmic activity and (2) transients. The rhythmic activity is divided into bands by frequency. To some degree, these frequency bands are a matter of nomenclature (i.e., any rhythmic activity between 8–12 Hz can be described as "alpha"), but these designations arose because rhythmic activity within a certain frequency range was noted to have a certain distribution over the scalp or a certain biological significance. 
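In practice, these bands are usually quantified from an estimate of the signal's power spectrum, as described next. The sketch below shows one way the relative power in each conventional band might be computed with Welch's method on a synthetic, alpha-dominated trace; the band edges used are one common convention (the article notes that exact limits vary between sources), and the data and names are illustrative assumptions.

```python
import numpy as np
from scipy import signal

# Relative power in the conventional EEG bands, computed from a Welch power
# spectrum. Band edges follow one common convention; synthetic data.

fs = 256
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
eeg = 30 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 10, t.size)  # alpha-dominated

freqs, psd = signal.welch(eeg, fs=fs, nperseg=4 * fs)   # 4-second segments

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}
total = np.sum(psd[(freqs >= 1) & (freqs < 100)])

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name:5s}: {np.sum(psd[mask]) / total:.2f}")  # fraction of 1-100 Hz power
```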
Frequency bands are usually extracted using spectral methods (for instance Welch) as implemented for instance in freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox. Computational processing of the EEG is often named quantitative electroencephalography (qEEG). Most of the cerebral signal observed in the scalp EEG falls in the range of 1–20 Hz (activity below or above this range is likely to be artifactual, under standard clinical recording techniques). Waveforms are subdivided into bandwidths known as alpha, beta, theta, and delta to signify the majority of the EEG used in clinical practice. Comparison of EEG bands The practice of using only whole numbers in the definitions comes from practical considerations in the days when only whole cycles could be counted on paper records. This leads to gaps in the definitions, as seen elsewhere on this page. The theoretical definitions have always been more carefully defined to include all frequencies. Unfortunately there is no agreement in standard reference works on what these ranges should be – values for the upper end of alpha and lower end of beta include 12, 13, 14 and 15. If the threshold is taken as 14 Hz, then the slowest beta wave has about the same duration as the longest spike (70 ms), which makes this the most useful value. Others sometimes divide the bands into sub-bands for the purposes of data analysis. Wave patterns Delta Waves is the frequency range up to 4 Hz. It tends to be the highest in amplitude and the slowest waves. It is seen normally in adults in slow-wave sleep. It is also seen normally in babies. It may occur focally with subcortical lesions and in general distribution with diffuse lesions, metabolic encephalopathy hydrocephalus or deep midline lesions. It is usually most prominent frontally in adults (e.g. FIRDA – frontal intermittent rhythmic delta) and posteriorly in children (e.g. OIRDA – occipital intermittent rhythmic delta). Theta is the frequency range from 4 Hz to 7 Hz. Theta is seen normally in young children. It may be seen in drowsiness or arousal in older children and adults; it can also be seen in meditation. Excess theta for age represents abnormal activity. It can be seen as a focal disturbance in focal subcortical lesions; it can be seen in generalized distribution in diffuse disorder or metabolic encephalopathy or deep midline disorders or some instances of hydrocephalus. On the contrary this range has been associated with reports of relaxed, meditative, and creative states. Alpha is the frequency range from 7 Hz to 13 Hz. Hans Berger named the first rhythmic EEG activity he observed the "alpha wave". This was the "posterior basic rhythm" (also called the "posterior dominant rhythm" or the "posterior alpha rhythm"), seen in the posterior regions of the head on both sides, higher in amplitude on the dominant side. It emerges with closing of the eyes and with relaxation, and attenuates with eye opening or mental exertion. The posterior basic rhythm is actually slower than 8 Hz in young children (therefore technically in the theta range). In addition to the posterior basic rhythm, there are other normal alpha rhythms such as the mu rhythm (alpha activity in the contralateral sensory and motor cortical areas) that emerges when the hands and arms are idle; and the "third rhythm" (alpha activity in the temporal or frontal lobes). 
Alpha can be abnormal; for example, an EEG that has diffuse alpha occurring in coma and is not responsive to external stimuli is referred to as "alpha coma". Beta is the frequency range from 14 Hz to about 30 Hz. It is seen usually on both sides in symmetrical distribution and is most evident frontally. Beta activity is closely linked to motor behavior and is generally attenuated during active movements. Low-amplitude beta with multiple and varying frequencies is often associated with active, busy or anxious thinking and active concentration. Rhythmic beta with a dominant set of frequencies is associated with various pathologies, such as Dup15q syndrome, and drug effects, especially benzodiazepines. It may be absent or reduced in areas of cortical damage. It is the dominant rhythm in patients who are alert or anxious or who have their eyes open. Gamma is the frequency range approximately 30–100 Hz. Gamma rhythms are thought to represent binding of different populations of neurons together into a network for the purpose of carrying out a certain cognitive or motor function. Mu range is 8–13 Hz and partly overlaps with other frequencies. It reflects the synchronous firing of motor neurons in rest state. Mu suppression is thought to reflect motor mirror neuron systems, because when an action is observed, the pattern extinguishes, possibly because the normal and mirror neuronal systems "go out of sync" and interfere with one other. "Ultra-slow" or "near-DC" activity is recorded using DC amplifiers in some research contexts. It is not typically recorded in a clinical context because the signal at these frequencies is susceptible to a number of artifacts. Some features of the EEG are transient rather than rhythmic. Spikes and sharp waves may represent seizure activity or interictal activity in individuals with epilepsy or a predisposition toward epilepsy. Other transient features are normal: vertex waves and sleep spindles are seen in normal sleep. Note that there are types of activity that are statistically uncommon, but not associated with dysfunction or disease. These are often referred to as "normal variants". The mu rhythm is an example of a normal variant. The normal electroencephalogram (EEG) varies by age. The prenatal EEG and neonatal EEG is quite different from the adult EEG. Fetuses in the third trimester and newborns display two common brain activity patterns: "discontinuous" and "trace alternant." "Discontinuous" electrical activity refers to sharp bursts of electrical activity followed by low frequency waves. "Trace alternant" electrical activity describes sharp bursts followed by short high amplitude intervals and usually indicates quiet sleep in newborns. The EEG in childhood generally has slower frequency oscillations than the adult EEG. The normal EEG also varies depending on state. The EEG is used along with other measurements (EOG, EMG) to define sleep stages in polysomnography. Stage I sleep (equivalent to drowsiness in some systems) appears on the EEG as drop-out of the posterior basic rhythm. There can be an increase in theta frequencies. Santamaria and Chiappa cataloged a number of the variety of patterns associated with drowsiness. Stage II sleep is characterized by sleep spindles – transient runs of rhythmic activity in the 12–14 Hz range (sometimes referred to as the "sigma" band) that have a frontal-central maximum. Most of the activity in Stage II is in the 3–6 Hz range. 
Stage III and IV sleep are defined by the presence of delta frequencies and are often referred to collectively as "slow-wave sleep". Stages I–IV comprise non-REM (or "NREM") sleep. The EEG in REM (rapid eye movement) sleep appears somewhat similar to the awake EEG. EEG under general anesthesia depends on the type of anesthetic employed. With halogenated anesthetics such as halothane, or intravenous agents such as propofol, a rapid (alpha or low beta), nonreactive EEG pattern is seen over most of the scalp, especially anteriorly; in some older terminology this was known as a WAR (widespread anterior rapid) pattern, contrasted with a WAIS (widespread slow) pattern associated with high doses of opiates. Anesthetic effects on EEG signals are beginning to be understood at the level of drug actions on different kinds of synapses and the circuits that allow synchronized neuronal activity (see: http://www.stanford.edu/group/maciverlab/).
Artifacts
Biological artifacts
Electrical signals detected along the scalp by an EEG that are of non-cerebral origin are called artifacts. EEG data is almost always contaminated by such artifacts. The amplitude of artifacts can be quite large relative to the amplitude of the cortical signals of interest. This is one of the reasons why it takes considerable experience to correctly interpret EEGs clinically. Some of the most common types of biological artifacts include:
eye-induced artifacts (including eye blinks, eye movements and extra-ocular muscle activity)
ECG (cardiac) artifacts
EMG (muscle activation)-induced artifacts
glossokinetic artifacts
skull defect artifacts, such as those found in patients who have undergone a craniotomy, which may be described as "breach effect" or "breach rhythm"
The most prominent eye-induced artifacts are caused by the potential difference between the cornea and retina, which is quite large compared to cerebral potentials. When the eyes and eyelids are completely still, this corneo-retinal dipole does not affect EEG. However, blinks occur several times per minute and eye movements occur several times per second. Eyelid movements, occurring mostly during blinking or vertical eye movements, elicit a large potential seen mostly in the difference between the electrooculography (EOG) channels above and below the eyes. An established explanation of this potential regards the eyelids as sliding electrodes that short-circuit the positively charged cornea to the extra-ocular skin. Rotation of the eyeballs, and consequently of the corneo-retinal dipole, increases the potential in electrodes towards which the eyes are rotated and decreases the potentials in the opposing electrodes. Eye movements called saccades also generate transient electromyographic potentials, known as saccadic spike potentials (SPs). The spectrum of these SPs overlaps the gamma band (see Gamma wave) and seriously confounds analysis of induced gamma-band responses, requiring tailored artifact correction approaches. Purposeful or reflexive eye blinking also generates electromyographic potentials, but more importantly there is reflexive movement of the eyeball during blinking that gives a characteristic artifactual appearance of the EEG (see Bell's phenomenon). Eyelid fluttering artifacts of a characteristic type were previously called Kappa rhythm (or Kappa waves). They are usually seen in the prefrontal leads, that is, just over the eyes, are sometimes seen with mental activity, and are usually in the theta (4–7 Hz) or alpha (7–14 Hz) range.
They were named because they were believed to originate from the brain. Later study revealed they were generated by rapid fluttering of the eyelids, sometimes so minute that it was difficult to see. They are in fact noise in the EEG reading, and should not technically be called a rhythm or wave. Therefore, current usage in electroencephalography refers to the phenomenon as an eyelid fluttering artifact, rather than a Kappa rhythm (or wave). Some of these artifacts can be useful in various applications. The EOG signals, for instance, can be used to detect and track eye-movements, which are very important in polysomnography, and is also in conventional EEG for assessing possible changes in alertness, drowsiness or sleep. ECG artifacts are quite common and can be mistaken for spike activity. Because of this, modern EEG acquisition commonly includes a one-channel ECG from the extremities. This also allows the EEG to identify cardiac arrhythmias that are an important differential diagnosis to syncope or other episodic/attack disorders. Glossokinetic artifacts are caused by the potential difference between the base and the tip of the tongue. Minor tongue movements can contaminate the EEG, especially in parkinsonian and tremor disorders. Environmental artifacts In addition to artifacts generated by the body, many artifacts originate from outside the body. Movement by the patient, or even just settling of the electrodes, may cause electrode pops, spikes originating from a momentary change in the impedance of a given electrode. Poor grounding of the EEG electrodes can cause significant 50 or 60 Hz artifact, depending on the local power system's frequency. A third source of possible interference can be the presence of an IV drip; such devices can cause rhythmic, fast, low-voltage bursts, which may be confused for spikes. Motion artifacts introduce signal noise that can mask the neural signal of interest. An EEG equipped phantom head can be placed on a motion platform and moved in a sinusoidal fashion. This contraption enabled researchers to study the effectiveness of motion artifact removal algorithms. Using the same model of phantom head and motion platform, it was determined that cable sway was a major attributor to motion artifacts. However, increasing the surface area of the electrode had a small but significant effect on reducing the artifact. This research was sponsored by the U.S. Army Research Laboratory as a part of the Cognition and Neuroergonomics Collaborative Technical Alliance. Artifact correction A simple approach to deal with artifacts is to simply remove epochs of data that exceed a certain threshold of contamination, for example, epochs with amplitudes higher than ±100 μV. However, this might lead to the loss of data that still contain artifact-free information. Another approach is to apply spatial and frequency band filters to remove artifacts, however, artifacts may overlap with the signal of interest in the spectral domain making this approach inefficient. Recently, independent component analysis (ICA) techniques have been used to correct or remove EEG contaminants. These techniques attempt to "unmix" the EEG signals into some number of underlying components. There are many source separation algorithms, often assuming various behaviors or natures of EEG. Regardless, the principle behind any particular method usually allow "remixing" only those components that would result in "clean" EEG by nullifying (zeroing) the weight of unwanted components. 
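The "unmix, nullify, remix" principle described above can be illustrated with a generic ICA implementation. The sketch below uses scikit-learn's FastICA on synthetic two-channel data and picks the artifact component by its kurtosis; it is a schematic of the idea only, since real pipelines work on many channels and identify artifactual components far more carefully, as the next paragraph notes.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Schematic ICA-based artifact removal: decompose the channels into independent
# components, zero the component judged artifactual, and reconstruct.
# Synthetic two-channel example; component selection here is deliberately crude.

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

neural = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)  # alpha-like source
blink = 5.0 * np.exp(-((t % 2.0) - 1.0) ** 2 / 0.005)                # peaky "blink" source

# Two scalp channels, each a different mixture of the two sources.
X = np.column_stack([neural + 0.8 * blink,
                     0.5 * neural + 1.0 * blink])

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)                    # unmix into components

artifact_idx = int(np.argmax(kurtosis(sources, axis=0)))   # blinks are the peakiest
sources_clean = sources.copy()
sources_clean[:, artifact_idx] = 0.0              # nullify the unwanted component

X_clean = ica.inverse_transform(sources_clean)    # remix the remaining components
print("channel-1 peak amplitude before/after:",
      np.abs(X[:, 0]).max(), np.abs(X_clean[:, 0]).max())
```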
Usually, artifact correction of EEG data, including the classification of artifactual ICA components, is performed by EEG experts. However, with the advent of EEG arrays with 64 to 256 electrodes and the increase in studies with large populations, manual artifact correction has become extremely time-consuming. To deal with this, as well as with the subjectivity of many artifact corrections, fully automated artifact rejection pipelines have also been developed. In the last few years, by comparing data from paralysed and unparalysed subjects, EEG contamination by muscle has been shown to be far more prevalent than had previously been realized, particularly in the gamma range above 20 Hz. However, the surface Laplacian has been shown to be effective in eliminating muscle artefact, particularly for central electrodes, which are further from the strongest contaminants. The combination of the surface Laplacian with automated techniques for removing muscle components using ICA proved particularly effective in a follow-up study.
Abnormal activity
Abnormal activity can broadly be separated into epileptiform and non-epileptiform activity. It can also be separated into focal or diffuse. Focal epileptiform discharges represent fast, synchronous potentials in a large number of neurons in a somewhat discrete area of the brain. These can occur as interictal activity, between seizures, and represent an area of cortical irritability that may be predisposed to producing epileptic seizures. Interictal discharges are not wholly reliable for determining whether a patient has epilepsy nor where his/her seizure might originate. (See focal epilepsy.) Generalized epileptiform discharges often have an anterior maximum, but these are seen synchronously throughout the entire brain. They are strongly suggestive of a generalized epilepsy. Focal non-epileptiform abnormal activity may occur over areas of the brain where there is focal damage of the cortex or white matter. It often consists of an increase in slow-frequency rhythms and/or a loss of normal higher-frequency rhythms. It may also appear as a focal or unilateral decrease in amplitude of the EEG signal. Diffuse non-epileptiform abnormal activity may manifest as diffuse abnormally slow rhythms or bilateral slowing of normal rhythms, such as the PBR. Intracortical encephalogram electrodes and sub-dural electrodes can be used in tandem to discriminate and discretize artifact from epileptiform and other severe neurological events. More advanced measures of abnormal EEG signals have also recently received attention as possible biomarkers for different disorders such as Alzheimer's disease.
Remote communication
The United States Army Research Office budgeted $4 million in 2009 to researchers at the University of California, Irvine to develop EEG processing techniques to identify correlates of imagined speech and intended direction, to enable soldiers on the battlefield to communicate via computer-mediated reconstruction of team members' EEG signals, in the form of understandable signals such as words (MURI: Synthetic Telepathy, cnslab.ss.uci.edu, retrieved 2011-07-19). Systems for decoding imagined speech from EEG have non-military applications, such as brain–computer interfaces.
EEG diagnostics
The Department of Defense (DoD), the Department of Veterans Affairs (VA), and the U.S. Army Research Laboratory (ARL) collaborated on EEG diagnostics in order to detect mild to moderate traumatic brain injury (mTBI) in combat soldiers. Between 2000 and 2012, seventy-five percent of U.S.
military operations brain injuries were classified mTBI. In response, the DoD pursued new technologies capable of rapid, accurate, non-invasive, and field-capable detection of mTBI to address this injury. Combat personnel often suffer PTSD and mTBI in correlation. Both conditions present with altered low-frequency brain wave oscillations. Altered brain waves from PTSD patients present with decreases in low-frequency oscillations, whereas, mTBI injuries are linked to increased low-frequency wave oscillations. Effective EEG diagnostics can help doctors accurately identify conditions and appropriately treat injuries in order to mitigate long-term effects. Traditionally, clinical evaluation of EEGs involved visual inspection. Instead of a visual assessment of brain wave oscillation topography, quantitative electroencephalography (qEEG), computerized algorithmic methodologies, analyzes a specific region of the brain and transforms the data into a meaningful “power spectrum” of the area. Accurately differentiating between mTBI and PTSD can significantly increase positive recovery outcomes for patients especially since long-term changes in neural communication can persist after an initial mTBI incident. Another common measurement made from EEG data is that of complexity measures such as Lempel-Ziv complexity, fractal dimension, and spectral flatness, which are associated with particular pathologies or pathology stages. Economics Inexpensive EEG devices exist for the low-cost research and consumer markets. Recently, a few companies have miniaturized medical grade EEG technology to create versions accessible to the general public. Some of these companies have built commercial EEG devices retailing for less than US$100. In 2004 OpenEEG released its ModularEEG as open source hardware. Compatible open source software includes a game for balancing a ball. In 2007 NeuroSky released the first affordable consumer based EEG along with the game NeuroBoy. This was also the first large scale EEG device to use dry sensor technology. In 2008 OCZ Technology developed device for use in video games relying primarily on electromyography. In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca. In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course. By far the best selling consumer based EEG to date. In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force. In 2009 Emotiv released the EPOC, a 14 channel EEG device. The EPOC is the first commercial BCI to not use dry sensor technology, requiring users to apply a saline solution to electrode pads (which need remoistening after an hour or two of use). In 2010, NeuroSky added a blink and electromyography function to the MindSet. In 2011, NeuroSky released the MindWave, an EEG device designed for educational purposes and games. The MindWave won the Guinness Book of World Records award for "Heaviest machine moved using a brain control interface". In 2012, a Japanese gadget project, neurowear, released Necomimi: a headset with motorized cat ears. The headset is a NeuroSky MindWave unit with two motors on the headband where a cat's ears might be. Slipcovers shaped like cat ears sit over the motors so that as the device registers emotional states the ears move to relate. 
For example, when relaxed, the ears fall to the sides and perk up when excited again. In 2014, OpenBCI released an eponymous open source brain-computer interface after a successful kickstarter campaign in 2013. The basic OpenBCI has 8 channels, expandable to 16, and supports EEG, EKG, and EMG. The OpenBCI is based on the Texas Instruments ADS1299 IC and the Arduino or PIC microcontroller, and costs $399 for the basic version. It uses standard metal cup electrodes and conductive paste. In 2015, Mind Solutions Inc released the smallest consumer BCI to date, the NeuroSync. This device functions as a dry sensor at a size no larger than a Bluetooth ear piece. In 2015, A Chinese-based company Macrotellect released BrainLink Pro and BrainLink Lite, a consumer grade EEG wearable product providing 20 brain fitness enhancement Apps on Apple and Android App Stores. In 2021, BioSerenity release the Neuronaute and Icecap a single-use disposable EEG headset that allows recording with equivalent quality to traditional cup electrodes. Future research The EEG has been used for many purposes besides the conventional uses of clinical diagnosis and conventional cognitive neuroscience. An early use was during World War II by the U.S. Army Air Corps to screen out pilots in danger of having seizures; long-term EEG recordings in epilepsy patients are still used today for seizure prediction. Neurofeedback remains an important extension, and in its most advanced form is also attempted as the basis of brain computer interfaces. The EEG is also used quite extensively in the field of neuromarketing. The EEG is altered by drugs that affect brain functions, the chemicals that are the basis for psychopharmacology. Berger's early experiments recorded the effects of drugs on EEG. The science of pharmaco-electroencephalography has developed methods to identify substances that systematically alter brain functions for therapeutic and recreational use. Honda is attempting to develop a system to enable an operator to control its Asimo robot using EEG, a technology it eventually hopes to incorporate into its automobiles. EEGs have been used as evidence in criminal trials in the Indian state of Maharashtra. Brain Electrical Oscillation Signature Profiling (BEOS), an EEG technique, was used in the trial of State of Maharashtra v. Sharma to show Sharma remembered using arsenic to poisoning her ex-fiancé, although the reliability and scientific basis of BEOS is disputed. A lot of research is currently being carried out in order to make EEG devices smaller, more portable and easier to use. So called "Wearable EEG" is based upon creating low power wireless collection electronics and ‘dry’ electrodes which do not require a conductive gel to be used. Wearable EEG aims to provide small EEG devices which are present only on the head and which can record EEG for days, weeks, or months at a time, as ear-EEG. Such prolonged and easy-to-use monitoring could make a step change in the diagnosis of chronic conditions such as epilepsy, and greatly improve the end-user acceptance of BCI systems. Research is also being carried out on identifying specific solutions to increase the battery lifetime of Wearable EEG devices through the use of the data reduction approach. For example, in the context of epilepsy diagnosis, data reduction has been used to extend the battery lifetime of Wearable EEG devices by intelligently selecting, and only transmitting, diagnostically relevant EEG data. 
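The data-reduction idea described above can be as simple as scoring short segments of the recording for likely diagnostic relevance and transmitting only those that exceed a threshold. The sketch below illustrates the concept with a crude line-length score on synthetic data; it is not the method used in the work referred to above, and the window length, threshold rule, and names are assumptions made for the example.

```python
import numpy as np

# Conceptual illustration of selective transmission for wearable EEG: score each
# short segment with a simple line-length measure (a cheap indicator often used
# in seizure-detection research) and keep only segments above a threshold.

fs = 256
rng = np.random.default_rng(5)
eeg = rng.normal(0, 10, fs * 120)                        # two minutes of background
t = np.arange(fs * 10) / fs
eeg[fs * 60:fs * 70] += 150 * np.sin(2 * np.pi * 6 * t)  # high-amplitude "event"

win = 2 * fs                                             # two-second segments
segments = eeg[: eeg.size - eeg.size % win].reshape(-1, win)

line_length = np.sum(np.abs(np.diff(segments, axis=1)), axis=1)
threshold = line_length.mean() + 2 * line_length.std()
keep = line_length > threshold                           # segments worth transmitting

print(f"transmitting {keep.sum()} of {keep.size} segments "
      f"({100 * win * keep.sum() / eeg.size:.1f}% of the data)")
```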
In research, EEG is currently often used in combination with machine learning. EEG data are pre-processed and then passed on to machine learning algorithms. These algorithms are then trained to recognize different diseases such as schizophrenia, epilepsy or dementia. Furthermore, they are increasingly used to study seizure detection. By using machine learning, the data can be analyzed automatically. In the long run this research is intended to build algorithms that support physicians in their clinical practice and to provide further insights into diseases. In this vein, complexity measures of EEG data are often calculated, such as Lempel-Ziv complexity, fractal dimension, and spectral flatness. It has been shown that combining or multiplying such measures can reveal previously hidden information in EEG data. EEG signals from musical performers were used to create instant compositions and one CD by the Brainwave Music Project, run at the Computer Music Center at Columbia University by Brad Garton and Dave Soldier. Similarly, an hour-long recording of the brainwaves of Ann Druyan was included on the Voyager Golden Record, launched on the Voyager probes in 1977, in case any extraterrestrial intelligence could decode her thoughts, which included what it was like to fall in love. See also References Further reading External links Tanzer, Oguz I. (2006), Numerical Modeling in Electro- and Magnetoencephalography, Ph.D. Thesis, Helsinki University of Technology, Finland. A tutorial on simulating and estimating EEG sources in Matlab A tutorial on analysis of ongoing, evoked, and induced neuronal activity: Power spectra, wavelet analysis, and coherence Electrophysiology Neurophysiology Neurotechnology Electrodiagnosis Brain–computer interfacing Emerging technologies Mathematics in medicine
11168
https://en.wikipedia.org/wiki/Fortran
Fortran
Fortran (; formerly FORTRAN) is a general-purpose, compiled imperative programming language that is especially suited to numeric computation and scientific computing. Fortran was originally developed by IBM in the 1950s for scientific and engineering applications, and subsequently came to dominate scientific computing. It has been in use for over six decades in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, geophysics, computational physics, crystallography and computational chemistry. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran has had numerous versions, each of which has added extensions while largely retaining compatibility with preceding versions. Successive versions have added support for structured programming and processing of character-based data (FORTRAN 77), array programming, modular programming and generic programming (Fortran 90), high performance Fortran (Fortran 95), object-oriented programming (Fortran 2003), concurrent programming (Fortran 2008), and native parallel computing capabilities (Coarray Fortran 2008/2018). Fortran's design was the basis for many other programming languages. Among the better-known is BASIC, which is based on FORTRAN II with a number of syntax cleanups, notably better logical structures, and other changes to work more easily in an interactive environment. Fortran was ranked 13 in the TIOBE index, a measure of the popularity of programming languages, climbing 29 positions from its ranking of 42 in August 2020. Naming The name FORTRAN is derived from Formula Translating System, Formula Translator, Formula Translation, or Formulaic Translation. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all-uppercase (FORTRAN 77 was the last version in which the Fortran character set included only uppercase letters). The official language standards for Fortran have referred to the language as "Fortran" with initial caps (rather than "FORTRAN" in all-uppercase) since Fortran 90. Origins In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Harold Stern, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952. A draft specification for The IBM Mathematical Formula Translating System was completed by November 1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. This was the first optimizing compiler, because customers were reluctant to use a high-level programming language unless its compiler could generate code with performance approaching that of hand-coded assembly language. While the community was skeptical that this new method could possibly outperform hand-coding, it reduced the number of programming statements necessary to operate a machine by a factor of 20, and quickly gained acceptance. John Backus said during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy. 
I didn't like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs." The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering. By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used cross-platform programming language. The development of Fortran paralleled the early evolution of compiler technology, and many advances in the theory and design of compilers were specifically motivated by the need to generate efficient code for Fortran programs. FORTRAN The initial release of FORTRAN for the IBM 704 contained 32 statements, including:
DIMENSION and EQUIVALENCE statements
Assignment statements
Three-way arithmetic IF statement, which passed control to one of three locations in the program depending on whether the result of the arithmetic statement was negative, zero, or positive
IF statements for checking exceptions (ACCUMULATOR OVERFLOW, QUOTIENT OVERFLOW, and DIVIDE CHECK); and IF statements for manipulating sense switches and sense lights
GO TO, computed GO TO, ASSIGN, and assigned GO TO
DO loops
Formatted I/O: FORMAT, READ, READ INPUT TAPE, WRITE, WRITE OUTPUT TAPE, PRINT, and PUNCH
Unformatted I/O: READ TAPE, READ DRUM, WRITE TAPE, and WRITE DRUM
Other I/O: END FILE, REWIND, and BACKSPACE
PAUSE, STOP, and CONTINUE
FREQUENCY statement (for providing optimization hints to the compiler).
The arithmetic IF statement was reminiscent of (but not readily implementable by) a three-way comparison instruction (CAS—Compare Accumulator with Storage) available on the 704. The arithmetic IF statement provided the only way to compare numbers—by testing their difference, with an attendant risk of overflow. This deficiency was later overcome by "logical" facilities introduced in FORTRAN IV. The FREQUENCY statement was used originally (and optionally) to give branch probabilities for the three branch cases of the arithmetic IF statement. The first FORTRAN compiler used this weighting to perform at compile time a Monte Carlo simulation of the generated code, the results of which were used to optimize the placement of basic blocks in memory—a very sophisticated optimization for its time. The Monte Carlo technique is documented in Backus et al.'s paper on this original implementation, The FORTRAN Automatic Coding System: The fundamental unit of program is the basic block; a basic block is a stretch of program which has one entry point and one exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by running the program once in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided. 
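As a brief illustration of the two statements discussed above, the hypothetical fragment below (not taken from the original IBM materials; the variable names and statement labels are invented for this example) branches three ways on the sign of X - Y and uses a FREQUENCY card to hint that the positive branch is expected most often. The FREQUENCY syntax shown follows the form described in early 704 FORTRAN documentation and should be read as an assumption rather than a verified listing.

C     THREE-WAY BRANCH ON THE SIGN OF X - Y.  CONTROL PASSES TO
C     STATEMENT 10, 20 OR 30 WHEN THE EXPRESSION IS NEGATIVE,
C     ZERO OR POSITIVE, RESPECTIVELY.
      FREQUENCY 5 (1, 1, 8)
    5 IF (X - Y) 10, 20, 30
   10 K = -1
      GO TO 40
   20 K = 0
      GO TO 40
   30 K = 1
   40 STOP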
Many years later, the FREQUENCY statement had no effect on the code, and was treated as a comment statement, since the compilers no longer did this kind of compile-time simulation. A similar fate has befallen compiler hints in several other programming languages, e.g. the register keyword in C. The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code on its console. That code could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem. Later, an error-handling subroutine to handle user errors such as division by zero, developed by NASA, was incorporated, informing users of which line of code contained the error. Fixed layout and punched cards Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards, one line to a card. The resulting deck of cards would be fed into a card reader to be compiled. Punched card codes included no lower-case letters or many special characters, and special versions of the IBM 026 keypunch were offered that would correctly print the re-purposed special characters used in FORTRAN. Reflecting punched card input practice, Fortran programs were originally written in a fixed-column format, with the first 72 columns read into twelve 36-bit words. A letter "C" in column 1 caused the entire card to be treated as a comment and ignored by the compiler. Otherwise, the columns of the card were divided into four fields: 1 to 5 were the label field: a sequence of digits here was taken as a label for use in DO or control statements such as GO TO and IF, or to identify a FORMAT statement referred to in a WRITE or READ statement. Leading zeros were ignored and 0 was not a valid label number. 6 was a continuation field: a character other than a blank or a zero here caused the card to be taken as a continuation of the statement on the prior card. The continuation cards were usually numbered 1, 2, etc. and the starting card might therefore have zero in its continuation column—which is not a continuation of its preceding card. 7 to 72 served as the statement field. 73 to 80 were ignored (the IBM 704's card reader only used 72 columns). Columns 73 to 80 could therefore be used for identification information, such as punching a sequence number or text, which could be used to re-order cards if a stack of cards was dropped; though in practice this was reserved for stable, production programs. An IBM 519 could be used to copy a program deck and add sequence numbers. Some early compilers, e.g., the IBM 650's, had additional restrictions due to limitations on their card readers. Keypunches could be programmed to tab to column 7 and skip out after column 72. Later compilers relaxed most fixed-format restrictions, and the requirement was eliminated in the Fortran 90 standard. Within the statement field, whitespace characters (blanks) were ignored outside a text literal. This allowed omitting spaces between tokens for brevity or including spaces within identifiers for clarity. For example, AVG OF X was a valid identifier, equivalent to AVGOFX, and 101010DO101I=1,101 was a valid statement, equivalent to 10101 DO 101 I = 1, 101 because the zero in column 6 is treated as if it were a space (!), while 101010DO101I=1.101 was instead 10101 DO101I = 1.101, the assignment of 1.101 to a variable called DO101I. Note the slight visual difference between a comma and a period. 
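The card layout described above can be visualized with a short schematic fragment in FORTRAN 66/77 style; the statement itself and the names BASE, HEIGHT, AREA and N are invented for illustration. The comment card carries a "C" in column 1, the label 100 sits in the label field (columns 1 to 5), executable text starts in column 7, and the non-blank character in column 6 of the third card marks it as a continuation of the preceding statement.

C     A COMMENT CARD IS MARKED BY A "C" IN COLUMN 1.
  100 FORMAT (I5, F10.2)
      AREA = 0.5 * BASE *
     1        HEIGHT
      WRITE (6, 100) N, AREA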
Hollerith strings, originally allowed only in FORMAT and DATA statements, were prefixed by a character count and the letter H (e.g., 5HHELLO), allowing blanks to be retained within the character string. Miscounts were a problem. Evolution FORTRAN II IBM's FORTRAN II appeared in 1958. The main enhancement was to support procedural programming by allowing user-written subroutines and functions which returned values with parameters passed by reference. The COMMON statement provided a way for subroutines to access common (or global) variables. Six new statements were introduced: SUBROUTINE, FUNCTION, and END; CALL and RETURN; and COMMON. Over the next few years, FORTRAN II would also add support for the DOUBLE PRECISION and COMPLEX data types. Early FORTRAN compilers supported no recursion in subroutines. Early computer architectures supported no concept of a stack, and when they did directly support subroutine calls, the return location was often stored in one fixed location adjacent to the subroutine code (e.g. the IBM 1130) or a specific machine register (IBM 360 et seq), which only allows recursion if a stack is maintained by software and the return address is stored on the stack before the call is made and restored after the call returns. Although not specified in FORTRAN 77, many F77 compilers supported recursion as an option, and the Burroughs mainframes, designed with recursion built-in, did so by default. It became a standard in Fortran 90 via the new keyword RECURSIVE. Simple FORTRAN II program This program, for Heron's formula, reads data on a tape reel containing three 5-digit integers A, B, and C as input. There are no "type" declarations available: variables whose name starts with I, J, K, L, M, or N are "fixed-point" (i.e. integers), otherwise floating-point. Since integers are to be processed in this example, the names of the variables start with the letter "I". The name of a variable must start with a letter and can continue with both letters and digits, up to a limit of six characters in FORTRAN II. If A, B, and C cannot represent the sides of a triangle in plane geometry, then the program's execution will end with an error code of "STOP 1". Otherwise, an output line will be printed showing the input values for A, B, and C, followed by the computed AREA of the triangle as a floating-point number occupying ten spaces along the line of output and showing 2 digits after the decimal point, the .2 in F10.2 of the FORMAT statement with label 601.
C AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C INPUT - TAPE READER UNIT 5, INTEGER INPUT
C OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
C INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
      READ INPUT TAPE 5, 501, IA, IB, IC
  501 FORMAT (3I5)
C IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
      IF (IA) 777, 777, 701
  701 IF (IB) 777, 777, 702
  702 IF (IC) 777, 777, 703
  703 IF (IA+IB-IC) 777, 777, 704
  704 IF (IA+IC-IB) 777, 777, 705
  705 IF (IB+IC-IA) 777, 777, 799
  777 STOP 1
C USING HERON'S FORMULA WE CALCULATE THE
C AREA OF THE TRIANGLE
  799 S = FLOATF (IA + IB + IC) / 2.0
      AREA = SQRTF( S * (S - FLOATF(IA)) * (S - FLOATF(IB)) *
     +   (S - FLOATF(IC)))
      WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA
  601 FORMAT (4H A= ,I5,5H B= ,I5,5H C= ,I5,8H AREA= ,F10.2,
     +    13H SQUARE UNITS)
      STOP
      END
FORTRAN III IBM also developed a FORTRAN III in 1958 that allowed for inline assembly code among other features; however, this version was never released as a product. 
Like the 704 FORTRAN and FORTRAN II, FORTRAN III included machine-dependent features that made code written in it unportable from machine to machine. Early versions of FORTRAN provided by other vendors suffered from the same disadvantage. IBM 1401 FORTRAN FORTRAN was provided for the IBM 1401 computer by an innovative 63-phase compiler that ran entirely in its core memory of only 8000 (six-bit) characters. The compiler could be run from tape, or from a 2200-card deck; it used no further tape or disk storage. It kept the program in memory and loaded overlays that gradually transformed it, in place, into executable form, as described by Haines. This article was reprinted, edited, in both editions of Anatomy of a Compiler and in the IBM manual "Fortran Specifications and Operating Procedures, IBM 1401". The executable form was not entirely machine language; rather, floating-point arithmetic, sub-scripting, input/output, and function references were interpreted, preceding UCSD Pascal P-code by two decades. IBM later provided a FORTRAN IV compiler for the 1400 series of computers. FORTRAN IV IBM began development of FORTRAN IV starting in 1961, as a result of customer demands. FORTRAN IV removed the machine-dependent features of FORTRAN II (such as READ INPUT TAPE), while adding new features such as a LOGICAL data type, logical Boolean expressions and the logical IF statement as an alternative to the arithmetic IF statement. FORTRAN IV was eventually released in 1962, first for the IBM 7030 ("Stretch") computer, followed by versions for the IBM 7090, IBM 7094, and later for the IBM 1401 in 1966. By 1965, FORTRAN IV was supposed to be compliant with the standard being developed by the American Standards Association X3.4.3 FORTRAN Working Group. Between 1966 and 1968, IBM offered several FORTRAN IV compilers for its System/360, each named by letters that indicated the minimum amount of memory the compiler needed to run. The letters (F, G, H) matched the codes used with System/360 model numbers to indicate memory size, each letter increment being a factor of two larger:
1966 : FORTRAN IV F for DOS/360 (64K bytes)
1966 : FORTRAN IV G for OS/360 (128K bytes)
1968 : FORTRAN IV H for OS/360 (256K bytes)
At about this time FORTRAN IV had started to become an important educational tool and implementations such as the University of Waterloo's WATFOR and WATFIV were created to simplify the complex compile and link processes of earlier compilers. FORTRAN 66 Perhaps the most significant development in the early history of FORTRAN was the decision by the American Standards Association (now American National Standards Institute (ANSI)) to form a committee sponsored by BEMA, the Business Equipment Manufacturers Association, to develop an American Standard Fortran. The resulting two standards, approved in March 1966, defined two languages, FORTRAN (based on FORTRAN IV, which had served as a de facto standard), and Basic FORTRAN (based on FORTRAN II, but stripped of its machine-dependent features). The FORTRAN defined by the first standard, officially denoted X3.9-1966, became known as FORTRAN 66 (although many continued to term it FORTRAN IV, the language on which the standard was largely based). FORTRAN 66 effectively became the first industry-standard version of FORTRAN. 
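For comparison with the arithmetic IF shown earlier, the logical IF introduced with FORTRAN IV (and carried into FORTRAN 66) conditionally executes a single statement rather than branching three ways. A minimal fragment, with invented variable names, is sketched below.

C     LOGICAL IF: EXECUTE ONE STATEMENT WHEN THE CONDITION IS TRUE.
      IF (X .GT. Y) BIG = X
      IF (X .LE. Y) BIG = Y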
FORTRAN 66 included:
Main program, SUBROUTINE, FUNCTION, and BLOCK DATA program units
INTEGER, REAL, DOUBLE PRECISION, COMPLEX, and LOGICAL data types
COMMON, DIMENSION, and EQUIVALENCE statements
DATA statement for specifying initial values
Intrinsic and EXTERNAL (e.g., library) functions
Assignment statement
GO TO, computed GO TO, assigned GO TO, and ASSIGN statements
Logical IF and arithmetic (three-way) IF statements
DO loop statement
READ, WRITE, BACKSPACE, REWIND, and END FILE statements for sequential I/O
FORMAT statement and assigned format
CALL, RETURN, PAUSE, and STOP statements
Hollerith constants in DATA and FORMAT statements, and as arguments to procedures
Identifiers of up to six characters in length
Comment lines
END line
FORTRAN 77 After the release of the FORTRAN 66 standard, compiler vendors introduced several extensions to Standard Fortran, prompting ANSI committee X3J3 in 1969 to begin work on revising the 1966 standard, under sponsorship of CBEMA, the Computer Business Equipment Manufacturers Association (formerly BEMA). Final drafts of this revised standard circulated in 1977, leading to formal approval of the new FORTRAN standard in April 1978. The new standard, called FORTRAN 77 and officially denoted X3.9-1978, added a number of significant features to address many of the shortcomings of FORTRAN 66:
Block IF and END IF statements, with optional ELSE and ELSE IF clauses, to provide improved language support for structured programming
DO loop extensions, including parameter expressions, negative increments, and zero trip counts
OPEN, CLOSE, and INQUIRE statements for improved I/O capability
Direct-access file I/O
IMPLICIT statement, to override implicit conventions that undeclared variables are INTEGER if their name begins with I, J, K, L, M, or N (and REAL otherwise)
CHARACTER data type, replacing Hollerith strings with vastly expanded facilities for character input and output and processing of character-based data
PARAMETER statement for specifying constants
SAVE statement for persistent local variables
Generic names for intrinsic functions (e.g. SQRT also accepts arguments of other types, such as DOUBLE PRECISION or COMPLEX)
A set of intrinsics (LGE, LGT, LLE, LLT) for lexical comparison of strings, based upon the ASCII collating sequence. (These ASCII functions were demanded by the U.S. Department of Defense, in their conditional approval vote.)
In this revision of the standard, a number of features were removed or altered in a manner that might invalidate formerly standard-conforming programs. (Removal was the only allowable alternative to X3J3 at that time, since the concept of "deprecation" was not yet available for ANSI standards.) While most of the 24 items in the conflict list (see Appendix A2 of X3.9-1978) addressed loopholes or pathological cases permitted by the prior standard but rarely used, a small number of specific capabilities were deliberately removed, such as:
Hollerith constants and Hollerith data, such as GREET = 12HHELLO THERE!
Reading into an H edit (Hollerith field) descriptor in a FORMAT specification
Overindexing of array bounds by subscripts DIMENSION A(10,5) Y= A(11,1)
Transfer of control out of and back into the range of a DO loop (also known as "Extended Range")
Transition to ANSI Standard Fortran The development of a revised standard to succeed FORTRAN 77 would be repeatedly delayed as the standardization process struggled to keep up with rapid changes in computing and programming practice. In the meantime, as the "Standard FORTRAN" for nearly fifteen years, FORTRAN 77 would become the historically most important dialect. An important practical extension to FORTRAN 77 was the release of MIL-STD-1753 in 1978. This specification, developed by the U.S. 
Department of Defense, standardized a number of features implemented by most FORTRAN 77 compilers but not included in the ANSI FORTRAN 77 standard. These features would eventually be incorporated into the Fortran 90 standard.
DO WHILE and END DO statements
INCLUDE statement
IMPLICIT NONE variant of the IMPLICIT statement
Bit manipulation intrinsic functions, based on similar functions included in Industrial Real-Time Fortran (ANSI/ISA S61.1 (1976))
The IEEE 1003.9 POSIX Standard, released in 1991, provided a simple means for FORTRAN 77 programmers to issue POSIX system calls. Over 100 calls were defined in the document allowing access to POSIX-compatible process control, signal handling, file system control, device control, procedure pointing, and stream I/O in a portable manner. Fortran 90 The much-delayed successor to FORTRAN 77, informally known as Fortran 90 (and prior to that, Fortran 8X), was finally released as ISO/IEC standard 1539:1991 in 1991 and an ANSI Standard in 1992. In addition to changing the official spelling from FORTRAN to Fortran, this major revision added many new features to reflect the significant changes in programming practice that had evolved since the 1978 standard:
Free-form source input, also with lowercase Fortran keywords
Identifiers up to 31 characters in length (in the previous standard, it was only six characters)
Inline comments
Ability to operate on arrays (or array sections) as a whole, thus greatly simplifying math and engineering computations: whole, partial and masked array assignment statements and array expressions, such as X(1:N)=R(1:N)*COS(A(1:N))
WHERE statement for selective array assignment
array-valued constants and expressions, user-defined array-valued functions and array constructors
RECURSIVE procedures
Modules, to group related procedures and data together, and make them available to other program units, including the capability to limit the accessibility to only specific parts of the module
A vastly improved argument-passing mechanism, allowing interfaces to be checked at compile time
User-written interfaces for generic procedures
Operator overloading
Derived (structured) data types
New data type declaration syntax, to specify the data type and other attributes of variables
Dynamic memory allocation by means of the ALLOCATABLE attribute and the ALLOCATE and DEALLOCATE statements
POINTER attribute, pointer assignment, and NULLIFY statement to facilitate the creation and manipulation of dynamic data structures
Structured looping constructs, with an END DO statement for loop termination, and EXIT and CYCLE statements for terminating normal loop iterations in an orderly way
SELECT . . . CASE construct for multi-way selection
Portable specification of numerical precision under the user's control
New and enhanced intrinsic procedures.
Obsolescence and deletions Unlike the prior revision, Fortran 90 removed no features. Any standard-conforming FORTRAN 77 program was also standard-conforming under Fortran 90, and either standard should have been usable to define its behavior. A small set of features were identified as "obsolescent" and were expected to be removed in a future standard. All of the functionalities of these early-version features can be performed by newer Fortran features. Some are kept to simplify porting of old programs but many have now been deleted. "Hello, World!" example
program helloworld
   print *, "Hello, World!"
end program helloworld
Fortran 95 Fortran 95, published officially as ISO/IEC 1539-1:1997, was a minor revision, mostly to resolve some outstanding issues from the Fortran 90 standard. 
Nevertheless, Fortran 95 also added a number of extensions, notably from the High Performance Fortran specification:
FORALL and nested WHERE constructs to aid vectorization
User-defined PURE and ELEMENTAL procedures
Default initialization of derived type components, including pointer initialization
Expanded the ability to use initialization expressions for data objects
Initialization of pointers to NULL()
Clearly defined that ALLOCATABLE arrays are automatically deallocated when they go out of scope.
A number of intrinsic functions were extended (for example a DIM argument was added to the MAXLOC intrinsic). Several features noted in Fortran 90 to be "obsolescent" were removed from Fortran 95:
DO statements using REAL and DOUBLE PRECISION index variables
Branching to an END IF statement from outside its block
PAUSE statement
ASSIGN statement and assigned GO TO statement, and assigned format specifiers
H Hollerith edit descriptor.
An important supplement to Fortran 95 was the ISO technical report TR-15581: Enhanced Data Type Facilities, informally known as the Allocatable TR. This specification defined enhanced use of ALLOCATABLE arrays, prior to the availability of fully Fortran 2003-compliant Fortran compilers. Such uses include ALLOCATABLE arrays as derived type components, in procedure dummy argument lists, and as function return values. (ALLOCATABLE arrays are preferable to POINTER-based arrays because ALLOCATABLE arrays are guaranteed by Fortran 95 to be deallocated automatically when they go out of scope, eliminating the possibility of memory leakage. In addition, elements of allocatable arrays are contiguous, and aliasing is not an issue for optimization of array references, allowing compilers to generate faster code than in the case of pointers.) Another important supplement to Fortran 95 was the ISO technical report TR-15580: Floating-point exception handling, informally known as the IEEE TR. This specification defined support for IEEE floating-point arithmetic and floating-point exception handling. Conditional compilation and varying length strings In addition to the mandatory "Base language" (defined in ISO/IEC 1539-1 : 1997), the Fortran 95 language also includes two optional modules:
Varying length character strings (ISO/IEC 1539-2 : 2000)
Conditional compilation (ISO/IEC 1539-3 : 1998)
which, together, compose the multi-part International Standard (ISO/IEC 1539). According to the standards developers, "the optional parts describe self-contained features which have been requested by a substantial body of users and/or implementors, but which are not deemed to be of sufficient generality for them to be required in all standard-conforming Fortran compilers." Nevertheless, if a standard-conforming Fortran does provide such options, then they "must be provided in accordance with the description of those facilities in the appropriate Part of the Standard". Modern Fortran The language defined by the twenty-first century standards, in particular because of its incorporation of Object-oriented programming support and subsequently Coarray Fortran, is often referred to as 'Modern Fortran', and the term is increasingly used in the literature. Fortran 2003 Fortran 2003, officially published as ISO/IEC 1539-1:2004, is a major revision introducing many new features. A comprehensive summary of the new features of Fortran 2003 is available at the Fortran Working Group (ISO/IEC JTC1/SC22/WG5) official Web site. 
From that article, the major enhancements for this revision include: Derived type enhancements: parameterized derived types, improved control of accessibility, improved structure constructors, and finalizers Object-oriented programming support: type extension and inheritance, polymorphism, dynamic type allocation, and type-bound procedures, providing complete support for abstract data types Data manipulation enhancements: allocatable components (incorporating TR 15581), deferred type parameters, attribute, explicit type specification in array constructors and allocate statements, pointer enhancements, extended initialization expressions, and enhanced intrinsic procedures Input/output enhancements: asynchronous transfer, stream access, user specified transfer operations for derived types, user specified control of rounding during format conversions, named constants for preconnected units, the statement, regularization of keywords, and access to error messages Procedure pointers Support for IEEE floating-point arithmetic and floating-point exception handling (incorporating TR 15580) Interoperability with the C programming language Support for international usage: access to ISO 10646 4-byte characters and choice of decimal or comma in numeric formatted input/output Enhanced integration with the host operating system: access to command line arguments, environment variables, and processor error messages An important supplement to Fortran 2003 was the ISO technical report TR-19767: Enhanced module facilities in Fortran. This report provided sub-modules, which make Fortran modules more similar to Modula-2 modules. They are similar to Ada private child sub-units. This allows the specification and implementation of a module to be expressed in separate program units, which improves packaging of large libraries, allows preservation of trade secrets while publishing definitive interfaces, and prevents compilation cascades. Fortran 2008 ISO/IEC 1539-1:2010, informally known as Fortran 2008, was approved in September 2010. As with Fortran 95, this is a minor upgrade, incorporating clarifications and corrections to Fortran 2003, as well as introducing some new capabilities. The new capabilities include: Sub-modules – additional structuring facilities for modules; supersedes ISO/IEC TR 19767:2005 Coarray Fortran – a parallel execution model The DO CONCURRENT construct – for loop iterations with no interdependencies The CONTIGUOUS attribute – to specify storage layout restrictions The BLOCK construct – can contain declarations of objects with construct scope Recursive allocatable components – as an alternative to recursive pointers in derived types The Final Draft international Standard (FDIS) is available as document N1830. A supplement to Fortran 2008 is the International Organization for Standardization (ISO) Technical Specification (TS) 29113 on Further Interoperability of Fortran with C, which has been submitted to ISO in May 2012 for approval. The specification adds support for accessing the array descriptor from C and allows ignoring the type and rank of arguments. Fortran 2018 The latest revision of the language (Fortran 2018) was earlier referred to as Fortran 2015. It is a significant revision and was released on 28 November 2018. 
Fortran 2018 incorporates two previously published Technical Specifications: ISO/IEC TS 29113:2012 Further Interoperability with C ISO/IEC TS 18508:2015 Additional Parallel Features in Fortran Additional changes and new features include support for ISO/IEC/IEEE 60559:2011 (the version of the IEEE floating-point standard before the latest minor revision IEEE ), hexadecimal input/output, IMPLICIT NONE enhancements and other changes. Language features A full description of the Fortran language features brought by Fortran 95 is covered in the related article, Fortran 95 language features. The language versions defined by later standards are often referred to collectively as 'Modern Fortran' and are described in the literature. Science and engineering Although a 1968 journal article by the authors of BASIC already described FORTRAN as "old-fashioned", programs have been written in Fortran for over six decades and there is a vast body of Fortran software in daily use throughout the scientific and engineering communities. Jay Pasachoff wrote in 1984 that "physics and astronomy students simply have to learn FORTRAN. So much exists in FORTRAN that it seems unlikely that scientists will change to Pascal, Modula-2, or whatever." In 1993, Cecil E. Leith called FORTRAN the "mother tongue of scientific computing", adding that its replacement by any other possible language "may remain a forlorn hope". It is the primary language for some of the most intensive super-computing tasks, such as in astronomy, climate modeling, computational chemistry, computational economics, computational fluid dynamics, computational physics, data analysis, hydrological modeling, numerical linear algebra and numerical libraries (LAPACK, IMSL and NAG), optimization, satellite simulation, structural engineering, and weather prediction. Many of the floating-point benchmarks to gauge the performance of new computer processors, such as the floating-point components of the SPEC benchmarks (e.g., CFP2006, CFP2017) are written in Fortran. Math algorithms are well documented in Numerical Recipes. Apart from this, more modern codes in computational science generally use large program libraries, such as METIS for graph partitioning, PETSc or Trilinos for linear algebra capabilities, deal.II or FEniCS for mesh and finite element support, and other generic libraries. Since the early 2000s, many of the widely used support libraries have also been implemented in C and more recently, in C++. On the other hand, high-level languages such as MATLAB, Python, and R have become popular in particular areas of computational science. Consequently, a growing fraction of scientific programs is also written in such higher-level scripting languages. For this reason, facilities for inter-operation with C were added to Fortran 2003 and enhanced by the ISO/IEC technical specification 29113, which was incorporated into Fortran 2018 to allow more flexible interoperation with other programming languages. Software for NASA probes Voyager 1 and Voyager 2 was originally written in FORTRAN 5, and later ported to FORTRAN 77. , some of the software is still written in Fortran and some has been ported to C. Portability Portability was a problem in the early days because there was no agreed upon standard—not even IBM's reference manual—and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability. 
The 1966 standard provided a reference syntax and semantics, but vendors continued to provide incompatible extensions. Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such as The PFORT Verifier, it was not until after the 1977 standard, when the National Bureau of Standards (now NIST) published FIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions. Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic. This was addressed by Fox et al. in the context of the 1966 standard by the PORT library. The ideas therein became widely used, and were eventually incorporated into the 1990 standard by way of intrinsic inquiry functions. The widespread (now almost universal) adoption of the IEEE 754 standard for binary floating-point arithmetic has essentially removed this problem. Access to the computing environment (e.g., the program's command line, environment variables, textual explanation of error conditions) remained a problem until it was addressed by the 2003 standard. Large collections of library software that could be described as being loosely related to engineering and scientific calculations, such as graphics libraries, have been written in C, and therefore access to them presented a portability problem. This has been addressed by incorporation of C interoperability into the 2003 standard. It is now possible (and relatively easy) to write an entirely portable program in Fortran, even without recourse to a preprocessor. Obsolete variants Until the Fortran 66 standard was developed, each compiler supported its own variant of Fortran. Some were more divergent from the mainstream than others. The first Fortran compiler set a high standard of efficiency for compiled code. This goal made it difficult to create a compiler so it was usually done by the computer manufacturers to support hardware sales. This left an important niche: compilers that were fast and provided good diagnostics for the programmer (often a student). Examples include Watfor, Watfiv, PUFFT, and on a smaller scale, FORGO, Wits Fortran, and Kingston Fortran 2. Fortran 5 was marketed by Data General Corp in the late 1970s and early 1980s, for the Nova, Eclipse, and MV line of computers. It had an optimizing compiler that was quite good for minicomputers of its time. The language most closely resembles FORTRAN 66. FORTRAN V was distributed by Control Data Corporation in 1968 for the CDC 6600 series. The language was based upon FORTRAN IV. Univac also offered a compiler for the 1100 series known as FORTRAN V. A spinoff of Univac Fortran V was Athena FORTRAN. Specific variants produced by the vendors of high-performance scientific computers (e.g., Burroughs, Control Data Corporation (CDC), Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays. For example, one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine code instructions to keep multiple internal arithmetic units busy simultaneously. 
Another example is CFD, a special variant of FORTRAN designed specifically for the ILLIAC IV supercomputer, running at NASA's Ames Research Center. IBM Research Labs also developed an extended FORTRAN-based language called VECTRAN for processing vectors and matrices. Object-Oriented Fortran was an object-oriented extension of Fortran, in which data items can be grouped into objects, which can be instantiated and executed in parallel. It was available for Sun, Iris, iPSC, and nCUBE, but is no longer supported. Such machine-specific extensions have either disappeared over time or have had elements incorporated into the main standards. The major remaining extension is OpenMP, which is a cross-platform extension for shared memory programming. One new extension, Coarray Fortran, is intended to support parallel programming. FOR TRANSIT was the name of a reduced version of the IBM 704 FORTRAN language, which was implemented for the IBM 650, using a translator program developed at Carnegie in the late 1950s. The following comment appears in the IBM Reference Manual (FOR TRANSIT Automatic Coding System C28-4038, Copyright 1957, 1959 by IBM): The FORTRAN system was designed for a more complex machine than the 650, and consequently some of the 32 statements found in the FORTRAN Programmer's Reference Manual are not acceptable to the FOR TRANSIT system. In addition, certain restrictions to the FORTRAN language have been added. However, none of these restrictions make a source program written for FOR TRANSIT incompatible with the FORTRAN system for the 704. The permissible statements were: Arithmetic assignment statements, e.g., a = b GO TO (n1, n2, ..., nm), i IF (a) n1, n2, n3 DO n i = m1, m2 Up to ten subroutines could be used in one program. FOR TRANSIT statements were limited to columns 7 through 56, only. Punched cards were used for input and output on the IBM 650. Three passes were required to translate source code to the "IT" language, then to compile the IT statements into SOAP assembly language, and finally to produce the object program, which could then be loaded into the machine to run the program (using punched cards for data input, and outputting results onto punched cards). Two versions existed for the 650s with a 2000 word memory drum: FOR TRANSIT I (S) and FOR TRANSIT II, the latter for machines equipped with indexing registers and automatic floating-point decimal (bi-quinary) arithmetic. Appendix A of the manual included wiring diagrams for the IBM 533 card reader/punch control panel. Fortran-based languages Prior to FORTRAN 77, a number of preprocessors were commonly used to provide a friendlier language, with the advantage that the preprocessed code could be compiled on any machine with a standard FORTRAN compiler. These preprocessors would typically support structured programming, variable names longer than six characters, additional data types, conditional compilation, and even macro capabilities. Popular preprocessors included FLECS, iftran, MORTRAN, SFtran, S-Fortran, Ratfor, and Ratfiv. Ratfor and Ratfiv, for example, implemented a C-like language, outputting preprocessed code in standard FORTRAN 66. Despite advances in the Fortran language, preprocessors continue to be used for conditional compilation and macro substitution. One of the earliest versions of FORTRAN, introduced in the '60s, was popularly used in colleges and universities. Developed, supported, and distributed by the University of Waterloo, WATFOR was based largely on FORTRAN IV. 
A student using WATFOR could submit their batch FORTRAN job and, if there were no syntax errors, the program would move straight to execution. This simplification allowed students to concentrate on their program's syntax and semantics, or execution logic flow, rather than dealing with submission Job Control Language (JCL), the compile/link-edit/execution successive process(es), or other complexities of the mainframe/minicomputer environment. A down side to this simplified environment was that WATFOR was not a good choice for programmers needing the expanded abilities of their host processor(s), e.g., WATFOR typically had very limited access to I/O devices. WATFOR was succeeded by WATFIV and its later versions. (line programming) LRLTRAN was developed at the Lawrence Radiation Laboratory to provide support for vector arithmetic and dynamic storage, among other extensions to support systems programming. The distribution included the LTSS operating system. The Fortran-95 Standard includes an optional Part 3 which defines an optional conditional compilation capability. This capability is often referred to as "CoCo". Many Fortran compilers have integrated subsets of the C preprocessor into their systems. SIMSCRIPT is an application specific Fortran preprocessor for modeling and simulating large discrete systems. The F programming language was designed to be a clean subset of Fortran 95 that attempted to remove the redundant, unstructured, and deprecated features of Fortran, such as the statement. F retains the array features added in Fortran 90, and removes control statements that were made obsolete by structured programming constructs added to both FORTRAN 77 and Fortran 90. F is described by its creators as "a compiled, structured, array programming language especially well suited to education and scientific computing". Essential Lahey Fortran 90 (ELF90) was a similar subset. Lahey and Fujitsu teamed up to create Fortran for the Microsoft .NET Framework. Silverfrost FTN95 is also capable of creating .NET code. Code examples The following program illustrates dynamic memory allocation and array-based operations, two features introduced with Fortran 90. Particularly noteworthy is the absence of loops and / statements in manipulating the array; mathematical operations are applied to the array as a whole. Also apparent is the use of descriptive variable names and general code formatting that conform with contemporary programming style. This example computes an average over data entered interactively. program average ! Read in some numbers and take the average ! As written, if there are no data points, an average of zero is returned ! While this may not be desired behavior, it keeps this example simple implicit none real, dimension(:), allocatable :: points integer :: number_of_points real :: average_points, positive_average, negative_average average_points = 0.0 positive_average = 0.0 negative_average = 0.0 write (*,*) "Input number of points to average:" read (*,*) number_of_points allocate (points(number_of_points)) write (*,*) "Enter the points to average:" read (*,*) points ! Take the average by summing points and dividing by number_of_points if (number_of_points > 0) average_points = sum(points) / number_of_points ! Now form average over positive and negative points only if (count(points > 0.) > 0) positive_average = sum(points, points > 0.) / count(points > 0.) if (count(points < 0.) > 0) negative_average = sum(points, points < 0.) / count(points < 0.) ! 
Print result to terminal write (*,'(a,g12.4)') 'Average = ', average_points write (*,'(a,g12.4)') 'Average of positive points = ', positive_average write (*,'(a,g12.4)') 'Average of negative points = ', negative_average end program average Humor During the same FORTRAN standards committee meeting at which the name "FORTRAN 77" was chosen, a satirical technical proposal was incorporated into the official distribution bearing the title "Letter O Considered Harmful". This proposal purported to address the confusion that sometimes arises between the letter "O" and the numeral zero, by eliminating the letter from allowable variable names. However, the method proposed was to eliminate the letter from the character set entirely (thereby retaining 48 as the number of lexical characters, which the colon had increased to 49). This was considered beneficial in that it would promote structured programming, by making it impossible to use the notorious statement as before. (Troublesome statements would also be eliminated.) It was noted that this "might invalidate some existing programs" but that most of these "probably were non-conforming, anyway". When X3J3 debated whether the minimum trip count for a DO loop should be zero or one in Fortran 77, Loren Meissner suggested a minimum trip count of two—reasoning (tongue-in-cheek) that if it was less than two then there would be no reason for a loop! When assumed-length arrays were being added, there was a dispute as to the appropriate character to separate upper and lower bounds. In a comment examining these arguments, Dr. Walt Brainerd penned an article entitled "Astronomy vs. Gastroenterology" because some proponents had suggested using the star or asterisk ("*"), while others favored the colon (":"). Variable names beginning with the letters I–N have a default type of integer, while variables starting with any other letters defaulted to real, although programmers could override the defaults with an explicit declaration. This led to the joke: "In FORTRAN, GOD is REAL (unless declared INTEGER)." See also f2c FORMAC List of Fortran compilers List of Fortran numerical libraries List of programming languages Matrix representation Row-major order Spaghetti code References Further reading Language standards Informally known as FORTRAN 66. Also known as ISO 1539–1980, informally known as FORTRAN 77. Informally known as Fortran 90. Informally known as Fortran 95. There are a further two parts to this standard. Part 1 has been formally adopted by ANSI. Informally known as Fortran 2003. Informally known as Fortran 2008. Related standards Other reference material Books Arjen, Markus (2012), "Modern Fortran in Practice", Cambridge Univ. Press, ISBN 978-1-13908479-6. Articles External links ISO/IEC JTC1/SC22/WG5—the official home of Fortran standards Fortran Standards Documents—GFortran standards fortran-lang.org (2020). History of FORTRAN and Fortran II—Computer History Museum Valmer Norrod, et al.: A self-study course in FORTRAN programing—Volume I—textbook, Computer Science Corporation El Segundo, California (April 1970). NASA (N70-25287). Valmer Norrod, Sheldom Blecher, and Martha Horton: A self-study course in FORTRAN programing—Volume II—workbook, NASA CR-1478 (April 1970), NASA (N70-25288). 
American inventions Array programming languages Computer standards Numerical programming languages Object-oriented programming languages Procedural programming languages High-level programming languages IBM software Programming languages created in 1957 Programming languages with an ISO standard Statically typed programming languages Unix programming tools
10347776
https://en.wikipedia.org/wiki/HDHomeRun
HDHomeRun
HDHomeRun is a network-attached digital television tuner box, produced by the company SiliconDust USA, Inc.. Overview Unlike standard set-top box (or set-top unit) appliances, HDHomeRun does not have a video output that connects directly to the user's television. Instead it receives a live TV signal and then streams the decoded video over a local area network to an existing smart phone, tablet computer, smart tv, set top streaming device, computer, or game console. This allows it to stream content to multiple viewing locations. General details There are currently a number of HDHomeRun models on the market: single-tuner ATSC/clear QAM dual-tuner ATSC/clear QAM dual-tuner commercial (TECH) ATSC/clear QAM dual-tuner DVB-T/unencrypted DVB-C three tuner CableCard/clear QAM All models are designed to receive unencrypted digital broadcast or cable television and stream it over a network for use by any PC on the network. HDHomeRun normally receives an IP address via DHCP but will also work via an auto IP address if no DHCP server is available. The HDHomeRun Windows driver presents the tuners as standard BDA tuners, enabling BDA-compliant applications to work with the HDHomeRun. The HDHomeRun can also be controlled via a command-line application which is available for Windows, Mac OS X, Linux, FreeBSD, and other POSIX-compliant operating systems. The control library is open source and is available under the LGPL for use in custom applications. Select retail packaged HDHomeRun units are distributed with Arcsoft TotalMedia Theatre. Technical specifications ATSC models 8VSB (ATSC over-the-air digital TV) QAM 64/256 (unencrypted digital cable TV) CableCard (encrypted digital cable TV) 100Mbit Ethernet RJ45 connection QAM model QAM - North American cable with protected content support for USA cable using CableCARD for access control ISDB model ISDB-T - South America over the air DVB models DVB-T / DVB-T2 over-the-air unencrypted digital TV DVB-C - QAM 64/128/256 (annex A/C) unencrypted digital cable TV 6/7/8 MHz channel bandwidth (Australia, Europe, New Zealand, Taiwan) 100Mbit Ethernet RJ45 connection Compatibility The HDHomeRun can be controlled and viewed from a wide variety of DVR/PVR software. Microsoft provides Windows Media Center for Windows XP through 8, but discontinued the product in Windows 10. Apple macOS 10 runs EyeTV 3. Linux runs Myth TV. Newer models of HDHomeRun are DLNA device compatible. HDHomeRun Tuners Consumer Tuners Commercial Tuners Sources: HDHomeRun PRIME Introduced Fall 2011, the HDHomeRun PRIME provided the ability to view and record all the digital cable channels they subscribe to without using a cable supplied set-top-box. The device employed a CableCARD to replace the set-top-box. The rental fee was usually much less than the rental fee for the cable box. The HDHomeRun PRIME integrated easily with Windows Media Center (WMC), included with Windows 7 and available with Windows 8, and turned your PC into an HD DVR. With 3 tuners, the PRIME let you record two programs and watch another live all at the same time. Although a few HDHomeRun Prime boxes can be found for sale on the internet, they are no longer in production. Cessation of production has been attributed to a change in FCC regulation, which no longer requires cable service providers to make CableCARDs available to their customers. CableCards are still available from many US digital cable providers. The Prime can be used without a CableCard in cable systems that still have clear QAM channels available. 
The Prime does not support digital cable on demand, but can receive PPV that is ordered on the phone from your cable provider. The Prime is a dedicated cable TV device with no ATSC tuner and thus can not be used with an antenna. Windows Media Center is the most widely used software available for use with the Prime, and the only one with Digital Rights Management (DRM) to view and record premium cable channels like HBO. Many other software options are available; please see the list in this Wiki and the Silicondust website for details. When running WMC, an additional, separate ATSC tuner can be used with the PC and WMC will combine both the Prime and ATSC tuner in the guide for live TV and recording. By default WMC has a 4 tuner limit for each type (ATSC, CableCard) of tuner, but a 3rd party software product called TunerSalad increases the number of tuners to 32 per type; you can use up to 32 cable tuners (11 Primes = 33 cable tuners) and 32 ATSC tuners, for a total of 64 tuners. There is also another 3rd party software called My Channel Logos that adds channel logos to the WMC channel guide. For detailed discussion of WMC, please see TheGreenButton.tv website. WMC was included with Windows 7 but is an additional $100 for Windows 8/8.1 and an additional $10 for Windows 8/8.1 Pro. It is not available from Microsoft on Windows 10 but members at The Green Button are developing a way to use a modified version of WMC with Windows 10. HDHomeRun DVR The HDHomeRun DVR is a DVR software designed for installation on a network-attached storage device. It is intended to be used with a HDHomeRun tuner and is expected to overcome digital rights management complications. HDHomeRun Premium TV Launched in 2018, HDHomeRun Premium TV was a virtual MVPD service that worked with the HDHomeRun Prime and HDHomeRun DVR service. A unique feature of this service over most other MVPDs was the ability to record the channel streams to a local hard drive for time-shifted viewing. In March 2019, HDHomeRun announced that it would shut down its Premium service. See also Monsoon HAVA Dreambox DBox2 Slingbox LocationFree Player References External links Television technology Television placeshifting technology
22224628
https://en.wikipedia.org/wiki/B2W%20Software
B2W Software
B2W Software (formerly known as Bid2Win Software) is a privately held company based in Portsmouth, New Hampshire, that develops specialized estimating and bid management software for heavy construction contractors, covering construction estimating and bidding, field tracking and analysis, equipment maintenance, resource scheduling and dispatching, and eForms and reporting. Founded in 1993 under the name Niche Software, the company began with an estimating and bidding software program for the construction industry. It rebranded once more in January 2013, changing its name from BID2WIN Software to B2W Software. The B2W Software ONE Platform includes individual modules, or elements: B2W Estimate (estimating and bidding); B2W Track (field tracking and analysis), released in January 2008; B2W Schedule (resource scheduling and dispatching), released in 2018; B2W Maintain (equipment maintenance and repair management), released in 2013; and B2W Inform (eForms and reporting). References 1993 establishments in New Hampshire Companies based in Portsmouth, New Hampshire Software companies established in 1993 Software companies based in New Hampshire Software companies of the United States
14435941
https://en.wikipedia.org/wiki/Amiga%20productivity%20software
Amiga productivity software
This article deals with productivity software created for the Amiga line of computers, covering the AmigaOS operating system and its derivatives AROS and MorphOS; it is split from the main article Amiga software. See also the related articles Amiga Internet and communications software, Amiga music software, Amiga programming languages, and Amiga support and maintenance software for other information about software that runs on the Amiga. History The Amiga originally supported such prestigious software titles as WordPerfect, Electronic Arts' Deluxe Paint, and Lattice C. Newtek's Video Toaster, one of the first all-in-one graphics and video editing packages, began on the Amiga. The Video Toaster was one of the few accessories for the "big box" Amigas (2000, 3000 and 4000) that used the video slot, and it enabled users to turn their Amiga into the heart of an entire TV production suite. The later addition of NewTek's Video Flyer made possible the first non-linear video editing program for the Amiga. The Amiga brought 3D raytracing to the masses with Sculpt 3D; before the Amiga, raytracing was only available on dedicated graphics workstations such as those from SGI. Other raytracing software included TurboSilver. The Amiga was well known for its 3D rendering capability, with many titles added over the years. Some titles were later ported to Microsoft Windows and continue to thrive there, such as the rendering software Cinema 4D from Maxon and LightWave from NewTek, which was originally part of the Video Toaster. The Video Toaster itself has even been ported to the Windows platform. LightWave was used for low-cost computer-generated special effects during the early 1990s, with Babylon 5 being a notable example of a TV series that used it. Even Microsoft produced software for the Amiga: AmigaBASIC, an advanced BASIC development environment complete with an integrated development environment (IDE), was written by Microsoft under contract. Graphics software The Amiga launched in 1985 with a strong emphasis on graphics, more so than other personal computers of its age, thanks to its distinctive hardware and multimedia chipset. The graphics chip Agnus could access RAM directly with DMA (Direct Memory Access) and contained the Blitter and Copper circuits, which could move blocks of pixels on the screen and interact directly with the electron beam of the TV set. It could render screens in a range of color depths (2, 4, 8, 16, 32 and 64 colors, plus the 4096-color HAM modes) at resolutions from 320x200 up to 720x576 pixels. A vast number of graphics programs were released for the Amiga, such as Graphicraft, Deluxe Paint, TVPaint, Photon Paint, Brilliance! (a program designed around the suggestions and wishes of the well-known computer artist Jim Sachs), Aegis Images, ArtEffect, fxPAINT by IOSpirit, Personal Paint from Cloanto, Photogenics, Express Paint, Digi Paint, XiPaint, PerfectPaint, and SketchBlock, a 24-bit painting program by Andy Broad for AmigaOS 4.x users. Graphic applications on AmigaOne systems Unlike Commodore Amiga systems, AmigaOne systems have no integrated multimedia chipset. Similar to a Mac or PC, AmigaOne systems use AGP/PCIe graphics cards and embedded AC'97 audio, and can use PCI/PCIe audio cards, including some professional models.
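As a rough illustration of the display modes just described, the sketch below computes the chip-RAM footprint of a few planar screens. It assumes the chipset's planar bitmap layout (one bit per pixel per bitplane, with HAM using six bitplanes) and ignores overscan, sprites and Copper lists; it is a back-of-the-envelope calculation, not a hardware specification.

# Rough chip-RAM footprint of planar Amiga screens (illustrative only).
# Assumes 1 bit per pixel per bitplane; HAM uses 6 bitplanes.
modes = [
    ("320x200, 32 colours (5 bitplanes)", 320, 200, 5),
    ("320x200, HAM 4096 colours (6 bitplanes)", 320, 200, 6),
    ("720x576, 16 colours (4 bitplanes)", 720, 576, 4),
]
for name, w, h, planes in modes:
    bytes_needed = w * h * planes // 8   # planar layout: w*h bits per plane
    print(f"{name}: {bytes_needed / 1024:.1f} KiB of chip RAM")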
The combination of faster CPUs and standard expansion graphics cards led to a new generation of graphics software for the AmigaOne machines, such as the Hollywood "Visual Programming Suite", and also made it easy to port modern open-source software such as Blender. Visual programming Hollywood, a suite of programs by the German software house Airsoft Softwair, is a multimedia and presentation package available for all Amiga systems (AmigaOS, MorphOS, AROS); more recently a version of Hollywood became available for Microsoft Windows as well. It is able to load Scala projects and Microsoft PowerPoint ".PPT" files. Its module Hollywood Designer is not only a modern multimedia authoring tool but also a complete cross-platform multimedia application layer capable of creating whole Amiga programs through a visual design approach. It can save executables in various formats: 68k Amiga, WarpUP, AmigaOS 4 and MorphOS executables, and Intel x86 code for AROS. Recent versions of Hollywood can also create executable programs for Intel Windows machines and for Mac OS X on both PowerPC and Intel processors. Modern graphic software Fairly modern graphics software is available for AmigaOne machines, and some of it is still usable on classic Amiga platforms. TVPaint was launched in 1991 and was one of the first commercial 32-bit paint programs on the market. The last Amiga version (3.59) was released in 1994 and is now distributed publicly, although the source code remains proprietary. It is still a capable graphics program and continues to be used despite its age, thanks to its ease of use and its large feature set. Programs like Candy Factory for AmigaOS 4.0 create special effects for images, brushes and fonts, producing polished web objects and buttons for web page design. Pixel image editor, formerly Pixel32, is available for MorphOS. Blender is one of the best-known open-source cross-platform 3D packages, and a first pre-release of GIMP is available on AmigaOS 4.0 through the AmiCygnix X11 graphics engine. Beginning with release 2.1 in 2008, MorphOS has included its own standard paint utility called Sketch, simple but powerful, and AROS bundles the last free version of Lunapaint, which has since become a commercial paint program for various operating systems. Graphic utilities As on any operating system, graphics work is not limited to bitmap and vector paint programs: around these main drawing and image-manipulation applications there existed a large market of specialized graphic utilities created to support them. Professional-grade examples on the Amiga included Cinematte, CineMorph, Morph Plus, Impact!, Essence, Magic Lantern and Pixel 3D Pro, which were only some of the best-known utilities available to skilled and professional Amiga users in the platform's golden age. Cinematte lets the user make complex photo-realistic composites of subjects photographed against a bluescreen or green screen background, using the same techniques employed in motion picture technology for precise bluescreen compositing.
CineMorph automatically creates morphing effects between two source images, producing a blended third image or even a complete animation of the morph. Morph Plus offered similar effects to CineMorph. Impact! created physics simulations in 3D scenes. Essence was a texture generator for applying textures to the surfaces of objects created in 3D raytracing programs. Magic Lantern was a true-color animation compressor and player for the Amiga, and Pixel 3D Pro was used to create 3D object models, save them in various 3D formats, or convert models from one 3D file format to another. Vector graphics The most widely used vector graphics formats on the Amiga are EPS and IFF DR2D. This stems from the fact that the Amiga was the first platform to run Ghostscript natively, and from IFF DR2D being the original vector format of Amiga ProVector, later adopted by other applications such as Art Expression and Professional Draw. The most widely used Amiga drawing and vector graphics programs are Aegis Draw, ProDraw (Professional Draw) from Gold Disk Inc., DrawStudio, Art Expression and ProVector; for basic vector graphics, the tools in Professional Page and PageStream are also useful. The most modern vector graphics programs on the Amiga are MindSpace 1.1, aimed mainly at designing flowcharts, mind maps, UML and other diagrams, and Steam Draw, a simple 2D vector paint program available for MorphOS. Flash and SWF SWFTools is a collection of command-line programs for converting various raster (bitmap) image formats to and from the Flash SWF vector animation format. Tracing software The widely used, freely distributable bitmap-to-vector tracing tools Autotrace, Potrace and XTrace are available on AmigaOS and also run on the open-source Amiga clone AROS and the Amiga-like MorphOS. The desktop publishing package PageStream includes a tracing utility as bundled software, and the structured drawing program ProVector had an optional add-on tracing utility named StylusTracer. DXF, EMF, SVG file formats Various programs can read DXF (almost all Amiga CAD programs), EMF, SVG, CGM, GEM and WMF; an example of a converter that reads many formats and outputs DR2D is the Amiga program MetaView. There is also an SVG datatype that lets any program, directly through the OS, load and save files in SVG (Scalable Vector Graphics) format. Computer aided design At its debut the Amiga was considered the most powerful graphics platform available at a reasonable price. Various CAD programs were available for it, such as X-CAD, IntelliCAD, DynaCaDD, MaxonCAD and IntroCAD, as well as programs to design and test electronic circuits, such as ElektroCAD. Animation, Comics and Cartoons Thanks to the Amiga's multimedia capabilities and its blitter circuit, the machine could handle advanced animation and video authoring at a professional level in the 1980s, and a large amount of software was created for this segment of the professional video market. Animation programs available for the Amiga included Aegis Animator, Lights!Camera!Action!, DeLuxe Video, Disney Animation Studio, Deluxe Paint from version 3 onwards, The Director (a BASIC-like language oriented to animation), Scala, Vision from Commodore itself, VisualFX from ClassX, the Adorage multi-effect program from proDAD, Millennium from Nova Design, ImageFX, and Art Department Pro.
Various animation programs existed on the Amiga. Comic Setter was an interesting tool for creating printed comics: the user arranged brushes representing comic characters, combined them with background images, and superimposed frames and speech "balloons" with their own text and captions; the finished comics could then be printed in color. Disney Animation Studio was one of the most powerful 2D animation programs released on the Amiga. Equipped with a complete cel-frame preview feature, it was used by many animation studios worldwide in its day and is still used by several studios in Europe as a preview tool, although the software is now mainly used by independent and amateur animators. Authoring and VideoFX In its golden age the Amiga could count on a vast range of animation and video authoring software: Aegis Animator, Lights!Camera!Action!, DeLuxe Video, Disney Animation Studio, Deluxe Paint from version 3 onwards, The Director (a BASIC-like language oriented to animation), Scala, AmigaVision from Commodore itself, VisualFX from ClassX, the Adorage multi-effect program from proDAD, Wildfire by Andreas Maschke (later ported by its author to Java), Millennium from Nova Design, ImageFX from Nova Design, and Art Department Pro. 3D modeling, rendering and animation 3D rendering and animation software includes Sculpt 3D, TurboSilver, Aladdin4D, Videoscape 3D, Caligari, Maxon Cinema 4D, Imagine, LightWave from NewTek, Real 3D from Realsoft, the terrain-rendering programs Vista Pro and World Construction Set, and Tornado3D by the Italian company Eyelight. Amateur and professional video editing The Amiga was one of the first commercial computer platforms to allow amateur and professional video editing, thanks to its ability to connect to TV sets and video equipment, handle chroma key and genlock signals at full screen with overscan, and deliver a good signal-to-noise ratio. In the nineties the Amiga and its video peripherals (mainly genlock and digitizing boxes) were available at reasonable prices, which made the Amiga one of the leading platforms in the professional video market. It was also capable of broadcast video production (with the NewTek Video Toaster), and around 1992–1994, despite Commodore's demise, the Amiga enjoyed its golden age as a professional video platform, with a vast amount of video software, graphics facilities and ready-made image libraries available for video productions. Among this software it is worth mentioning the main Amiga desktop-video editing programs, offering both linear and non-linear editing with 4:2:2 capabilities: NewTek's non-linear editor supplied with the Video Toaster Flyer module for the Video Toaster, known simply as NLE (Non Linear Editing); Amiga MainActor; Broadcaster 32 and Broadcaster Elite (with the Producer software); Wildfire by Andreas Maschke for visual effects (now in Java); and the PAR, VLab Motion (with the MovieShop software) and VLab Pro expansion cards. Word processing and page layout While desktop video proved to be a major market for the Amiga, a surge of word processing, page layout and graphics software filled out professional needs, starting with the first Amiga text program, Textcraft, a cross between a real word processor and an advanced text editor that could nevertheless change page layouts, alter fonts and their size and colors, and add color images to the text.
Notable word processing programs for the Amiga included the then-industry standard WordPerfect (up to version 4.1), Shakespeare, Excellence, Maxon Word, Final Writer, AmigaWriter, Scribble!, ProWrite, Wordworth and the small Personal Write from Cloanto. Page layout software included PageSetter and Professional Page from Gold Disk Inc., and PageStream by Soft-Logik, known today as Grasshopper LLC. Only PageStream was ported to other platforms and continues to be developed and supported by its developers. Graphics software included vector drawing applications such as Art Expression from Soft-Logik, ProVector by Stylus, Inc. (formerly Taliesin), DrawStudio, and Professional Draw from Gold Disk Inc. The Amiga lacked an office suite in the modern sense, but integrated software was available. Pen Pal was a word processor integrated with a database and a form editor. Scribble!, Analyze! and Organize! were bundled together as The Works!, a suite combining a word processor, spreadsheet and database; despite the similarity in name, it had no connection to Microsoft Works. The LaTeX typesetting system was available in two ports: AmigaTeX (no longer available), which could be used with a front-end editing program, and PasTEX, available from the Aminet repository. Modern software AbiWord is available today on AmigaOS 4.0 through the AmiCygnix X11 graphical engine, and Scriba and a pre-release of Papyrus Office are available for MorphOS. Text editors Text editors available on the Amiga include Vim, Emacs and MicroEMACS (bundled with the OS), Cygnus Editor (also known as CED), and GoldED, which evolved in 2006 into Cubic IDE. The UNIX editor ne and the vi clone Vim were initially developed on the Amiga. Development of text editors never stopped on the Amiga: since 2001 MorphOS has offered MorphEd, a limited edition of GoldEd, and since 2008 Cinnamon Writer and NoWinED, a universal editor that runs on any Amiga-like platform, have been available. Cinnamon Writer gains new features with each release and aspires to become a full-featured word processor. Database and spreadsheets In the Amiga's early years (1986–1989) cross-platform spreadsheets were available, such as MaxiPlan, which also existed for MS-DOS and the Macintosh, and Logistix (styled LoGisTiX), one of the first spreadsheets for the Amiga. Microfiche Filer Plus was a database that gave the user the experience of exploring data as if using microfilm. SuperBase, one of the finest programs available for the C64, was ported to the Atari, the Amiga and later the PC; on the Amiga it became a standard reference, available in two versions, Superbase Personal and Superbase Professional. It could handle SQL databases, had an internal BASIC-like query language, and could build forms and input masks on records and store multimedia files in its records years before Microsoft Access. Superbase also featured VCR-style control buttons for browsing the records of any database. Softwood File II was another simple multimedia database, which later evolved into Final Data, a good database for the Amiga from Softwood Inc. From the same company came Final Calc, a very powerful spreadsheet, comparable to TurboCalc from the German publisher Schatztruhe. ProChart was a tool for drawing flow charts and diagrams. Analyze! was a fairly full-featured (for the time) spreadsheet developed for the Amiga, and Organize! was a flat-file database package. The Gnumeric spreadsheet has also been ported to the Amiga through the AmiCygnix X11 engine.
More recently MUIbase appeared, and the cross-platform MySQL database has also become a reference point on the Amiga. SQLite, a self-contained, embeddable, zero-configuration SQL database engine, is likewise available on AmigaOS 4 and MorphOS. In February 2010, the Italian programmer Andrea Palmatè ported the iODBC standard to AmigaOS 4. Science, entertainment and special use programs Maple V, one of the leading general-purpose mathematics packages (sometimes described as mathematical CAD), was also available for the Amiga and was appreciated by many scientists who used the platform in its day. Distant Suns, Galileo, Digital Almanac and Amiga Digital Universe (from Bill Eaves for AmigaOS 4) were night-sky exploration programs and astronomical calculators. During the CDTV era many history, science and art CDs were available, such as Timetable of Science, Innovation, Timetable of Business, Politics, Grolier's Encyclopedia, Guinness Disk of Records, Video Creator, American Heritage Dictionary, the Illustrated Holy Bible and the Illustrated Works of Shakespeare. Entertainment Literally hundreds of entertainment titles existed for the Amiga. Some notable programs for children and learning were: the Adventures in Math series of floppy disks from Free Spirit Software; the Animal Kingdom series from Unicorn Software; Art School; the Barney Bear series; the Discovery series, including Discovery Trivia; Donald's Alphabet Chase, Mickey's 123's and Mickey's ABC's by Disney Software; the Electric Crayon and FernGully series of educational coloring-book software (FernGully was based on the animated feature film); the Fun School series; the Kid Pix disks from Broderbund, a software house famous in the nineties; the Miracle Piano Teaching System for teaching music to children; various Mother Goose tales; and World Atlas by Centaur Software. Fractals, virtual reality, artificial intelligence ZoneXplorer by Elena Novaretti is considered by Amiga users to be one of the best fractal-exploration programs ever made for the Amiga, if not for any platform. In 1989 the X-Specs 3D glasses from Haitex Resources, one of the first interactive 3D solutions for home computers, were released. Also created on the Amiga were Mandala from Vivid Group Inc., a non-immersive, camera-based interactive virtual reality system for television, and the Virtuality 1000 CS from W-Industries (later Virtuality Inc.), a fully immersive 3D simulator based on the A3000 and used for game entertainment in large arcade installations and theme parks. Magellan v1.1 (Artificial Intelligence Software), not to be confused with Directory Opus Magellan, was a program for building artificial-intelligence-style responses on the Amiga using heuristic rules in the manner of supervised machine learning. The user worked with the program's decision trees and decision tables, entering objects and desired outputs and describing the conditions and rules the machine should follow in order to produce pseudo-intelligent solutions to given problems. Route planning AmiATLAS v6 was a complete route-planning tool for Amiga computers. It provided worldwide interactive maps and found optimal routes between locations, and it also featured multiple map loading, an integrated CityGuide system with information on interesting towns, places and regions (some with pictures), and details of many parks and points of interest.
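SQLite, mentioned at the start of this section, follows the same self-contained, zero-configuration model on every platform, so its usage pattern can be illustrated with any binding. The snippet below uses Python's built-in sqlite3 module purely as an illustration of that embedded, serverless approach; it is not an Amiga-specific API.

# Illustrative use of SQLite's embedded, zero-configuration model:
# the whole database lives in a single local file, no server required.
import sqlite3

con = sqlite3.connect("catalog.db")       # creates the file if missing
con.execute("CREATE TABLE IF NOT EXISTS programs (name TEXT, category TEXT)")
con.executemany("INSERT INTO programs VALUES (?, ?)",
                [("Personal Paint", "graphics"), ("Final Writer", "word processing")])
con.commit()
for row in con.execute("SELECT name FROM programs WHERE category = ?", ("graphics",)):
    print(row[0])
con.close()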
Personal organizer, notebook, diary software Digita Organizer v1.1 from Digita International was one of the best Amiga programs for keeping track of dates, meetings, expiry dates and similar reminders; PolyOrga, by Frédéric Rignault, serves a similar purpose on MorphOS. Personal budget, home banking, accounts Easy Banker, Home Accounts, Small Business Accounts, Small Business Manager, Account Master, Accountant, AmigaMoney, Banca Base III, HomeBank, CashMaster, Counting House, etc. Software for special purposes AVT (Amiga Video Transceiver) was a software and hardware slow-scan television system originally developed by Black Belt Systems (USA) around 1990 for the Amiga, popular all over the world before the IBM PC family gained sufficient audio quality with the help of special sound cards. Richmond Sound Design (RSD) created both show control (MSC, or "MIDI Show Control") and theatre sound design software that was used extensively in the theatre, theme park, display, exhibit, stage-managing, show and themed entertainment industries in the 1980s and 1990s. At one point in the mid-1990s many high-profile shows at major theme parks around the world were being controlled by Amigas running software simply called Stage Manager, which later evolved into a Microsoft Windows version called ShowMan. There were dozens of such installations at Walt Disney World alone, and more at all other Disney, Universal Studios, Six Flags and Madame Tussauds properties, as well as in many venues in Las Vegas, including The Mirage hotel volcano and the Siegfried and Roy show, the MGM Grand EFX show, Broadway theatre, London's West End, the Royal Shakespeare Company's many venues, most of the theatres of Branson, Missouri, and scores of theatres on cruise ships, amongst hundreds of others. RSD purchased used Amigas on the web and reconditioned them to provide enough systems for all the shows that specified them, and only stopped providing new Amiga installations in 2000. There are still an unknown number of shows on cruise ships and in themed venues being run by Amigas. References An interview with the Italian cartoon studio Strane Mani is available at Amiworld.it (in Italian). Information on Virtuality is available at the Amiga Hardware site. Amiga Productivity Lists of software Multimedia New media Animation software Multimedia software
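The AVT package mentioned above relies on the general slow-scan television idea of sending a picture as audio, one scanline at a time, with pixel brightness mapped to tone frequency. The snippet below is only a conceptual sketch of that mapping, using the conventional SSTV range of roughly 1500 Hz for black to 2300 Hz for white; it does not reproduce AVT's actual mode, timing or synchronisation.

# Conceptual SSTV-style encoding: map one scanline's brightness values
# (0..255) to audio tone frequencies between 1500 Hz (black) and 2300 Hz (white).
BLACK_HZ, WHITE_HZ = 1500.0, 2300.0

def line_to_frequencies(pixels):
    return [BLACK_HZ + (p / 255.0) * (WHITE_HZ - BLACK_HZ) for p in pixels]

scanline = [0, 64, 128, 192, 255]          # a tiny 5-pixel example line
print([round(f, 1) for f in line_to_frequencies(scanline)])
# -> [1500.0, 1700.8, 1901.6, 2102.4, 2300.0]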
1000474
https://en.wikipedia.org/wiki/PLATO%20%28computer%20system%29
PLATO (computer system)
PLATO (Programmed Logic for Automatic Teaching Operations) was the first generalized computer-assisted instruction system. Starting in 1960, it ran on the University of Illinois' ILLIAC I computer. By the late 1970s, it supported several thousand graphics terminals distributed worldwide, running on nearly a dozen different networked mainframe computers. Many modern concepts in multi-user computing were originally developed on PLATO, including forums, message boards, online testing, e-mail, chat rooms, picture languages, instant messaging, remote screen sharing, and multiplayer video games. PLATO was designed and built by the University of Illinois and functioned for four decades, offering coursework (elementary through university) to UIUC students, local schools, prison inmates, and other universities. Courses were taught in a range of subjects, including Latin, chemistry, education, music, Esperanto, and primary mathematics. The system included a number of features useful for pedagogy, including text overlaying graphics, contextual assessment of free-text answers depending on the inclusion of keywords, and feedback designed to respond to alternative answers. Rights to market PLATO as a commercial product were licensed by Control Data Corporation (CDC), the manufacturer on whose mainframe computers the PLATO IV system was built. CDC President William Norris planned to make PLATO a force in the computer world, but found that marketing the system was not as easy as hoped. PLATO nevertheless built a strong following in certain markets, and the last production PLATO system was in use until 2006. Innovations PLATO was either the first or an early example of many now-common technologies. Hardware: Donald Bitzer's plasma display, and display graphics built from downloadable character sets (fonts). Online communities: notesfiles (a precursor to newsgroups, 1973); term-talk (one-to-one chat); and screen sharing, used by instructors to help students and a precursor of Timbuktu. Common computer game genres, including many of the earliest real-time multiplayer games: multiplayer games (an early example by Rick Bloome); dungeon games, including the first video game boss and likely the first graphical dungeon computer game; space-combat games; flight simulation, which probably inspired UIUC student Bruce Artwick to start Sublogic, whose simulator later became Microsoft Flight Simulator; military simulations; a 3-D maze game based on a story by J. G. Ballard, the first PLATO 3-D walkthrough maze game; a quest simulation, like Trek with monsters, trees and treasures; and solitaire. Educational: training systems, including an ambitious ICAI programming system featuring partial-order plans, used to train Con Edison steam plant operators. History Impetus Before the 1944 G.I. Bill that provided free college education to World War II veterans, higher education was limited to a minority of the US population, though only about 9% of the population had served in the military. The trend towards greater enrollment was notable by the early 1950s, and the problem of providing instruction for the many new students was a serious concern to university administrators: if computerized automation had increased factory production, perhaps it could do the same for academic instruction. The USSR's 1957 launch of the Sputnik I artificial satellite energized the United States government into spending more on science and engineering education. In 1958, the U.S.
Air Force's Office of Scientific Research held a conference on computer instruction at the University of Pennsylvania; interested parties, notably IBM, presented studies. Genesis Around 1959 Chalmers W. Sherwin, a physicist at the University of Illinois (U of I), suggested a computerised learning system to William Everett, the engineering college dean, who in turn recommended that Daniel Alpert, another physicist, convene a meeting on the matter with engineers, administrators, mathematicians, and psychologists. After weeks of meetings they were unable to agree on a single design. Before conceding failure, Alpert mentioned the matter to laboratory assistant Donald Bitzer, who had been thinking about the problem and suggested he could build a demonstration system. Bitzer, regarded as the Father of PLATO, recognized that good graphics were critical to providing quality computer-based education, at a time when 10-character-per-second teleprinters were the norm. In 1960, the first system, PLATO I, operated on the local ILLIAC I computer. It included a television set for display and a special keyboard for navigating the system's function menus; PLATO II, in 1961, featured two users at once. The PLATO system was re-designed between 1963 and 1969; PLATO III allowed "anyone" to design new lesson modules using its TUTOR programming language, conceived in 1967 by biology graduate student Paul Tenczar. Built on a CDC 1604 given to the team by William Norris, PLATO III could simultaneously run up to 20 terminals and was used by local facilities in Champaign–Urbana that could enter the system with their custom terminals. The only remote PLATO III terminal was located near the state capitol at Springfield High School in Springfield, Illinois. It was connected to the PLATO III system by a video connection and a separate dedicated line for keyboard data. PLATO I, II, and III were funded by small grants from a combined Army-Navy-Air Force funding pool. By the time PLATO III was in operation, everyone involved was convinced it was worthwhile to scale up the project. Accordingly, in 1967, the National Science Foundation granted the team steady funding, allowing Alpert to set up the Computer-based Education Research Laboratory (CERL) at the University of Illinois Urbana–Champaign campus. The system was capable of supporting 20 time-sharing terminals. Multimedia experiences (PLATO IV) In 1972, with the introduction of PLATO IV, Bitzer declared general success, claiming that the goal of generalized computer instruction was now available to all. However, the terminals were very expensive (about $12,000). The PLATO IV terminal had several major innovations: Plasma display screen: Bitzer's orange plasma display incorporated both memory and bitmapped graphics into one display. The display was a 512×512 bitmap, with both character and vector plotting done by hardwired logic. It included fast vector line-drawing capability and ran at 1260 baud, rendering 60 lines or 180 characters per second. Users could provide their own characters to support rudimentary bitmap graphics. Touch panel: A 16×16 grid infrared touch panel, allowing students to answer questions by touching anywhere on the screen. Microfiche images: Compressed air powered a piston-driven microfiche image selector that permitted colored images to be projected on the back of the screen under program control.
A hard drive for Audio snippets: The random-access audio device used a magnetic disc with a capacity to hold 17 total minutes of pre-recorded audio. It could retrieve for playback any of 4096 audio clips within 0.4 seconds. By 1980, the device was being commercially produced by Education and Information Systems, Incorporated with a capacity of just over 22 minutes. A Votrax voice synthesizer The Gooch Synthetic Woodwind (named after inventor Sherwin Gooch), a synthesizer that offered four-voice music synthesis to provide sound in PLATO courseware. This was later supplanted on the PLATO V terminal by the Gooch Cybernetic Synthesizer, which had sixteen voices that could be programmed individually, or combined to make more complex sounds. Bruce Parello, a student at the University of Illinois in 1972, created the first digital emojis on the PLATO IV system. Influence on PARC and Apple Early in 1972, researchers from Xerox PARC were given a tour of the PLATO system at the University of Illinois. At this time, they were shown parts of the system, such as the Insert Display/Show Display (ID/SD) application generator for pictures on PLATO (later translated into a graphics-draw program on the Xerox Star workstation); the Charset Editor for "painting" new characters (later translated into a "Doodle" program at PARC); and the Term Talk and Monitor Mode communications programs. Many of the new technologies they saw were adopted and improved upon, when these researchers returned to Palo Alto, California. They subsequently transferred improved versions of this technology to Apple Inc. CDC years As PLATO IV reached production quality, William Norris (CDC) became increasingly interested in it as a potential product. His interest was twofold. From a strict business perspective, he was evolving Control Data into a service-based company instead of a hardware one, and was increasingly convinced that computer-based education would become a major market in the future. At the same time, Norris was troubled by the unrest of the late 1960s, and felt that much of it was due to social inequalities that needed to be addressed. PLATO offered a solution by providing higher education to segments of the population that would otherwise never be able to afford a university education. Norris provided CERL with machines on which to develop their system in the late 1960s. In 1971, he set up a new division within CDC to develop PLATO "courseware", and eventually many of CDC's own initial training and technical manuals ran on it. In 1974, PLATO was running on in-house machines at CDC headquarters in Minneapolis, and in 1976, they purchased the commercial rights in exchange for a new CDC Cyber machine. CDC announced the acquisition soon after, claiming that by 1985, 50% of the company's income would be related to PLATO services. Through the 1970s, CDC tirelessly promoted PLATO, both as a commercial tool and one for re-training unemployed workers in new fields. Norris refused to give up on the system, and invested in several non-mainstream courses, including a crop-information system for farmers, and various courses for inner-city youth. CDC even went as far as to place PLATO terminals in some shareholder's houses, to demonstrate the concept of the system. In the early 1980s, CDC started heavily advertising the service, apparently due to increasing internal dissent over the now $600 million project, taking out print and even radio ads promoting it as a general tool. 
The Minneapolis Tribune was unconvinced by their ad copy and started an investigation of the claims. In the end, they concluded that while it was not proven to be a better education system, everyone using it nevertheless enjoyed it, at least. An official evaluation by an external testing agency ended with roughly the same conclusions, suggesting that everyone enjoyed using it, but it was essentially equal to an average human teacher in terms of student advancement. Of course, a computerized system equal to a human should have been a major achievement, the very concept for which the early pioneers in CBT were aiming. A computer could serve all the students in a school for the cost of maintaining it, and wouldn't go on strike. However, CDC charged $50 an hour for access to their data center, in order to recoup some of their development costs, making it considerably more expensive than a human on a per-student basis. PLATO was, therefore, a failure as a profitable commercial enterprise, although it did find some use in large companies and government agencies willing to invest in the technology. An attempt to mass-market the PLATO system was introduced in 1980 as Micro-PLATO, which ran the basic TUTOR system on a CDC "Viking-721" terminal and various home computers. Versions were built for the Texas Instruments TI-99/4A, Atari 8-bit family, Zenith Z-100 and, later, Radio Shack TRS-80 and IBM Personal Computer. Micro-PLATO could be used stand-alone for normal courses, or could connect to a CDC data center for multiuser programs. To make the latter affordable, CDC introduced the Homelink service for $5 an hour. Norris continued to praise PLATO, announcing that it would be only a few years before it represented a major source of income for CDC as late as 1984. In 1986, Norris stepped down as CEO, and the PLATO service was slowly killed off. He later claimed that Micro-PLATO was one of the reasons PLATO got off-track. They had started on the TI-99/4A, but then Texas Instruments pulled the plug and they moved to other systems like the Atari, who soon did the same. He felt that it was a waste of time anyway, as the system's value was in its online nature, which Micro-PLATO lacked initially. Bitzer was more forthright about CDC's failure, blaming their corporate culture for the problems. He noted that development of the courseware was averaging $300,000 per delivery hour, many times what the CERL was paying for similar products. This meant that CDC had to charge high prices in order to recoup their costs, prices that made the system unattractive. The reason, he suggested, for these high prices was that CDC had set up a division that had to keep itself profitable via courseware development, forcing them to raise the prices in order to keep their headcount up during slow periods. PLATO V: multimedia Intel 8080 microprocessors were introduced in the new PLATO V terminals. They could download small software modules and execute them locally. It was a way to augment the PLATO courseware with rich animation and other sophisticated capabilities. Online community Although PLATO was designed for computer-based education, perhaps its most enduring legacy is its place in the origins of online community. This was made possible by PLATO's groundbreaking communication and interface capabilities, features whose significance is only lately being recognized by computer historians. PLATO Notes, created by David R. 
Woolley in 1973, was among the world's first online message boards, and years later became the direct progenitor of Lotus Notes. PLATO's plasma panels were well suited to games, although its I/O bandwidth (180 characters per second or 60 graphic lines per second) was relatively slow. With 1500 shared 60-bit variables per game (initially), it was possible to implement online multiplayer games. Even though it was an educational computer system, much of the user community was keenly interested in games. In much the same way that the PLATO hardware and development platform inspired advances elsewhere (such as at Xerox PARC and MIT), many popular commercial and Internet games ultimately derived their inspiration from PLATO's early games. As one example, Castle Wolfenstein by PLATO alum Silas Warner was inspired by PLATO's dungeon games (see below), in turn inspiring Doom and Quake. Thousands of multiplayer online games were developed on PLATO from around 1970 through the 1980s, with the following notable examples: Daleske's Empire, a top-view multiplayer space game based on Star Trek; either Empire or Colley's Maze War is the first networked multiplayer action game. It was ported to Trek82, Trek83, ROBOTREK, Xtrek, and Netrek, and also adapted (without permission) for the Apple II computer by fellow PLATO alum Robert Woodhead (of Wizardry fame) as a game called Galactic Attack. The original Freecell by Alfille (from Baker's concept). Fortner's Airfight, probably the direct inspiration for (PLATO alum) Bruce Artwick's Microsoft Flight Simulator. Haefeli and Bridwell's Panther (a vector-graphics tank-war game anticipating Atari's Battlezone). Many other first-person shooters, most notably Bowery's Spasim and Witz and Boland's Futurewar, believed to be the first FPS. Countless games inspired by the role-playing game Dungeons & Dragons, including the original Rutherford/Whisenhunt and Wood dnd (later ported to the PDP-10/11 by Lawrence, who had earlier visited PLATO), believed to be the first dungeon crawl game; it was followed by Moria, Rogue, Dry Gulch (a western-style variation), and Bugs-n-Drugs (a medical variation)—all presaging MUDs (Multi-User Domains) and MOOs (MUDs, Object Oriented) as well as popular first-person shooters like Doom and Quake, and MMORPGs (massively multiplayer online role-playing games) like EverQuest and World of Warcraft. Avatar, PLATO's most popular game, is one of the world's first MUDs and has over 1 million hours of use. The games Doom and Quake can trace part of their lineage back to PLATO programmer Silas Warner. PLATO's communication tools and games formed the basis for an online community of thousands of PLATO users, which lasted for well over twenty years. PLATO's games became so popular that a program called "The Enforcer" was written to run as a background process to regulate or disable game play at most sites and times – a precursor to parental-style control systems that regulate access based on content rather than security considerations. In September 2006 the Federal Aviation Administration retired its PLATO system, the last system running the PLATO software on a CDC Cyber mainframe, from active duty. Existing PLATO-like systems now include NovaNET and Cyber1.org. By early 1976, the original PLATO IV system had 950 terminals giving access to more than 3500 contact hours of courseware, and additional systems were in operation at CDC and Florida State University.
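The figures quoted in this section give a sense of what early PLATO game authors had to work with. The back-of-the-envelope sketch below uses the 1500 shared 60-bit variables and the 180 character-per-second terminal throughput mentioned above; the 32-by-64 character screen size is an assumption derived from the 512×512 display and the 8-by-16 character cells described elsewhere in this article.

# Back-of-the-envelope figures for early PLATO multiplayer games,
# based on the numbers quoted in this article.
shared_vars = 1500          # shared variables per game (initially)
bits_per_var = 60           # CDC word size
shared_bytes = shared_vars * bits_per_var / 8
print(f"Shared game state: about {shared_bytes / 1024:.1f} KiB")    # ~11.0 KiB

chars_per_sec = 180
full_text_screen = 32 * 64  # assumed 32 rows x 64 columns of 8x16 characters
print(f"Repainting a full text screen: about {full_text_screen / chars_per_sec:.1f} s")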
Eventually, over 12,000 contact hours of courseware were developed, much of it by university faculty for higher education. PLATO courseware covers a full range of high-school and college courses, as well as topics such as reading skills, family planning, Lamaze training and home budgeting. In addition, authors at the University of Illinois School of Basic Medical Sciences (now the University of Illinois College of Medicine) devised a large number of basic science lessons and a self-testing system for first-year students. However, the most popular "courseware" remained the multi-user games and role-playing video games such as dnd, although it appears CDC was uninterested in this market. As the value of a CDC-based solution disappeared in the 1980s, interested educators ported the engine first to the IBM PC, and later to web-based systems. Custom Character Sets In the early 1970s, some people working in the modern foreign languages group at the University of Illinois began working on a set of Hebrew lessons, originally without good system support for leftward writing. In preparation for a PLATO demo in Teheran in which Bruce Sherwood would participate, Sherwood worked with Don Lee to implement support for leftward writing, including Persian (Farsi), whose writing system is based on that of Arabic. There was no funding for this work, which was undertaken only due to Sherwood's personal interest, and no curriculum development occurred for either Persian or Arabic. However, Peter Cole, Robert Lebowitz, and Robert Hart used the new system capabilities to redo the Hebrew lessons. The PLATO hardware and software supported the design and use of one's own 8-by-16 characters, so most languages could be displayed on the graphics screen (including those written right-to-left). University of Illinois School of Music PLATO Project (Technology and Research-based Chronology) A PLATO-compatible music language known as OPAL (Octave-Pitch-Accent-Length) was developed for these synthesizers, along with a compiler for the language, two music text editors, a filing system for music binaries, programs to play the music binaries in real time and print musical scores, and many debugging and compositional aids. A number of interactive compositional programs were also written. Gooch's peripherals were heavily used for music education courseware such as that created by the University of Illinois School of Music PLATO Project. From 1970 to 1994, the University of Illinois (U of I) School of Music explored the use of the Computer-based Education Research Laboratory (CERL) PLATO computer system to deliver online instruction in music. Led by G. David Peters, music faculty and students worked with PLATO's technical capabilities to produce music-related instructional materials and experimented with their use in the music curriculum. Peters began his work on PLATO III. By 1972, the PLATO IV system made it technically possible to introduce multimedia pedagogies that were not available in the marketplace until years later. Between 1974 and 1988, 25 U of I music faculty participated in software curriculum development and more than 40 graduate students wrote software and assisted the faculty in its use. In 1988, the project broadened its focus beyond PLATO to accommodate the increasing availability and use of microcomputers. The broader scope resulted in renaming the project to The Illinois Technology-based Music Project.
Work in the School of Music continued on other platforms after the CERL PLATO system shutdown in 1994. Over the 24-year life of the music project, its many participants moved into educational institutions and into the private sector. Their influence can be traced to numerous multimedia pedagogies, products, and services in use today, especially by musicians and music educators. Significant early efforts Pitch recognition/performance judging In 1969, G. David Peters began researching the feasibility of using PLATO to teach trumpet students to play with increased pitch and rhythmic precision. He created an interface for the PLATO III terminal. The hardware consisted of (1) filters that could determine the true pitch of a tone, and (2) a counting device to measure tone duration. The device accepted and judged rapid notes, two notes trilled, and lip slurs. Peters demonstrated that judging instrumental performance for pitch and rhythmic accuracy was feasible in computer-assisted instruction. Rhythm notation and perception By 1970, a random-access audio device was available for use with PLATO III. In 1972, Robert W. Placek conducted a study that used computer-assisted instruction for rhythm perception. Placek used the random-access audio device attached to a PLATO III terminal, for which he developed music notation fonts and graphics. Students majoring in elementary education were asked to (1) recognize elements of rhythm notation, and (2) listen to rhythm patterns and identify their notations. This was the first known application of the PLATO random-access audio device to computer-based music instruction. Study participants were interviewed about the experience and found it both valuable and enjoyable. Of particular value was PLATO's immediate feedback. Though participants noted shortcomings in the quality of the audio, they generally indicated that they were able to learn the basic skills of rhythm notation recognition. The PLATO IV terminal included many new devices and yielded two notable music projects: Visual diagnostic skills for instrumental music educators By the mid-1970s, James O. Froseth (University of Michigan) had published training materials that taught instrumental music teachers to visually identify typical problems demonstrated by beginning band students. For each instrument, Froseth developed an ordered checklist of what to look for (e.g., posture, embouchure, hand placement, instrument position) and a set of 35mm slides of young players demonstrating those problems. In timed class exercises, trainees briefly viewed slides and recorded their diagnoses on the checklists, which were reviewed and evaluated later in the training session. In 1978, William H. Sanders adapted Froseth's program for delivery using the PLATO IV system. Sanders transferred the slides to microfiche for rear-projection through the PLATO IV terminal's plasma display. In timed drills, trainees viewed the slides, then filled in the checklists by touching them on the display. The program gave immediate feedback and kept aggregate records. Trainees could vary the timing of the exercises and repeat them whenever they wished. Sanders and Froseth subsequently conducted a study to compare traditional classroom delivery of the program to delivery using PLATO. The results showed no significant difference between the delivery methods for a) student post-test performance and b) their attitudes toward the training materials.
However, students using the computer appreciated the flexibility to set their own practice hours, completed significantly more practice exercises, and did so in significantly less time. Musical instrument identification In 1967, Allvin and Kuhn used a four-channel tape recorder interfaced to a computer to present pre-recorded models to judge sight-singing performances. In 1969, Ned C. Deihl and Rudolph E. Radocy conducted a computer-assisted instruction study in music that included discriminating aural concepts related to phrasing, articulation, and rhythm on the clarinet. They used a four-track tape recorder interfaced to a computer to provide pre-recorded audio passages. Messages were recorded on three tracks and inaudible signals on the fourth track with two hours of play/record time available. This research further demonstrated that computer-controlled audio with four-track tape was possible. In 1979, Williams used a digitally controlled cassette tape recorder that had been interfaced to a minicomputer (Williams, M.A. "A comparison of three approaches to the teaching of auditory-visual discrimination, sight singing and music dictation to college music students: A traditional approach, a Kodaly approach, and a Kodaly approach augmented by computer-assisted instruction," University of Illinois, unpublished). This device worked, yet was slow with variable access times. In 1981, Nan T. Watanabe researched the feasibility of computer-assisted music instruction using computer-controlled pre-recorded audio. She surveyed audio hardware that could interface with a computer system. Random-access audio devices interfaced to PLATO IV terminals were also available. There were issues with sound quality due to dropouts in the audio. Regardless, Watanabe deemed consistent fast access to audio clips critical to the study design and selected this device for the study. Watanabe’s computer-based drill-and-practice program taught elementary music education students to identify musical instruments by sound. Students listened to randomly selected instrument sounds, identified the instrument they heard, and received immediate feedback. Watanabe found no significant difference in learning between the group who learned through computer-assisted drill programs and the group receiving traditional instruction in instrument identification. The study did, however, demonstrate that use of random-access audio in computer-assisted instruction in music was feasible. The Illinois Technology-based music project By 1988, with the spread of micro-computers and their peripherals, the University of Illinois School of Music PLATO Project was renamed The Illinois Technology-based Music Project. Researchers subsequently explored the use of emerging, commercially available technologies for music instruction until 1994. Influences and impacts Educators and students used the PLATO System for music instruction at other educational institutions including Indiana University, Florida State University, and the University of Delaware. Many alumni of the University of Illinois School of Music PLATO Project gained early hands-on experience in computing and media technologies and moved into influential positions in both education and the private sector. 
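Peters' 1969 interface judged pitch with analog filters and a duration counter. A modern software analogue of the same idea is to estimate the fundamental frequency of a sampled tone and compare it with the expected note; the sketch below does this with a simple zero-crossing count. It is a conceptual illustration only, not a description of the original PLATO III hardware.

# Conceptual pitch check: estimate the frequency of a sampled tone by
# counting rising zero crossings, then compare it with the expected pitch.
# This illustrates the idea behind performance judging, not the 1969 hardware.
import math

def estimate_frequency(samples, sample_rate):
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    duration = len(samples) / sample_rate
    return crossings / duration          # rising zero crossings per second

def judge(samples, sample_rate, expected_hz, tolerance_hz=5.0):
    measured = estimate_frequency(samples, sample_rate)
    return abs(measured - expected_hz) <= tolerance_hz, measured

# Synthesize one second of a 440 Hz tone and judge it against A4 = 440 Hz.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
ok, measured = judge(tone, rate, expected_hz=440.0)
print(ok, round(measured, 1))   # True, ~440.0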
The goal of this system was to provide tools for music educators to use in the development of instructional materials, which might possibly include music dictation drills, automatically graded keyboard performances, envelope and timbre ear-training, interactive examples or labs in musical acoustics, and composition and theory exercises with immediate feedback. One ear-training application, Ottaviano, became a required part of certain undergraduate music theory courses at Florida State University in the early 1980s. Another peripheral was the Votrax speech synthesizer, and a "say" instruction (with "saylang" instruction to choose the language) was added to the Tutor programming language to support text-to-speech synthesis using the Votrax. Other Efforts One of CDC's greatest commercial successes with PLATO was an online testing system developed for National Association of Securities Dealers (now the Financial Industry Regulatory Authority), a private-sector regulator of the US securities markets. During the 1970s Michael Stein, E. Clarke Porter and PLATO veteran Jim Ghesquiere, in cooperation with NASD executive Frank McAuliffe, developed the first "on-demand" proctored commercial testing service. The testing business grew slowly and was ultimately spun off from CDC as Drake Training and Technologies in 1990. Applying many of the PLATO concepts used in the late 1970s, E. Clarke Porter led the Drake Training and Technologies testing business (today Thomson Prometric) in partnership with Novell, Inc. away from the mainframe model to a LAN-based client server architecture and changed the business model to deploy proctored testing at thousands of independent training organizations on a global scale. With the advent of a pervasive global network of testing centers and IT certification programs sponsored by, among others, Novell and Microsoft, the online testing business exploded. Pearson VUE was founded by PLATO/Prometric veterans E. Clarke Porter, Steve Nordberg and Kirk Lundeen in 1994 to further expand the global testing infrastructure. VUE improved on the business model by being one of the first commercial companies to rely on the Internet as a critical business service and by developing self-service test registration. The computer-based testing industry has continued to grow, adding professional licensure and educational testing as important business segments. A number of smaller testing-related companies also evolved from the PLATO system. One of the few survivors of that group is The Examiner Corporation. Dr. Stanley Trollip (formerly of the University of Illinois Aviation Research Lab) and Gary Brown (formerly of Control Data) developed the prototype of The Examiner System in 1984. In the early 1970s, James Schuyler developed a system at Northwestern University called HYPERTUTOR as part of Northwestern's MULTI-TUTOR computer assisted instruction system. This ran on several CDC mainframes at various sites. Between 1973 and 1980, a group under the direction of Thomas T. Chen at the Medical Computing Laboratory of the School of Basic Medical Sciences at the University of Illinois at Urbana Champaign ported PLATO's TUTOR programming language to the MODCOMP IV minicomputer. Douglas W. Jones, A.B. Baskin, Tom Szolyga, Vincent Wu and Lou Bloomfield did most of the implementation. This was the first port of TUTOR to a minicomputer and was largely operational by 1976. In 1980, Chen founded Global Information Systems Technology of Champaign, Illinois, to market this as the Simpler system. 
GIST eventually merged with the Government Group of Adayana Inc. Vincent Wu went on to develop the Atari PLATO cartridge. CDC eventually sold the "PLATO" trademark and some courseware marketing segment rights to the newly formed The Roach Organization (TRO) in 1989. In 2000 TRO changed its name to PLATO Learning and continued to sell and service PLATO courseware running on PCs. In late 2012, PLATO Learning brought its online learning solutions to market under the name Edmentum. CDC continued development of the basic system under the name CYBIS (CYber-Based Instructional System) after selling the trademarks to Roach, in order to service its commercial and government customers. CDC later sold off its CYBIS business to University Online, a descendant of IMSATT; University Online was later renamed VCampus. The University of Illinois also continued development of PLATO, eventually setting up a commercial on-line service called NovaNET in partnership with University Communications, Inc. CERL was closed in 1994, with the maintenance of the PLATO code passing to UCI. UCI was later renamed NovaNET Learning, which was bought by National Computer Systems (NCS). Shortly after that, NCS was bought by Pearson, and after several name changes now operates as Pearson Digital Learning. Other versions In South Africa During the period when CDC was marketing PLATO, the system began to be used internationally. South Africa was one of the biggest users of PLATO in the early 1980s. Eskom, the South African electrical power company, had a large CDC mainframe at Megawatt Park in the northwest suburbs of Johannesburg. This computer was mainly used for management and data processing tasks related to power generation and distribution, but it also ran the PLATO software. The largest PLATO installation in South Africa during the early 1980s was at the University of the Western Cape, which served the "native" population and at one time had hundreds of PLATO IV terminals, all connected by leased data lines back to Johannesburg. There were several other installations at educational institutions in South Africa, among them Madadeni College in the Madadeni township just outside Newcastle. This was perhaps the most unusual PLATO installation anywhere. Madadeni had about 1,000 students, all of them members of the indigenous population and 99.5% of Zulu ancestry. The college was one of 10 teacher preparation institutions in kwaZulu, most of them much smaller. In many ways Madadeni was very primitive: none of the classrooms had electricity, and there was only one telephone for the whole college, which one had to crank for several minutes before an operator might come on the line. An air-conditioned, carpeted room with 16 computer terminals was therefore a stark contrast to the rest of the college, and at times the only way a person could communicate with the outside world was through PLATO term-talk. For many of the Madadeni students, most of whom came from very rural areas, the PLATO terminal was their first encounter with any kind of electronic technology; many of the first-year students had never seen a flush toilet before. There was initially skepticism that these technologically inexperienced students could effectively use PLATO, but those concerns were not borne out. Within an hour or less most students were using the system proficiently, mostly to learn math and science skills, although a lesson that taught keyboarding skills was one of the most popular.
A few students even used on-line resources to learn TUTOR, the PLATO programming language, and a few wrote lessons on the system in the Zulu language. PLATO was also used fairly extensively in South Africa for industrial training. Eskom successfully used PLM (PLATO learning management) and simulations to train power plant operators, South African Airways (SAA) used PLATO simulations for cabin attendant training, and there were a number of other large companies as well that were exploring the use of PLATO. The South African subsidiary of CDC invested heavily in the development of an entire secondary school curriculum (SASSC) on PLATO, but unfortunately as the curriculum was nearing the final stages of completion, CDC began to falter in South Africa—partly because of financial problems back home, partly because of growing opposition in the United States to doing business in South Africa, and partly due to the rapidly evolving microcomputer, a paradigm shift that CDC failed to recognize. Cyber1 In August 2004, a version of PLATO corresponding to the final release from CDC was resurrected online. This version of PLATO runs on a free and open-source software emulation of the original CDC hardware called Desktop Cyber. Within six months, by word of mouth alone, more than 500 former users had signed up to use the system. Many of the students who used PLATO in the 1970s and 1980s felt a special social bond with the community of users who came together using the powerful communications tools (talk programs, records systems and notesfiles) on PLATO. The PLATO software used on Cyber1 is the final release (99A) of CYBIS, by permission of VCampus. The underlying operating system is NOS 2.8.7, the final release of the NOS operating system, by permission of Syntegra (now British Telecom [BT]), which had acquired the remainder of CDC's mainframe business. Cyber1 runs this software on the Desktop Cyber emulator. Desktop Cyber accurately emulates in software a range of CDC Cyber mainframe models and many peripherals. Cyber1 offers free access to the system, which contains over 16,000 of the original lessons, in an attempt to preserve the original PLATO communities that grew up at CERL and on CDC systems in the 1980s. The load average of this resurrected system is about 10–15 users, sending personal and notesfile notes, and playing inter-terminal games such as Avatar and Empire (a Star Trek-like game), which had both accumulated more than 1.0 million contact hours on the original PLATO system at UIUC. See also PLATO games The Mother of All Demos, in 1968 References Further reading External links . Discusses his relationship with Control Data Corporation (CDC) during the development of PLATO, a computer-assisted instruction system. He describes the interest in PLATO of Harold Brooks, a CDC salesman, and his help in procuring a 1604 computer for Bitzer's use. Recalls the commercialization of PLATO by CDC and his disagreements with CDC over marketing strategy and the creation of courseware for PLATO. . A program officer at the National Science Foundation (NSF) describes the impact of Don Bitzer and the PLATO system, grants related to the classroom use of computers, and NSF's Regional Computing Program. . . . Archival collection containing internal reports and external reports and publications related to the development of PLATO and the operations of CERL. . 
The CBE series documents CDC’s objective of creating, marketing and distributing PLATO courseware internally within various CDC departments and divisions, and externally. Cyber1: online preservation of the PLATO system. Computer-based Education Research Laboratory PLATO CDC software History of electronic engineering
308054
https://en.wikipedia.org/wiki/Presentation%20program
Presentation program
In computing, a presentation program (also called presentation software) is a software package used to display information in the form of a slide show. It has three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images and media clips, and a slide-show system to display the content. Presentation software can be viewed as enabling a functionally-specific category of electronic media, with its own distinct culture and practices as compared to traditional presentation media (such as blackboards, whiteboards and flip charts). Presentations in this mode of delivery have become pervasive in many aspects of business communication, especially in business planning, as well as in academic-conference and professional conference settings, and in the knowledge economy generally, where ideas are a primary work output. Presentations may also feature prominently in political settings, especially in workplace politics, where persuasion is a central determinant of group outcomes. Most modern meeting rooms and conference halls are configured to include presentation electronics, such as projectors suitable for displaying presentation slides, often driven by the presenter's own laptop, under direct control of the presentation program used to develop the presentation. Often a presenter will present a lecture using the slides as a visual aid both for the presenter (to track the lecture's coverage) and for the audience (especially when an audience member mishears or misunderstands the verbal component). Generally in presentations, the visual material is considered supplemental to a strong aural presentation that accompanies the slide show, but in many cases, such as statistical graphics, it can be difficult to convey essential information other than by visual means; additionally, a well-designed infographic can be extremely effective in a way that words are not. Endemic over-reliance on slides with low information density and with a poor accompanying lecture has given presentation software a negative reputation as sometimes functioning as a crutch for the poorly informed or the poorly prepared. History In the late 1970s, presentation slides were typically produced on dedicated computer graphics workstations from specialist vendors such as Autographix and Dicomed. It became quite easy to make last-minute changes compared to traditional typesetting and pasteup. It was also a lot easier to produce a large number of slides in a small amount of time. However, these workstations also required skilled operators, and a single workstation represented an investment of $50,000 to $200,000 (in 1979 dollars). In the mid-1980s developments in the world of computers changed the way presentations were created. Inexpensive, specialized applications now made it possible for anyone with a PC to create professional-looking presentation graphics. Originally these programs were used to generate 35 mm slides, to be presented using a slide projector. As these programs became more common in the late 1980s several companies set up services that would accept the shows on diskette and create slides using a film recorder or print transparencies. In the 1990s dedicated LCD-based screens that could be placed on the projectors started to replace the transparencies, and by the early 2000s they had almost all been replaced by video projectors. The first commercial computer software specifically intended for creating WYSIWYG presentations was developed at Hewlett Packard in 1979 and called BRUNO and later HP-Draw. The first microcomputer-based presentation software was Cromemco's Slidemaster, developed by John F. 
Dunn and released by Cromemco in 1981. The first software displaying a presentation on a personal computer screen was VCN ExecuVision, developed in 1982. This program allowed users to choose from a library of images to accompany the text of their presentation. Harvard Graphics was introduced for MS-DOS and Lotus Freelance Graphics was introduced for DOS and OS/2 in 1986. PowerPoint was introduced for the Macintosh computer in 1987. Features A presentation program is intended to help both the speaker, by giving easier access to their ideas, and the participants, by providing visual information that complements the talk. There are many different types of presentations, including professional (work-related), educational, entertainment, and general communication. Presentation programs can either supplement or replace the use of older visual-aid technology, such as pamphlets, handouts, chalkboards, flip charts, posters, slides and overhead transparencies. Text, graphics, movies, and other objects are positioned on individual pages or "slides" or "foils". The "slide" analogy is a reference to the slide projector, a device that has become somewhat obsolete due to the use of presentation software. Slides can be printed, or (more usually) displayed on-screen and navigated through at the command of the presenter. An entire presentation can be saved in video format. Individual slides can also be saved as images in any common image file format for future reference. Transitions between slides can be animated in a variety of ways, as can the emergence of elements on a slide itself. A presentation typically operates under several constraints, the most important being the limited time available to present consistent information. Many presentation programs come with pre-designed images (clip art) and/or have the ability to import graphic images, such as Visio and Edraw Max. Some tools also have the ability to search and import images from Flickr or Google directly from the tool. Custom graphics can also be created in other programs such as Adobe Photoshop or GIMP and then exported. The concept of clip art originated with the image library that came as a complement to VCN ExecuVision, beginning in 1983. With the growth of digital photography and video, many programs that handle these types of media also include presentation functions for displaying them in a similar "slide show" format, for example iPhoto. These programs allow groups of digital photos to be displayed in a slide show with options such as selecting transitions, choosing whether or not the show stops at the end or continues to loop, and including music to accompany the photos. Similar to programming extensions for an operating system or web browser, "add ons" or plugins for presentation programs can be used to enhance their capabilities. Apps can enable a smartphone to act as a remote control for slideshow presentations, offering slide previews, speaker notes, a timer, a stopwatch, a pointer, jumping directly to a given slide, blanking the screen and more. Presentation programs also offer interactive integrated hardware elements designed to engage an audience (e.g. audience response systems, second screen applications) or to facilitate presentations across different geographical locations through the internet (e.g. web conferencing). Hardware devices such as laser pointers and interactive whiteboards can ease the job of a live presenter. See also Office suite Productivity software References Further reading Farkas, David K. 
(2006) "Toward a Better Understanding of PowerPoint Deck Design" Information Design Journal + Document Design 4(2): pp 162–171. Good, Lance & Bederson, Benjamin B. (2002) "Zoomable User Interfaces as a Medium for Slide Show Presentations" Journal on Information Visualization 1(1): pp 35–49. Gross, Alan G. & Harmon, Joseph E. (2009) "The Structure of PowerPoint Presentations: The Art of Grasping Things Whole" IEEE Transactions on Professional Communication 52(2): pp 121–137. Knoblauch, Hubert. (2014) "PowerPoint, Communication, and the Knowledge Society". Cambridge University Press. Tufte, Edward R. (2006) "The Cognitive Style of PowerPoint: Pitching Out Corrupts Within" 'Graphics Press LLC'', Cheshire, USA. External links
19763287
https://en.wikipedia.org/wiki/VP8
VP8
VP8 is an open and royalty-free video compression format created by On2 Technologies as a successor to VP7 and owned by Google from 2010. In May 2010, after the purchase of On2 Technologies, Google provided an irrevocable patent promise on its patents for implementing the VP8 format, and released a specification of the format under the Creative Commons Attribution 3.0 license. That same year, Google also released libvpx, the reference implementation of VP8, under the revised BSD license. Opera, Firefox, Chrome, and Chromium support playing VP8 video in the HTML5 video tag. Internet Explorer officially supports VP8 with a separate codec. According to Google, VP8 is mainly used in connection with WebRTC and as a format for short looped animations, as a replacement for the Graphics Interchange Format (GIF). VP8 can be multiplexed into the Matroska-based container format WebM along with Vorbis and Opus audio. The image format WebP is based on VP8's intra-frame coding. VP8's direct successor, VP9, and the emerging royalty-free internet video format AV1 from the Alliance for Open Media (AOMedia) are based on VP8. Features VP8 only supports progressive scan video signals with 4:2:0 chroma subsampling and 8 bits per sample. In its first public version, On2's VP8 implementation supports multi-core processors with up to 64 cores simultaneously. At least in the reference implementation (as of August 2011), VP8 is comparatively poorly suited to high resolutions (HD). With only three reference frame buffers needed, VP8 allows for decoder implementations with a relatively small memory footprint. The format features a pure intra mode, i.e. using only independently coded frames without temporal prediction, to enable random access in applications like video editing. Technology VP8 is a traditional block-based transform coding format. It has much in common with H.264, e.g. some prediction modes. At the time of VP8's first presentation, On2 cited the in-loop filter and the Golden Frames among the novelties of this iteration. The first definition of such a filter is already found in the H.263 standard, though, and Golden Frames were already in use in VP5 and VP7. The discrete cosine transform (DCT) on 4×4 blocks and the Walsh–Hadamard transform (WHT) serve as basic frequency transforms. A maximum of three frames can be referenced for temporal prediction: the last Golden Frame (which may be an intra frame), the alternate reference frame, and the directly preceding frame. The so-called alternate reference frames (altref) can serve as reference-only frames, since their display can be deactivated. In this case the encoder can fill them with arbitrary useful image data, even from future frames, and they thereby serve the same purpose as the B-frames of the MPEG formats. Similar macroblocks can be assigned to one of up to four (even spatially disjoint) segments and thereby share parameters like the reference frame used, quantizer step size, or filter settings. VP8 offers two different adjustable deblocking filters that are integrated into the codec loops (in-loop filtering). Many coding tools use probabilities that are calculated continuously from recent context, starting afresh at each intra frame. Macroblocks can comprise 4×4, 8×8, or 16×16 samples. Motion vectors have quarter-pixel precision. History VP8 was first released by On2 Technologies on September 13, 2008, as On2 TrueMotion VP8, replacing its predecessor, VP7. After Google acquired On2 in February 2010, calls for Google to release the VP8 source code were made. 
Most notably, the Free Software Foundation issued an open letter on March 12, 2010, asking Google to gradually replace the usage of Adobe Flash Player and H.264 on YouTube with a mixture of HTML5 and a freed VP8. Word of an impending open-source release announcement got out on April 12, 2010. On May 19, at its Google I/O conference, Google released the VP8 codec software under a BSD-like license and the VP8 bitstream format specification under an irrevocable free patent license. This made VP8 the second product from On2 Technologies to be opened, following their donation of the VP3 codec in 2002 to the Xiph.Org Foundation, from which they derived the Theora codec. In February 2011, MPEG LA invited patent holders to identify patents that may be essential to VP8 in order to form a joint VP8 patent pool. As a result, in March the United States Department of Justice (DoJ) started an investigation into MPEG LA for its role in possibly attempting to stifle competition. In July 2011, MPEG LA announced that 12 patent holders had responded to its call to form a VP8 patent pool, without revealing the patents in question, and despite On2 having gone to great lengths to avoid such patents. In November 2011, the Internet Engineering Task Force published the informational RFC 6386, VP8 Data Format and Decoding Guide. In March 2013, MPEG LA announced that it had dropped its effort to form a VP8 patent pool after reaching an agreement with Google to license the patents that it alleged "may be essential" for VP8 implementation, and granted Google the right to sub-license these patents to any third-party user of VP8 or VP9. This deal cleared the way for possible MPEG standardisation as its royalty-free internet video codec, after Google submitted VP8 to the MPEG committee in January 2013. In March 2013, Nokia asserted a patent claim against HTC and Google for the use of VP8 in Android in a German court; however, on August 5, 2013 the WebM Project announced that the German court had ruled that VP8 does not infringe Nokia's patent. Nokia has made an official intellectual property rights (IPR) declaration to the IETF with respect to the VP8 Data Format and Decoding Guide, listing 64 granted patents and 22 pending patent applications. Implementations libvpx The reference implementation of a VP8 (and VP9) codec is found in the programming library libvpx, which is released as free software. It offers both one-pass and two-pass encoding modes; the one-pass mode is known to be broken, not offering effective control over the target bitrate. Currently, libvpx is the only software library capable of encoding VP8 video streams. An encoder based on the x264 framework called xvp8 is under development by the x264 team. Encoding A Video for Windows wrapper of the VP8 codec based on the Google VP8 library (FourCC: VP80) is available. The WebM Project hardware team in Finland released an RTL hardware encoder for VP8 that is available at no cost to semiconductor manufacturers. The Nvidia Tegra mobile chipsets have full VP8 hardware encoding and decoding (since Tegra 4). The Nexus 5 could use hardware encoding. Decoding libvpx is capable of decoding VP8 video streams. On July 23, 2010, Fiona Glaser, Ronald Bultje, and David Conrad of the FFmpeg Team announced the ffvp8 decoder. Through testing they determined that ffvp8 was faster than Google's own libvpx decoder. The WebM Project hardware team released an RTL hardware decoder for VP8, which is available to semiconductor companies at zero cost. 
TATVIK Technologies announced a VP8 decoder that is optimized for the ARM Cortex-A8 processor. Marvell's ARMADA 1500-mini chipset has VP8 SD and HD hardware decoding support (used in Chromecast). Intel has full VP8 decoding support built into their Bay Trail chipsets. Intel Broadwell also adds VP8 hardware decoding support. Operating system support Related formats WebM Also on May 19, 2010, the WebM Project was launched, featuring contributions from "Mozilla, Opera, Google and more than forty other publishers, software and hardware vendors" in a major effort to use VP8 as the video format for HTML5. In the WebM container format, the VP8 video is used with Vorbis or Opus audio. Internet Explorer 9 will support VP8 video playback if the proper codec is installed. Android is WebM-enabled from version 2.3 - Gingerbread. Since Android 4.0, VP8 could be read inside mkv and WebM could be streamed. Adobe also announced that the Flash Player will support VP8 playback in a future release. WebP On September 30, 2010 Google announced WebP, their new image format, on the Chromium blog. WebP is based on VP8's intra-frame coding and uses a container based on Resource Interchange File Format (RIFF). Comparison with H.264 While H.264/MPEG-4 AVC contains patented technology and requires licenses from patent holders and limited royalties for hardware, Google has irrevocably released the VP8 patents it owns under a royalty-free public license. According to a comparison of VP8 (encoded with the initial release of libvpx) and H.264 conducted by StreamingMedia, it was concluded that "H.264 may have a slight quality advantage, but it's not commercially relevant" and that "Even watching side-by-side (which no viewer ever does), very few viewers could tell the difference". They also stated that "H.264 has an implementation advantage, not a technology advantage." Google claims that VP8 offers the "highest quality real-time video delivery" and Libvpx includes a mode where the maximum CPU resources possible will be used while still keeping the encoding speed almost exactly equivalent to the playback speed (realtime), keeping the quality as high as possible without lag. On the other hand, a review conducted by streamingmedia.com in May 2010 concluded that H.264 offers slightly better quality than VP8. In September 2010 Fiona Glaser, a developer of the x264 encoder, gave several points of criticism for VP8, claiming that its specification was incomplete, and the performance of the encoder's deblocking filter was inferior to x264 in some areas. In its specification, VP8 should be a bit better than H.264 Baseline Profile and Microsoft's VC-1. Encoding is somewhere between Xvid and VC-1. Decoding is slower than FFmpeg's H.264, but this aspect can hardly be improved due to the similarities to H.264. Compression-wise, VP8 offers better performance than Theora and Dirac. According to Glaser, the VP8 interface lacks features and is buggy, and the specification is not fully defined and could be considered incomplete. Much of the VP8 code is copy-pasted , and since the source constitutes the actual specification, any bugs will also be defined as something that has to be implemented to be in compliance. In 2010, it was announced that the WebM audio/video format would be based on a profile of the Matroska container format together with VP8 video and Vorbis audio. 
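As an illustration of the encoder discussion above, the following is a minimal sketch of the C encoding API exposed by libvpx, the reference implementation. The calls shown (vpx_codec_enc_config_default, vpx_codec_enc_init, vpx_codec_encode, vpx_codec_get_cx_data) are libvpx's standard encoder entry points, but the function name encode_one_frame, the target bitrate and the handling of the raw input frame are illustrative assumptions rather than anything specified in this article; error handling and the copying of pixel data into the image buffer are omitted.

#include <vpx/vpx_encoder.h>
#include <vpx/vp8cx.h>

/* Encode one raw I420 frame with the VP8 encoder from libvpx.
   A real encoder would loop over frames and write the output
   packets into a WebM (Matroska) container. */
int encode_one_frame(const unsigned char *i420, int width, int height)
{
    vpx_codec_ctx_t codec;
    vpx_codec_enc_cfg_t cfg;
    vpx_image_t img;

    /* Start from the library's default VP8 configuration. */
    if (vpx_codec_enc_config_default(vpx_codec_vp8_cx(), &cfg, 0))
        return -1;
    cfg.g_w = width;
    cfg.g_h = height;
    cfg.rc_target_bitrate = 1000;                 /* kbit/s, arbitrary choice */

    if (vpx_codec_enc_init(&codec, vpx_codec_vp8_cx(), &cfg, 0))
        return -1;

    /* Allocate a 4:2:0 image; filling img.planes[] from i420 is omitted. */
    vpx_img_alloc(&img, VPX_IMG_FMT_I420, width, height, 1);

    if (vpx_codec_encode(&codec, &img, 0 /* pts */, 1 /* duration */,
                         0, VPX_DL_GOOD_QUALITY))
        return -1;

    /* Drain the compressed packets produced for this frame. */
    vpx_codec_iter_t iter = NULL;
    const vpx_codec_cx_pkt_t *pkt;
    while ((pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL) {
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) {
            /* pkt->data.frame.buf and pkt->data.frame.sz hold the VP8 frame. */
        }
    }

    vpx_img_free(&img);
    vpx_codec_destroy(&codec);
    return 0;
}

In a complete application the compressed packets would be written into a WebM (Matroska) container alongside Vorbis or Opus audio, as described above; the two-pass mode mentioned for libvpx additionally requires a first pass to gather rate-control statistics before encoding.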
See also List of open source codecs References Further reading External links RFC 6386: VP8 Data Format and Decoding Guide (specification, November 2011) The WebM Project Technical Overview Of VP8, An Open Source Video CODEC for the Web – Paper written by Google developers. Fiona Glaser's technical analysis of VP8 esp. as compared to H.264. The VP8 video codec: High compression+low complexity Diary Of An x264 Developer: Announcing the world's fastest VP8 decoder Free video codecs Formerly proprietary software Google software
3451154
https://en.wikipedia.org/wiki/Acorn%20MOS
Acorn MOS
The Machine Operating System (MOS) or OS is a discontinued computer operating system (OS) used in Acorn Computers' BBC computer range. It included support for four-channel sound, graphics, file system abstraction, and digital and analogue input/output (I/O) including a daisy-chained expansion bus. The system was single-tasking, monolithic and non-reentrant. Versions 0.10 to 1.20 were used on the BBC Micro, version 1.00 on the Electron, version 2 on the B+, and versions 3 to 5 in the BBC Master series. The final BBC computer, the BBC A3000, was 32-bit and ran RISC OS, which retained portions of the Acorn MOS architecture and shared a number of characteristics (e.g. "star commands" CLI, "VDU" video control codes and screen modes) with the earlier 8-bit MOS. Versions 0 to 2 of the MOS were 16 KiB in size, written in 6502 machine code, and held in read-only memory (ROM) on the motherboard. The upper quarter of the 16-bit address space (0xC000 to 0xFFFF) is reserved for its ROM code and I/O space. Versions 3 to 5 were still restricted to a 16 KiB address space, but managed to hold more code and hence more complex routines, partly because of the alternative 65C102 central processing unit (CPU) with its denser instruction set plus the careful use of paging. User interface The original MOS versions, from 0 to 2, did not have a user interface per se: applications were expected to forward operating system command lines to the OS on the user's behalf, and the BBC BASIC language ROM, with a 6502 assembler built in, supplied with the BBC Micro, is the default application used for this purpose. The BBC Micro would halt with a Language? error if no ROM was present that advertised to the OS an ability to provide a user interface (such ROMs are called language ROMs). MOS version 3 onwards did feature a simple command-line interface, normally only seen when the CMOS memory did not contain a setting for the default language ROM. Application programs on ROM, and some cassette and disc-based software as well, typically provide a command line, useful for working with file storage such as browsing the currently inserted disc. The OS provides the line entry facility and obeys the commands entered, but the application oversees running the command prompt. Cassette and disc based software typically relies on BBC BASIC's own user interface in order to be loaded; although it is possible to configure a floppy disc to boot without BASIC commands being executed, this was rarely done in practice. In BBC BASIC, OS commands are preceded with an asterisk or passed via the OSCLI keyword, to instruct BASIC to forward that command directly to the OS. This led to the asterisk being the prompt symbol for any software providing an OS command line; MOS version 3 onwards officially uses the asterisk as the command prompt symbol. When referring to an OS command, the asterisk is generally included as part of the name, although strictly only the part after the asterisk is the command. The asterisk was called a "star" and the commands were called "star commands". Unrecognised commands are offered to any service (extension) ROMs; filing system ROMs will often check to see if a file on disc matches that name, much as most other command-line interfaces do. The operating system call OSWORD with accumulator = 0 does, however, offer programs single-line input (with Ctrl-U to clear the line and the cursor-copying keys enabled), with basic character filtering and a line-length limit. 
The MOS command line interpreter features a rather unusual idea: abbreviation of commands. To save typing a dot could be used after the first few characters, such as for and for . was abbreviated to alone. , the command to catalogue (list) a cassette or disc, can be abbreviated down to . Service ROMs 3rd party ROMs generally also support command abbreviation, leading to ambiguity where two service ROMs provide commands which are very similar in name but possibly different in function. In this case, the MOS would prioritise the command from the ROM in the higher numbered ROM slot, e.g., 7 has precedence over 6. Some 3rd party suppliers would get around this by prefixing their star commands with other letters. For example, Watford Electronics ROMS would have their star commands prefixed with W thus making them unique. Extension The lower 16 KiB of the ROM map (0x8000 to 0xBFFF) is reserved for the active Sideways address space paged bank. The Sideways system on the BBC Micro allows for one ROM at a time from sockets on the motherboard (or expansion boards) to be switched into the main memory map. Software can be run from ROM this way (leaving the RAM free of user program code, for more workspace) and the OS can be extended by way of such ROMs. The most prevalent sideways ROM after BASIC is the Acorn Disc Filing System used to provide floppy disc support to the machine. During a reset, every paged ROM is switched in and asked how much public and private workspace it needs. Each ROM is allocated a chunk of private workspace that remains allocated at all times, and a single block of public workspace, equal to the size of the largest request, is made available to the active ROM. During operation, the paged area is rapidly switched between ROMs when file system commands are issued and unrecognised commands are put to the OS. MOS allocates a 3.5 KiB block of memory (0x0000 to 0x0DFF) from the bottom of the memory map for operating system and language ROM workspace: On a cassette-only machine, 0x0E00 is the start of user program memory. With OS extension ROMs fitted such as the a filing system ROM, more memory is allocated above this point; DFS ROMs generally use another 2.75 KiB to cache the disc catalogue and manage random access buffers. A network filing system ROM (for Econet) allocates another 0.5 KiB on top of this. This is a serious problem because MOS does not support relocation of machine code, which must be run from the address at which it was assembled, so some programs which assumed a fixed start of user program memory could overwrite MOS workspace. The problem was alleviated in versions 3 to 5 by allowing ROMs to allocate workspace in an alternative RAM bank at 0xC000 to 0xDFFF which was present in Master series computers, though old ROMs could continue to allocate blocks of main memory. The OS also maintains a vector table of all its calls which can be updated to hook any OS calls for user extension. By altering or 'hooking' these vectors, developers could substitute their own routines for those provided as defaults by the MOS. Text, graphics, printing The MOS permits textual output intended for the screen to be directed instead to the printer, or both at once, allowing for very trivial printing support for plain text. Graphics printing is not supported and has to be written separately. Graphics and in general all screen output is handled in a very unusual way. 
The ASCII control characters are almost entirely given new significance under MOS: known as the "VDU drivers", because the documentation described them in relation to the VDU statement in BBC BASIC, they are interpreted as video control characters. VDU 30 (i.e. ASCII 30) moves the cursor to (0, 0), VDU 4 and 5 select whether text should be drawn at the graphics or text cursor, VDU 12 clears the screen and VDU 14 and 15 turn scroll lock on and off. Thus, pressing Ctrl-L will clear the screen and Ctrl-N will enable scroll lock. VDU 2 and 3 toggle whether screen output is echoed to the printer. The BBC BASIC VDU statement is equivalent to the conventional BASIC PRINT CHR$ construct, and many of the control codes (such as 12 for "clear screen" and 7 for "beep") have the same functions as on other contemporary machines. Many more control characters take parameters: one or more characters that follow are used solely for their bit value as a parameter and not as a control code. VDU 19 handles palette remapping; the following five bytes represent the palette entry, the desired colour and three reserved bytes. VDU 31 moves the text cursor to the location held in the following two bytes. VDU 17 sets the text colour and VDU 18 the graphics colour. VDU 25 uses the succeeding five bytes to move the graphics cursor and plot solid and dashed lines, dots and filled triangles, the documented extent of graphics in MOS 0 and 1. The first byte is the command code, followed by the x and y co-ordinates as two byte pairs. Other graphics functions, such as horizontal line fill bounded by a given colour, were available through undocumented or poorly documented command codes. BBC BASIC contained aliases for the commonly used VDU codes (such as GCOL for VDU 18 or PLOT for VDU 25). Some statements were direct equivalents to VDU codes, such as CLS for VDU 12. Some statements were less exact equivalents, as they incorporated functionality specific to BASIC as well as calling the OS routines; for example the MODE statement would set the screen mode and adjust the BASIC system variable HIMEM according to the amount of memory the new mode left available for BASIC, while VDU 22 would set the screen mode only, without altering HIMEM. This allowed a programmer to allocate a block of memory from BASIC (for example to load machine code routines into) by lowering the value of HIMEM at the start of a program, and still be free to switch screen modes without deallocating it as a side effect. There is one operating system command to write a character, OSWRCH, which is responsible for all text and graphics. For example, moving the cursor to (10, 15) needs, in 6502 assembler:
LDA #31: JSR OSWRCH \ move text cursor
LDA #10: JSR OSWRCH \ x-coordinate
LDA #15: JSR OSWRCH \ y-coordinate
(LDA loads a value into the accumulator; JSR is "jump to subroutine".) On the third OS call, the cursor will move. The following code would draw a line from (0, 0) to (0, +100):
LDA #25: JSR OSWRCH \ begin "PLOT" (ASCII 25) command
LDA #4: JSR OSWRCH \ command k=4, or move absolute
LDA #0: JSR OSWRCH: JSR OSWRCH: JSR OSWRCH: JSR OSWRCH \ send (0, 0) as low, high byte pairs
LDA #25: JSR OSWRCH \ begin PLOT
LDA #1: JSR OSWRCH \ k=1 - draw relative
LDA #0: JSR OSWRCH: JSR OSWRCH \ x = 0
LDA #100: JSR OSWRCH \ y = 100 (low byte)
LDA #0: JSR OSWRCH \ high byte
BBC BASIC allows performing the above as any of the following:
VDU 25, 4, 0; 0; 25, 4, 100; 0;
PRINT CHR$(25); CHR$(4); CHR$(0); ... etc.
PLOT 4, 0, 0: PLOT 1, 0, 100
MOVE 0, 0: DRAW 0, 100: REM absolute co-ords only!
OSWRCH=&FFEE: A%=25: CALL OSWRCH: A%=4: CALL OSWRCH: A%=0: CALL OSWRCH ... etc.
Graphics in the Acorn MOS use a virtual graphics resolution of 1280×1024, with pixel positions mapped to the nearest equivalent pixel in the current graphics mode. Switching video resolution will not affect the shape, size or position of graphics drawn, even with completely different pixel metrics in the new mode, because this is all accounted for by the OS. MOS does provide two other OS calls that handle text output: OSNEWL and OSASCI. OSNEWL writes a line feed and carriage return to the current output stream. OSASCI forwards all characters directly to OSWRCH except for carriage return, which triggers a call to OSNEWL instead. The precise code for OSASCI and OSNEWL (five lines of 6502 assembler) is documented in the BBC Micro User Guide. MOS implements character recognition so that text printed on screen in the system font can be selected with the arrow keys and input with the key as though it was being typed. To activate screen editing the user moves the hardware cursor to the text to be read and the OS displays a second cursor in software at the original position. Pressing copies one character from the hardware cursor to the software cursor and advances both, so that holding the key down copies a section of the text, the cursors wrapping around the vertical edges of the screen as necessary. If the screen scrolls during editing, the hardware cursor's position is adjusted to follow the text. The user can make changes to the text during the copy, and user-defined characters are recognised in graphics modes. Screen editing is terminated when or are pressed, which have their usual effects. Character recognition is made available to users in the API with a call to read the character at the current cursor position. Sound Sound generation is carried out through another OS call, OSWORD, which handles a variety of tasks enumerated via a task code placed into the accumulator. All OSWORD calls bear a parameter block used to send and receive multiple data; the address of this block is passed in the X and Y registers, with the low byte in X and the high byte in Y. There are four buffered sound channels, three melodic and one noise, based on the sound chip found in the BBC Micro. There is only one waveform for melodic channels; the supported note parameters are pitch, duration, amplitude, envelope selection and various control options. For the amplitude parameter, a zero or negative value sets a static amplitude, and a positive value selects an amplitude and pitch envelope (a predefined temporal variation) to apply to the note. Control parameters are passed through the channel parameter, and include flush (the buffer is cleared and the channel silenced before the note is played), synchronise count (as soon as the same sync count is received for that many channels, all the synchronised notes are played together), and control over the Speech system upgrade where fitted. OSWORD handles many functions other than sound, many of which do not have direct support in BASIC. They may be accessed from BASIC by setting up the parameter block, loading its address into X% and Y% and the task code into A%, and then calling the routine. Other I/O and second processor support The BBC Micro had support for a second processor connected via the Tube, which allowed direct access to the system bus. The driver code for the Tube interface is not held in the MOS, usually being supplied by an external service ROM. 
The OS has calls to handle reading and writing to all I/O (ports and screen memory) and programmers are strongly advised to use these by the Acorn documentation. The reason for this is that when a second processor is installed, user software is run from the separate memory map on the far side of the Tube processor bus, and direct access to memory-mapped I/O registers and video memory is impossible. However, for the sake of performance, many apps including many games, write directly to main address space for I/O, and hence crash or give a blank screen if a 6502 second processor is attached. One such performance-critical area is sprite support: BBC Micro hardware does not support sprites, and games must implement sprites in software. In practice, the widespread use of direct access in place of the OS calls very rarely caused problems. Second processor units were expensive and very little software was written to make use of them, so few people bought them, and those who did have them could simply switch them off or unplug the cable if a problem arose. The MOS contains two built-in file systems: cassette and ROM. These are quite similar (try , , with a suitable ROM installed) and share a great deal of code. They feature a rudimentary copy protection mechanism where a file with a certain flag set cannot be loaded except to execute it. (Before Amstrad's launch of a mass-market twin cassette recorder in 1987, most home users did not have facilities to dub cassettes without loading the files into the computer for re-saving.) The Advanced Disc Filing System (ADFS), installed as standard in the Master series, has a similar mechanism. Versions Releases 0 and 1 Versions for the BBC Micro family, starting at 0.10 and finishing at 1.20. Confusingly the Electron shipped with version 1.00 despite being released after the BBC Micro's version 1.20, because it was the first release of a ROM for the electron. The MOS version number was not intended as an API definition: the Electron ROM was not "based on" the BBC Micro ROM version 1.0 in any sense. Release 2 This version is for the BBC Model B+, essentially the same as MOS 1.20 except with the addition of support for the sideways and shadow RAM present on the B+. Releases 3 to 5 MOS 3 to MOS 5 shipped with the BBC Master Series systems, in the Master 128, Master ET, and Master Compact models respectively. The initial release of MOS 3 expanded upon the facilities provided in MOS 2 on the B+ to support additional hardware, provide a command line facility and extend the VDU driver code with enhanced graphics plotting abilities. Two notable versions were made public: version 3.20 being the most common, and version 3.50 (although this had more functionality and bug fixes it was not 100% compatible with some popular applications software so was offered as an optional upgrade only). MOS 4 was a stripped down version of MOS 3 intended for the similarly minimized Master ET, and a few minor bugs fixed. MOS 5 shipped with the Master Compact, and was much altered with some functions removed or highly amended. Credits With the exception of MOS 3.50 where the space was reclaimed for more code, the area normally hidden by the input/output memory locations (the 768 bytes from 0xFC00-0xFEFF inclusive) in the MOS ROM contained a list of names of contributors to the system. This could be recovered by extracting the ROM and reading its contents in an EPROM programmer. 
Those who did not have such a device could access the ROM on a Master by setting a test bit of an access control register, then using a machine-code program to copy the ROM directly to text-mode screen memory. The full text of the credit string in MOS 1.20 is as follows; no spaces occur after the commas to save memory: "(C) 1981 Acorn Computers Ltd.Thanks are due to the following contributors to the development of the BBC Computer (among others too numerous to mention):- David Allen,Bob Austin,Ram Banerjee,Paul Bond,Allen Boothroyd,Cambridge,Cleartone,John Coll,John Cox,Andy Cripps,Chris Curry,6502 designers,Jeremy Dion,Tim Dobson,Joe Dunn,Paul Farrell,Ferranti,Steve Furber,Jon Gibbons,Andrew Gordon,Lawrence Hardwick,Dylan Harris,Hermann Hauser,Hitachi,Andy Hopper,ICL,Martin Jackson,Brian Jones,Chris Jordan,David King,David Kitson,Paul Kriwaczek,Computer Laboratory,Peter Miller,Arthur Norman,Glyn Phillips,Mike Prees,John Radcliffe,Wilberforce Road,Peter Robinson,Richard Russell,Kim Spence-Jones,Graham Tebby,Jon Thackray,Chris Turner,Adrian Warner,Roger Wilson,Alan Wright." Reception In interviews in 1993 and 2001, Acorn cofounder Hermann Hauser recounted that Microsoft's Bill Gates, having noticed that 1.5 million BBC Micros were sold, tried to sell MS-DOS to Acorn, but Hauser considered that adopting MS-DOS would have been a "retrograde step" compared to retaining Acorn's system. References Notes Watford Electronics, "The Advanced Reference Manual for the BBC Master Series", 1988 Acorn operating systems Discontinued operating systems 1981 software
58403779
https://en.wikipedia.org/wiki/Turochamp
Turochamp
Turochamp is a chess program developed by Alan Turing and David Champernowne in 1948. It was created as part of research by the pair into computer science and machine learning. Turochamp is capable of playing an entire chess game against a human player at a low level of play by calculating all potential moves and all potential player moves in response, as well as some further moves it deems considerable. It then assigns point values to each game state, and selects the move resulting in the highest point value. Turochamp is the earliest known computer game to enter development, but was never completed by Turing and Champernowne, as its algorithm was too complex to be run by the early computers of the time such as the Automatic Computing Engine. Turing attempted to convert the program into executable code for the 1951 Ferranti Mark 1 computer in Manchester, but was unable to do so. Turing played a match against computer scientist Alick Glennie using the program in the summer of 1952, executing it manually step by step, but by his death in 1954 had still been unable to run the program on an actual computer. Champernowne did not continue the project, and the original program design was not preserved. Despite never being run on a computer, the program is a candidate for the first chess program; several other chess programs were designed or proposed around the same time, including another one which Turing unsuccessfully tried to run on the Ferranti Mark 1. The first successful program in 1951, also developed for the Mark 1, was directly inspired by Turochamp, and was capable only of solving "mate-in-two" problems. A recreation of Turochamp was constructed in 2012 for the Alan Turing Centenary Conference. This version was used in a match with chess grandmaster Garry Kasparov, who gave a keynote at the conference. Gameplay Turochamp simulates a game of chess against the player by accepting the player's moves as input and outputting its move in response. The program's algorithm uses a heuristic to determine the best move to make, calculating all potential moves that it can make, then all of the potential player responses that could be made in turn, as well as further "considerable" moves, such as captures of undefended pieces, recaptures, and the capture of a piece of higher value by one of lower value. The program then assigns a point value to each resulting state, then makes the move with the highest resulting points, employing a minimax algorithm to do so. Points are determined based on several criteria, such as the mobility of each piece, the safety of each piece, the threat of checkmate, the value of the player's piece if taken, and several other factors. Different moves are given different point values; for example taking the queen is given 10 points but a pawn only one point, and placing the king in check is given a point or half of a point based on the layout of the board. According to Champernowne, the algorithm is primarily designed around the decision to take a piece or not; according to Turing, the resulting gameplay produces a low level game of chess, which he considered commensurate with his self-described average skill level at the game. History Alan Turing was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. 
Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Beginning in 1941, while working in wartime cryptanalysis at Bletchley Park, Turing discussed with his colleagues the possibility of a machine being able to play chess or perform other "intelligent" tasks, as well as the idea of a computer solving a problem by searching through all possible solutions using a heuristic or algorithm. Some of Turing's cryptanalysis work, such as on the Bombe, was done through this model of a computing machine searching through possibilities for a solution. He continued to discuss the idea with his colleagues throughout the war, such as with economic statistician D. G. Champernowne in 1944, and by 1945 he was convinced that a machine capable of performing general computations would be theoretically capable of replicating anything a human brain could do, including playing chess. After World War II, Turing worked at the National Physical Laboratory (NPL), where he designed the Automatic Computing Engine (ACE), among the first designs for a stored-program computer. In 1946, Turing wrote a report for the NPL entitled "Proposed Electronic Calculator" that described several projects that he planned to use the ACE for; one of these was a program to play chess. He gave a reading at the London Mathematical Society the following year in which he presented the idea that a machine programmed to play chess could learn on its own and acquire its own experience. Subsequently, in 1948, he wrote a new report for the NPL, entitled "Intelligent Machinery", which suggested a form of imitation chess. In the late summer of 1948 Turing and Champernowne, then his colleague at King's College, Cambridge, devised a system of theoretical rules to determine the next moves of a chess game. They designed a program that would enact an algorithm following these rules, though the program was too complex to be run on the ACE or any other computer of the time. The program was named Turochamp, a combination of their surnames. It is sometimes misreported as "Turbochamp". According to Champernowne, his wife played a simulated game against the program, nicknamed the "paper machine", and lost. Turing attempted to convert the program into executable code for the 1951 Ferranti Mark 1 computer in Manchester, but was unable to do so due to the complexity of the code. According to Jack Copeland, author of several books on Turing, Turing was not concerned that the program could not be run, as he was convinced that the speed and sophistication of computers would soon rise to make it possible. In the summer of 1952, Turing played a match against computer scientist Alick Glennie using the program, executing it manually step by step. The match, which was recorded, had the Turochamp program losing to Glennie in 29 moves, with each of the program's moves taking up to 30 minutes to evaluate. Although the match demonstrated that the program could viably play against a human in a full game, it was not run on an actual computer before Turing's death in 1954. Legacy Turochamp is a candidate for the first chess program, though the original program was never run on a computer. 
Several other chess programs were designed or proposed around the same time, such as the design in Claude Shannon's 1950 article Programming a Computer for Playing Chess, Konrad Zuse's chess routines developed from 1941 to 1945 for his proposed programming language Plankalkül, and Donald Michie and Shaun Wylie's chess program Machiavelli, which Turing unsuccessfully tried to run on the Ferranti Mark I at the same time as Turochamp. In November 1951 Dietrich Prinz, who worked at Ferranti and was inspired by Turing's work on Turochamp, developed the first runnable computer-based chess program for the Ferranti Mark I, which could solve "mate-in-two" problems. The original code and algorithm written by Turing and Champernowne were not preserved. In 1980, Champernowne described the way Turochamp worked, but he was not able to recall all of the details of the game's rules. A version of Turochamp was developed in 2012 as a symbolic recreation, based on descriptions of the game's algorithm. After the initial recreation was unable to reproduce Turing's simulated match against Glennie, several computer chess experts and contemporaries of Turing were consulted in interpreting Turing and Champernowne's descriptions of the program, including Ken Thompson, creator of the 1983 Belle chess machine and the Unix operating system. They were unable to find the explanation for the deviation until they consulted with Donald Michie, who suggested that Turing had not been concerned with meticulously working out exactly which move Turochamp would recommend. With this in mind they were able to show that, from the very first move of the recorded game, Turing had deviated from a strict execution of the program, discarding moves that appeared suboptimal without working out their point values. The resulting recreation was presented at the Alan Turing Centenary Conference on 22–25 June 2012, in a match with chess grandmaster and former world champion Garry Kasparov. Kasparov won the match in 16 moves, and complimented the program for its place in history and the "exceptional achievement" of developing a working computer chess program without ever being able to run it on a computer. See also List of chess software List of things named after Alan Turing Notes References Sources External links Video of chess match between Garry Kasparov and the Turochamp recreation Alan Turing vs Alick Glennie (1952) "Turing Test" at Chessgames.com Turochamp (Computer) vs Garry Kasparov (2012) at Chessgames.com Open-source Python implementation of Turochamp Turochamp in a web browser, based on this Nim version 1948 in computing 1948 in chess Alan Turing Chess in England Computer chess Department of Computer Science, University of Manchester Video games developed in the United Kingdom
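The selection scheme described in the Gameplay section above, scoring the positions reached after each of the machine's moves and every opponent reply and then picking the move with the best guaranteed value, amounts to a shallow minimax search over a hand-crafted evaluation. The C sketch below illustrates only that general idea: the Position and Move types, the generate_moves, apply_move and evaluate helpers and the fixed two-ply depth are hypothetical placeholders for illustration, not a reconstruction of Turing and Champernowne's actual rules, which also extended the search along "considerable" moves such as captures and recaptures.

#include <float.h>

/* Hypothetical board and move representations; a real program would
   define these fully and implement the helpers declared below. */
typedef struct { unsigned char squares[64]; int white_to_move; } Position;
typedef struct { int from, to; } Move;

/* Assumed helpers, not taken from the original program: */
int generate_moves(const Position *p, int for_white, Move out[], int max);
Position apply_move(const Position *p, const Move *m);
double evaluate(const Position *p);   /* higher = better for the machine */

/* Score each of the machine's moves by the worst case over the opponent's
   replies (two plies), and return the move with the highest such score. */
Move choose_move(const Position *p, int machine_is_white)
{
    Move moves[256], replies[256], best = {0, 0};
    int n = generate_moves(p, machine_is_white, moves, 256);
    double best_score = -DBL_MAX;

    for (int i = 0; i < n; i++) {
        Position after = apply_move(p, &moves[i]);

        int r = generate_moves(&after, !machine_is_white, replies, 256);
        double worst = DBL_MAX;
        for (int j = 0; j < r; j++) {
            Position reply = apply_move(&after, &replies[j]);
            double score = evaluate(&reply);
            if (score < worst)
                worst = score;
        }
        if (r == 0)                /* no legal reply: score the position as is */
            worst = evaluate(&after);

        if (worst > best_score) {
            best_score = worst;
            best = moves[i];
        }
    }
    return best;
}

Turochamp's own evaluation weighted material heavily (for example ten points for taking the queen against one for a pawn, as described above) together with mobility, piece safety and the threat of mate; the 2012 recreation discussed in this section had to infer those weightings from Champernowne's 1980 recollections.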
18649840
https://en.wikipedia.org/wiki/Microwindows
Microwindows
In computing, Nano-X is a windowing system which is full featured enough to be used on a PC, an embedded system or a PDA. It is an Open Source project aimed at bringing the features of modern graphical windowing environments to smaller devices and platforms. The project was renamed from Microwindows due to legal threats from Microsoft regarding the Windows trademark. Overview The Nano-X Window System is extremely portable, and completely written in C. It has been ported to the Intel 16, 32 and 64 bit CPUs, the Broadcom BCM2837 ARM Cortex-A53, as well as MIPS R4000 (NEC Vr41xx) StrongARM and PowerPC chips found on handheld and pocket PCs. The Nano-X Window System currently runs on Linux systems with kernel framebuffer support, or using an X11 driver that allows Microwindows applications to be run on top of the X Window desktop. This driver emulates all of Microwindows' truecolor and palette modes so that an application can be previewed using the target system's display characteristics directly on the desktop display, regardless of the desktop display characteristics. In addition, it has been ported to Windows, Emscripten, Android (based on the Allegro library), and MS-DOS. Microwindows screen drivers have been written based on the SDL1 and SDL2 libraries plus the Allegro and SVGAlib libraries. There are also a VESA and a VGA 16 color 4 planes driver. Architecture Layered Design Microwindows is essentially a layered design that allows different layers to be used or rewritten to suit the needs of the implementation. At the lowest level, screen, mouse/touchpad and keyboard drivers provide access to the actual display and other user-input hardware. At the mid level, a portable graphics engine is implemented, providing support for line draws, area fills, polygons, clipping and color models. At the upper level, three API's are implemented providing access to the graphics applications programmer. Currently, Microwindows supports the Xlib, Nano-X and Windows Win32/WinCE GDI APIs. These APIs provide close compatibility with the Win32 and X Window systems, however, with reduced functionality. These APIs allow programs to be ported from other systems easily. Device Drivers The device driver interfaces are defined in device.h. A given implementation of Microwindows will link at least one screen, mouse and keyboard driver into the system. The mid level routines in the device-independent graphics engine core then call the device driver directly to perform the hardware-specific operations. This setup allows varying hardware devices to be added to the Microwindows system without affecting the way the entire system works. Application programmer interfaces Microwindows currently supports three different application programming interfaces (APIs). This set of routines handles client–server activity, window manager activities like drawing title bars, close boxes, etc., as well as handling the programmer's requests for graphics output. These APIs run on top of the core graphics engine routines and device drivers. NX11 API The NX11 API is compliant with the X Window API. It is based on the Nano-X API and provides Xlib functions using the functions available in the Nano-X API. It can be compiled as a separate library or together with Nano-X library as a single library called libPX11. In all it provides 180 Xlib functions and stubs for additional functions not implemented. Based on the NX11 API the FLTK graphical user interface library can be used to provide a GUI for application programs. 
The Nanolinux distribution uses the NX11 API and FLTK to implement a Linux operating system using 19 MB of disk space. Nano-X API The Nano-X API is modeled after the mini-x server written initially by David Bell, which was a reimplementation of X on the MINIX operating system. It loosely follows the X Window System Xlib API, but the names all begin with GrXXX() rather than X...(). The basic model of any API on top of Microwindows is to initialize the screen, keyboard and mouse drivers, then hang in a select() loop waiting for an event. When an event occurs, if it is a system event like keyboard or mouse activity, this information is passed to the user program converted to an expose event, paint message, etc. If it is a user program requesting a graphics operation, the parameters are decoded and passed to the appropriate GdXXX engine routine. Note that the concept of a window versus raw graphics operations is handled at this API level. That is, the API defines the concepts of what a window is, what the coordinate systems are, etc., and then the coordinates are all converted to "screen coordinates" and passed to the core GdXXX engine routines to do the real work. This level also defines graphics or display contexts and passes that information, including clipping information, to the core engine routines. Microwindows API The API which tries to be compliant with the Microsoft Win32 and WinCE GDI standard is the Microwindows API. Currently, there is support for most of the graphics drawing and clipping routines, as well as automatic window title bar drawing and dragging windows for movement. The Microwindows API is message-based, and allows programs to be written without regard to the eventual window management policies implemented by the system. The Microwindows API is not currently client/server. The fundamental communications mechanism in the Microwindows API is the message. A message consists of a well-known message number and two parameters, known as wParam and lParam. Messages are stored in an application's message queue and retrieved via the GetMessage function. The application blocks while waiting for a message. There are messages that correspond to hardware events, like WM_CHAR for keyboard input or WM_LBUTTONDOWN for mouse button down. In addition, messages signaling window creation and destruction (WM_CREATE and WM_DESTROY) are sent. In most cases, a message is associated with a window, identified as an HWND. After retrieving the message, the application sends the message to the associated window's handling procedure using DispatchMessage. When a window class is created, its associated message handling procedure is specified, so the system knows where to send the message. The message-passing architecture allows the core API to manage many system functions by sending messages on all sorts of events, like window creation, painting needed, moving, etc. By default, the associated window handling function gets a "first pass" at the message, and then calls the DefWindowProc function, which handles default actions for all the messages. In this way, all windows can behave the same way when dragged, etc., unless specifically overridden by the user. Major window management policies can be redefined by merely re-implementing DefWindowProc, rather than making changes throughout the system. The basic unit of screen organization in the Microwindows API is the window. 
Windows describe an area of the screen to draw onto, as well as an associated "window procedure" for handling messages destined for that window. Application programmers can create windows from pre-defined classes, like buttons, edit boxes, and the like, or define their own window classes. In both cases, the method of creating and communicating with the windows remains exactly the same. History Nano-X originated with NanoGUI, which was created by Alex Holden by taking David Bell's mini-X server and Alan Cox's modifications and adding client/server networking. Gregory Haerr then took an interest in the NanoGUI project and began making extensive enhancements and modifications to it. Around version 0.5, Haerr added support for multiple APIs and began distributing Microwindows. Microwindows 0.84 incorporated all previous NanoGUI changes, and since then the package has been the combined NanoGUI/Microwindows distribution. In January 2005, the system changed its name to the Nano-X Window System. Because Nano-X only loosely follows the X Window System Xlib API, an additional interface named NXlib was developed, which provides an Xlib-compatible API based on Nano-X. References External links Microwindows on GitHub for the latest version Microwindows.org website Introduction to Microwindows Programming Introduction to Microwindows Programming, Part 2 Introduction to Microwindows Programming, Part 3 Application programming interfaces X Window System C (programming language) libraries Programming tools
12073736
https://en.wikipedia.org/wiki/Jive%20%28software%29
Jive (software)
Jive (formerly known as Clearspace, then Jive SBS, then Jive Engage) is a commercial Java EE-based Enterprise 2.0 collaboration and knowledge management tool produced by Jive Software. It was first released as "Clearspace" in 2006, renamed SBS (for "Social Business Software") in March 2009, renamed "Jive Engage" in 2011, and renamed simply "Jive" in 2012. Jive integrates the functionality of online communities, microblogging, social networking, discussion forums, blogs, wikis, and IM under one unified user interface. Content placed into any of the systems (blog, wiki, documentation, etc.) can be found through a common search interface. Other features include RSS capability, email integration, a reputation and reward system for participation, personal user profiles, JAX-WS web service interoperability, and integration with the Spring Framework. The product is a pure-Java server-side web application and will run on any platform where Java (JDK 1.5 or higher) is installed. It does not require a dedicated server; users have reported successful deployments in both shared environments and multi-machine clusters. As of Jive 8, released on March 30, 2015, there is a Jive-n version for internal use (hosted by the customer or by Jive as a service) and a Jive-x version, an external version hosted as a service. Jive no longer supports wiki markup language. Server requirements for Jive 8-n The following are the server requirements for Jive 8-n. Operating systems: RHEL version 6 or 7 for x86_64, CentOS version 6 or 7 for x86_64, or SUSE Linux Enterprise Server (SLES) 11 and 12 for x86_64. Application servers: Jive ships with its own embedded Apache HTTPD and Tomcat servers as part of the install package; it is not possible to deploy the application onto other application servers. Databases: MySQL (5.1, 5.5, 5.6); Oracle (11gR2, 12c); PostgreSQL (9.0, 9.1, 9.2, 9.3, 9.4 – 9.2 or higher recommended); Microsoft SQL Server (2008 R2, 2012, 2014). Environment: Jive recommends a server with at least 4 GB of RAM and a dual-core 2 GHz processor with x86_64 architecture. The product integrates with an LDAP repository or Active Directory. For optimal deployment with a large community, Jive Software recommends using dedicated cache and document-conversion servers and hosting the application and database servers separately. Releases Jive 7, released in October 2013 Jive 8, released on March 30, 2015 Jive 9 (9.0.x), released in November 2016 and still supported See also Comparison of wiki software List of wiki software Collaborative software References Infoworld PCMagazine External links Jive homepage Blog software Content management systems Groupware Internet forum software Proprietary wiki software Web applications
32669131
https://en.wikipedia.org/wiki/Cybil%20%28programming%20language%29
Cybil (programming language)
Cybil (short for the Cyber Implementation Language of the Control Data Network Operating System) was a Pascal-like language developed at Control Data Corporation for the Cyber computer family. Cybil was used as the implementation language for the NOS/VE operating system on the CDC Cyber series and was also used to write the eOS operating system for the ETA10 supercomputer in the 1980s. References Control Data mainframe software Pascal programming language family Systems programming languages
8733477
https://en.wikipedia.org/wiki/20th%20Air%20Division
20th Air Division
The 20th Air Division is an inactive United States Air Force organization. Its last assignment was with Tactical Air Command at Tyndall Air Force Base, Florida, where it was inactivated on 1 March 1983. During most of the division's history it served with Air Defense Command as a regional command and control headquarters. Between 1955 and 1967 the division controlled air defense units in the central United States. It controlled slightly different areas of the midwestern US from 1955 to 1960 and again from 1966 to 1967. Its area of responsibility shifted to the east coast of the United States from 1969 to 1983. It was shifted to its final station on paper in 1983 and was immediately inactivated. History The 20th Air Division was assigned to Air Defense Command (ADC) for most of its existence. It served as a regional command and control headquarters, controlling fighter-interceptor and radar units over several areas of responsibility during the Cold War. For three years it also commanded a surface-to-air missile squadron. The division was initially activated as an intermediate command organization under Central Air Defense Force at Grandview Air Force Base (later Richards-Gebaur Air Force Base) in June 1955. The division was responsible for the interceptor and radar units within an area that covered parts of Nebraska, Oklahoma, Arkansas, Illinois, Iowa, and virtually all of Kansas and Missouri. On 1 October 1959, ADC activated the Sioux City Air Defense Sector and its Semi-Automatic Ground Environment (SAGE) DC-22 Direction Center and assigned it to the division. The 20th also operated a Manual Control Center (MCC-2) at Richards-Gebaur. The division was inactivated in 1960 when ADC reorganized its regional air defense units, and the 33d Air Division assumed command of most of its former units. The division was reactivated in 1966 under Tenth Air Force as a SAGE organization, replacing the Chicago Air Defense Sector when ADC discontinued its air defense sectors and replaced them with air divisions. The 20th provided air defense from the Truax Field, Wisconsin DC-7/CC-2 SAGE blockhouse for parts of Wisconsin, Minnesota, Iowa, Missouri, Arkansas, Tennessee, Kentucky, Indiana, and all of Illinois. The division also acted as the 20th NORAD Region after activation of the North American Air Defense Command (NORAD) Combat Operations Center at the Cheyenne Mountain Complex, Colorado. Operational control of the division was transferred from ADC to NORAD. In addition to the active-duty interceptor and radar units, the division supervised Air National Guard units that flew interception sorties using (among other aircraft) McDonnell F-101 Voodoos and Convair F-106 Delta Darts. At the same time the division controlled numerous radar squadrons. It was inactivated in 1967 as part of an ADC consolidation of intermediate-level command and control organizations, driven by budget reductions required to fund USAF operations in Southeast Asia. The 20th Air Division was activated for a third time in November 1969 under Aerospace Defense Command (ADCOM). The division provided air defense for virtually all of the southeastern United States, except for most of Louisiana, from the SAGE DC-4 blockhouse at Fort Lee Air Force Station, Virginia. The division also controlled a CIM-10 Bomarc surface-to-air missile squadron near Langley Air Force Base until the squadron's inactivation in October 1972. ADCOM was inactivated on 1 October 1979.
The atmospheric defense resources (interceptors and warning radars) of ADCOM were reassigned to Tactical Air Command, which formed Air Defense, Tactical Air Command as the headquarters to control them. After 1981, the division controlled units equipped with McDonnell Douglas F-15 Eagle aircraft. Its subordinate units continued to participate in intensive academic training, numerous multi-region simulated (non-flying) exercises, and flying exercises. The division moved to Tyndall Air Force Base, Florida in March 1983 where it was inactivated and its mission, personnel and equipment were transferred to the Southeast Air Defense Sector. Lineage Established as the 20 Air Division (Defense) on 8 June 1955 Activated on 8 October 1955 Inactivated on 1 January 1960 Activated on 20 January 1966 (not organized) Organized on 1 April 1966 Discontinued and inactivated, on 31 December 1967 Activated on 19 November 1969 Inactivated on 1 March 1983 Assignments Central Air Defense Force, 8 October 1955 – 1 January 1960 Air Defense Command, 20 January 1966 Tenth Air Force, 1 April 1966 – 31 December 1967 Aerospace Defense Command, 19 November 1969 Air Defense, Tactical Air Command, 1 October 1979 – 1 March 1983 Stations Grandview Air Force Base (later, Richards Gebaur Air Force Base0, Missouri, 8 October 1955 – 1 January 1960 Truax Field, Wisconsin, 1 April 1966 – 31 December 1967 Fort Lee Air Force Station, Virginia, 19 November 1969 Tyndall Air Force Base, Florida, 19 November 1969 – 1 March 1983 Components Sector Sioux City Air Defense Sector: 1 October 1959 – 1 January 1960 Groups 53d Fighter Group: 1 March 1956 – 1 January 1960 Sioux Gateway Airport, Iowa 327th Fighter Group: 1 April 19–25 April June 1966 Truax Field, Wisconsin 328th Fighter Group: 1 March 1956 – 1 January 1960 Richards-Gebaur Air Force Base, Missouri 678th Air Defense Group: 1 March 1970 – 1 March 1983 Tyndall Air Force Base, Florida 701st Air Defense Group: 1 March 1970 – 17 January 1974 Fort Fisher Air Force Station, North Carolina Squadrons Fighter-Interceptor 48th Fighter-Interceptor Squadron: 19 November 1969 – 1 March 1983 Langley Air Force Base, Virginia 85th Fighter-Interceptor Squadron: 1 March 1956 – 1 July 1959 Scott Air Force Base, Illinois 95th Fighter-Interceptor Squadron: 19 November 1969 – 31 January 1973 Dover Air Force Base, Delaware Missile 22d Air Defense Missile Squadron (BOMARC): 19 November 1969 – 31 October 1972 Langley Air Force Base, Virginia Radar 20th Air Defense Squadron (SAGE), 1 January 1975 – 1 October 1979 630th Radar Squadron, 1 August 1972 – 31 December 1977 Houston Intercontinental Airport, Texas 632d Radar Squadron, 19 November 1969 – 30 September 1978 Roanoke Rapids Air Force Station, North Carolina 634th Radar Squadron, 1 January 1973 – 1 July 1974 Lake Charles Air Force Station, Louisiana 635th Radar Squadron, 1 January 1973 – 1 June 1974 Dauphin Island Air Force Station, Alabama 644th Radar Squadron, 19 November 1969 – 1 April 1978 Homestead Air Force Base, Florida 645th Radar Squadron, 19 November 1969 – 1 April 1976 Patrick Air Force Base, Florida 649th Radar Squadron, 19 November 1969 – 30 June 1975 Bedford Air Force Station, Virginia 650th Aircraft Control and Warning Squadron, 1 March 1956 – 8 October 1957 Dallas Center Air Force Station, Iowa 657th Radar Squadron, 19 November 1969 – 30 September 1970 Bedford Air Force Station, Virginia 660th Radar Squadron, 19 November 1969 – 15 November 1980 MacDill Air Force Base, Florida 671st Radar Squadron, 19 November 1969 – 30 September 1979 NAS 
Key West, Florida 676th Radar Squadron, 1 April 1966 – 1 December 1967 Antigo Air Force Station, Wisconsin 678th Radar Squadron, 19 November 1969 – 1 March 1970 Tyndall Air Force Base, Florida 679th Radar Squadron, 19 November 1969 – 1 February 1974 NAS Jacksonville, Florida 680th Radar Squadron, 19 November 1969 – 30 May 1970 Palermo Air Force Station, New Jersey 691st Radar Squadron, 19 November 1969 – 30 September 1970 Cross City Air Force Station, Florida 693d Radar Squadron, 19 November 1969 – 30 September 1970 Dauphin Island Air Force Station, Florida 701st Radar Squadron, 1 April 1966 – 1 March 1970 Fort Fisher Air Force Station, North Carolina 702d Radar Squadron, 19 November 1969 – 9 June 1979 Savannah Air Force Station, Georgia 725th Aircraft Control and Warning Squadron, 1 March 1956 – 1 January 1960 Walnut Ridge Air Force Station, Arkansas 738th Aircraft Control and Warning Squadron, 1 March 1956 – 1 January 1960 Olathe Air Force Station, Kansas 755th Radar Squadron, 1 April 1966 – 1 December 1967 Williams Bay Air Force Station, Wisconsin 770th Radar Squadron, 19 November 1969 – 1 February 1974 Fort George G. Meade, Maryland 771st Radar Squadron, 19 November 1969 – 1 February 1974 Cape Charles Air Force Station, Virginia 782d Radar Squadron, 1 April-25 June 1966 Rockville Air Force Station, Indiana 787th Aircraft Control and Warning Squadron, 1 January 1959 – 1 January 1960 Chandler Air Force Station, Minnesota 788th Aircraft Control and Warning (later Radar) Squadron, 1 March 1956 – 15 October 1958; 1 April 1966 – 1 December 1967 Waverly Air Force Station, Iowa 789th Aircraft Control and Warning Squadron, 1 March 1956 – 1 January 1960 Omaha Air Force Station, Nebraska 790th Aircraft Control and Warning (later Radar) Squadron, 1 March 1956 – 1 April 1959; 1 April 1966 – 1 December 1967 Kirksville Air Force Station, Missouri 791st Aircraft Control and Warning (later Radar) Squadron, 1 March 1956 – 15 October 1958; 1 April 1966 – 1 December 1967 Hanna City Air Force Station, Illinois 792d Radar Squadron, 19 November 1969 – 8 December 1978 North Charleston Air Force Station, South Carolina 793d Aircraft Control and Warning Squadron, 1 March 1956 – 1 January 1960 Hutchinson Air Force Station, Kansas 796th Aircraft Control and Warning Squadron, 1 March 1956 – 1 June 1961 Bartlesville Air Force Station, Oklahoma 797th Aircraft Control and Warning Squadron, 1 March 1956 – 1 June 1961 Fordland Air Force Station, Missouri 798th Aircraft Control and Warning (later Radar) Squadron, 1 March 1956 – 1 January 1960 Belleville Air Force Station, Illinois 810th Radar Squadron, 19 November 1969 – 31 July 1978 Winston-Salem Air Force Station, North Carolina 861st Radar Squadron, 19 November 1969 – 30 June 1975 Aiken Air Force Station, South Carolina 4638th Air Defense Squadron (SAGE), 1 January 1972 – 1 January 1975 Aircraft and Missiles North American F-86 Sabre, 1956–1959 Northrop F-89 Scorpion, 1956–1957 McDonnell F-101 Voodoo, 1959–1960 Convair F-102 Delta Dagger, 1957–1960 Convair F-106 Delta Dart, 1969–1981 CIM-10 BOMARC, 1969–1972; F-15, 1981–1983 See also F-89 Scorpion units of the United States Air Force List of Sabre and Fury units in the US military List of F-106 Delta Dart units of the United States Air Force List of United States Air Force Aerospace Defense Command Interceptor Squadrons United States general surveillance radar stations References Notes Bibliography Further reading "ADCOM's Fighter Interceptor Squadrons". 
The Interceptor (January 1979) Aerospace Defense Command (Volume 21, Number 1) Military units and formations established in 1955 Military units and formations disestablished in 1983 1955 establishments in Missouri 1983 disestablishments in Florida
51606519
https://en.wikipedia.org/wiki/Gillian%20Lovegrove
Gillian Lovegrove
Gillian Lovegrove (born 1942) is a retired computer scientist and academic. She was Dean of the School of Informatics at Northumbria University, president of the Conference of Professors and Heads of Computing and was Higher Education consultant to the British Computer Society and manager of its Education and Training Forum. She is known for her interest in gender imbalance in computer education and employment, and her public discussion of possible solutions to a shortage of information technology graduates in the UK. Early life and education Gillian Lesley Lowther, now Gillian Lovegrove, was born in Yorkshire on 28 October 1942 and grew up in the Hull area. She went to Malet Lambert School and then to Newnham College, Cambridge to study mathematics. After her first degree in 1964, she did a Masters-equivalent Diploma at Cambridge University in Numerical Analysis and Automatic Computing. Career She was a lecturer in mathematics at Portsmouth Polytechnic from 1965–1968 and then went as a research fellow to Southampton University, where she started maths lecturing in 1969. Her career had to be part-time as she combined it with motherhood responsibilities for a few years in the 1970s. In 1974 she got her PhD with a dissertation on modular operating systems, after studying under David Barron, and in 1980 she began full-time lecturing in computer studies at Southampton. Her next research interest was object-oriented computing. She co-wrote two papers about girls and computer education: Where Are All the Girls? (1987) and Where Are the Girls Now? (1991) with Wendy Hall, a colleague at Southampton. Lovegrove also organised "Women into Computing" conferences in the late 1980s where one of the themes that emerged was "dismay at the low number of women taking computing courses or following computing careers". In 1992 she went to the University of Staffordshire's School of Computing where she was associate dean and head of information systems. She was also in the "IT EQUATE" team exploring ways of encouraging more girls at school to consider IT as an area of study and as a future career. In 1995 a reviewer said her chapter, Women in Computing, in Professional Awareness in Software Engineering "grapples with the very difficult policy issues in the areas of legislation, institutional culture, and positive action". She was concerned not only about a shortage of women in computing but also more generally about a shortage of information technology graduates in the UK and gave evidence on this subject to the Parliamentary Information Technology Committee in 2001. She suggested ways for universities to help create "a culture which does not exclude women" from computing. She continued making similar points about under-representation of women and an inadequate supply of IT graduates at conferences and elsewhere. The Times Higher Education Supplement said her field had become "the image of computing and what more computing departments can do for the UK economy". At a "Build Britain's Brainpower" conference in 2002 she proposed "joint teaching schemes between employers and universities", even though she felt teaching staff were already over-stretched after a period of rapid expansion in student numbers. She was head-hunted by Northumbria University in 1999 to be Head of the School of Informatics, which grew under her leadership. She has also been chair of the Council of Professors and Heads of Computing (CPHC) and chaired the CPHC Information Strategy Group. 
She was manager of the British Computer Society (BCS) Education and Training Forum, and a Higher Education consultant for the BCS. References Academics of Northumbria University British women computer scientists Living people 1942 births
46199897
https://en.wikipedia.org/wiki/Gemini%20Guidance%20Computer
Gemini Guidance Computer
The Gemini Guidance Computer (sometimes Gemini Spacecraft On-Board Computer (OBC)) was a digital, serial computer designed for Project Gemini, America's second human spaceflight project. The computer, which facilitated the control of mission maneuvers, was designed by the IBM Federal Systems Division. Functionality Project Gemini was the first spaceflight program with an on-board computer, as Project Mercury was controlled by computers on Earth. The Gemini Guidance Computer was responsible for the following functions: Ascent – serves as a backup guidance system; the switchover is manually controlled by the astronauts. Orbital flight – provides a navigation capability, allowing the astronauts to determine the time of retrofire and to select the landing site for a safe reentry in case of an emergency (on extended missions, ground data may become unavailable when the ground data network rotates out of the orbital plane). Rendezvous – serves as the primary reference by providing guidance information to the astronauts; the orbit parameters are determined by ground tracking and sent to the spacecraft, and the guidance computer processes this information along with the sensed spacecraft attitude, presenting it to the astronauts in spacecraft coordinates. Reentry – feeds commands directly to the reentry control system for automatic reentry, or provides guidance information to the astronauts for manual reentry. Specs The computer was architecturally similar to the Saturn Launch Vehicle Digital Computer, in particular in the instruction set; however, its circuit integration was less advanced. The GGC weighed 58.98 pounds (26.75 kg) and was powered by 28 V DC; during a short power outage it could be powered by the Auxiliary Computer Power Unit (ACPU). Its ferrite core memory held 4,096 39-bit words, each composed of three 13-bit syllables. It used two's complement integer arithmetic and a 7.143 kHz clock (140 microseconds per instruction); all instructions took a single cycle except for multiplication and division. See also Apollo Guidance Computer References External links Gemini Spacecraft On-Board Computer (OBC) Gemini Program Overview IBM and the Gemini Program Project Gemini Guidance computers IBM avionics computers Spacecraft navigation instruments
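To make the word format concrete, the following sketch packs three 13-bit syllables into a 39-bit word held in a 64-bit host integer and sign-extends a syllable using two's-complement rules. It is purely illustrative host-side arithmetic, not flight software, and the syllable ordering chosen here is an assumption.

```c
/* Illustrative only: pack three 13-bit syllables into a 39-bit word held in a
   64-bit host integer, and sign-extend a 13-bit two's-complement syllable.
   Syllable ordering (syllable 0 = least significant) is an assumption. */
#include <stdint.h>
#include <stdio.h>

static uint64_t pack_word(uint16_t s0, uint16_t s1, uint16_t s2)
{
    /* Each syllable is 13 bits wide; the whole word is 39 bits. */
    return ((uint64_t)(s2 & 0x1FFF) << 26) |
           ((uint64_t)(s1 & 0x1FFF) << 13) |
            (uint64_t)(s0 & 0x1FFF);
}

static int32_t syllable_value(uint16_t syl)
{
    /* Interpret a 13-bit syllable as a two's-complement integer. */
    int32_t v = syl & 0x1FFF;
    if (v & 0x1000)          /* sign bit of a 13-bit field */
        v -= 0x2000;         /* sign-extend */
    return v;
}

int main(void)
{
    uint64_t w = pack_word(0x1FFF, 0x0001, 0x0000);   /* syllables -1, +1, 0 */
    printf("word = %010llx\n", (unsigned long long)w);
    printf("syllable 0 = %d\n", syllable_value((uint16_t)(w & 0x1FFF)));
    return 0;
}
```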
11728132
https://en.wikipedia.org/wiki/Muvee%20Technologies
Muvee Technologies
muvee Technologies is a Singapore-based company that developed the first automatic video editing software for Windows. In 2001, muvee launched autoProducer for PCs, and in 2005 it became the first to offer mobile video editing software on the Symbian 3 platform for the first video camera phone, the Nokia 7610. In 2006 it embedded its automatic slideshow creation engine into the Nikon Coolpix S5, an S-series point-and-shoot digital camera. In 2014, muvee launched Action Studio, the first mobile video editing app for action cam users, and followed up with ReAction, an app which creates slow-motion video sequences for both iOS and Android. muvee's technologies have shipped on over 750 million devices. Strategic partnerships with leading brands in PCs, mobile and imaging devices include Samsung, LG, HTC, Sony, Alcatel, Nikon, Nokia, HP, Dell and Olympus. muvee's automatic video editing solutions are delivered to handset OEMs, social networks and partners as an SDK or as complete applications, and various apps are available in the Android and iOS app stores globally. muvee also developed the Action Cam App for Sony's Action Cam series, available on both the iOS App Store and Google Play. Products muvee Reveal muvee Reveal Business Action Studio (iOS and Android) ReAction Slomo Video Creator (iOS and Android) muvee 360 Video Stitcher for Samsung Gear 360 (Mac) Turbo Video Stabilizer Turbo Video Cutter Technology The Artistic Intelligence engine built into muvee automatically creates movies that are called "muvees". Using signal processing techniques, input video and photos are analyzed automatically for scene boundaries, human faces and other proprietary metrics. Chosen music tracks are analyzed for their beats and for the "Emotional Index" of the song. Users choose one of several editing Styles, and a muvee is automatically generated with effects and transitions synchronized to the beat and emotional contours of the music. Editing Style templates are authored by professional film producers, and a library of Styles is available. Each Style contains a mix of effects and transitions that are applied to video and photos in synchronization with the music. muvee announced CODEN in 2010 to enable fast trimming of HD video on under-powered phones. Editing video on low- to mid-range feature phones had previously been considered impractical, and even higher-end smartphones, despite having more memory and processing power, capture video at resolutions up to 4K, so the number of pixels to process grows faster than available CPU power, while consumers want to do more with the video captured on their phones. Drawing on ideas from minimally invasive heart surgery, muvee engineers created methods to manipulate compressed video files directly, without having to decompress them. CODEN (Compressed Domain Editing Ngine) provides basic editing features (trim, join, replace audio), has low heap-memory and CPU requirements, and is MTK-friendly. This patented technology is available to device makers as an application development kit (ANSI C), along with the corresponding video renderers, codecs and GUI abstraction layers, for application development in device makers' proprietary environments and chipsets. In 2014 muvee released its Android mAMS (muvee Advanced Multimedia SDK), targeted at software developers and mobile handset makers who want to quickly create multimedia applications for Android.
It contains modules to support all the basic video, image and audio manipulation operations typically needed in any multimedia application, including trim, split, joining of video clips, balancing audio and music, sound effects, fast transcoding, adding overlays with transparencies, generating thumbnails etc. Company history muvee Technologies was founded by Terence Swee, a Singaporean electronic engineer and Dr. Pete Kellock, a Scotsman with a doctorate in electronic music. The founders began working together in 1999 at Kent Ridge Digital Labs (KRDL), a technology business incubator sponsored by the Government of Singapore. See also Video editing software Comparison of video editing software List of video editing software References External links muvee Technologies muvee Reveal X Internet in Singapore
1008271
https://en.wikipedia.org/wiki/GameSpy
GameSpy
GameSpy was an American provider of online multiplayer and matchmaking middleware for video games founded in 1996 by Mark Surfas. After the release of a multiplayer server browser for the game, QSpy, Surfas licensed the software under the GameSpy brand to other video game publishers through a newly established company, GameSpy Industries, which also incorporated his Planet Network of video game news and information websites, and GameSpy.com. GameSpy merged with IGN in 2004; by 2014, its services had been used by over 800 video game publishers and developers since its launch. In August 2012, the GameSpy Industries division (which remained responsible for the GameSpy service) was acquired by mobile video game developer Glu Mobile. IGN (then owned by News Corporation) retained ownership of the GameSpy.com website. In February 2013, IGN's new owner, Ziff Davis, shut down IGN's "secondary" sites, including GameSpy's network. This was followed by the announcement in April 2014 that GameSpy's service platform would be shut down on May 31, 2014. History The 1996 release of id Software's video game Quake, one of the first 3D multiplayer action games to allow play over the Internet, furthered the concept of players creating and releasing "mods" or modifications of games. Mark Surfas saw the need for hosting and distribution of these mods and created PlanetQuake, a Quake-related hosting and news site. The massive success of mods catapulted PlanetQuake to huge traffic and a central position in the burgeoning game website scene. Quake also marked the beginning of the Internet multiplayer real-time action game scene. However, finding a Quake server on the Internet proved difficult, as players could only share IP addresses of known servers between themselves or post them on websites. To solve this problem, a team of three programmers (consisting of Joe "QSpy" Powell, Tim Cook, and Jack "morbid" Matthews) formed Spy Software and created QSpy (or QuakeSpy). This allowed the listing and searching of Quake servers available across the Internet. Surfas licensed QSpy and became the official distributor and marketer while retaining the original programming team. QSpy became QuakeSpy and went on to be bundled with its QuakeWorld update - an unprecedented move by a top tier developer and huge validation for QuakeSpy. With the release of the Quake Engine-based game Hexen II, QuakeSpy added this game to its capabilities and was renamed GameSpy3D. In 1997 Mark Surfas licensed GameSpy 3D from Spy Software, and created GameSpy Industries. In 1999, GameSpy received angel investment funding from entrepreneur David Berkus. The company released MP3Spy.com (later renamed RadioSpy.com), a software browser allowing people to browse and connect to online radio feeds, such as those using Nullsoft's ShoutCast. GameSpy received $3 million in additional funding from the Yucaipa Companies, an investment group headed by Hollywood agent Michael Ovitz and Southern California supermarket billionaire Ronald Burkle. The expanding of the company's websites included the games portal, GameSpy.com, created in October 1999; the Planet Network (also known as the GameSpy Network), a collection of "Planet" websites devoted to popular video games (such as Planet Quake, Planet Half-Life and Planet Unreal) as well as the genre-related websites, 3DActionPlanet, RPGPlanet, SportPlanet and StrategyPlanet; ForumPlanet, the network's extensive message board system; and FilePlanet, which was one of the largest video game file download sites. 
It also included platform-specific sites (e.g., Planet PS2, Planet Xbox, Planet Nintendo and Planet Dreamcast), but these were consolidated into GameSpy.com; only Classic Gaming remains separate. ForumPlanet and FilePlanet were services offered by GameSpy, and were not part of the Planet Network. In 2000, GameSpy received additional investment funding from the Ziff Davis publishing division ZDNet.com and from Guillemot Corporation. GameSpy shut down its RadioSpy division, backing away from the online music market which was dominated by peer-to-peer applications such as Napster and Gnutella. In 2001, GameSpy's corporate technology business grew to include software development kits and middleware for video game consoles, such as Sony's PlayStation 2, Sega's Dreamcast and Microsoft's Xbox. In March 2007, IGN and GameSpy Industries merged, and was briefly known as IGN/GameSpy before formalizing their corporate name as IGN Entertainment. Also in 2000, GameSpy turned GameSpy3D into GameSpy Arcade and purchased RogerWilco, MPlayer.com and various assets from HearMe; the MPlayer service was shut down and the RogerWilco technology is improved and incorporated into GameSpy Arcade. GameSpy Arcade was the company's flagship matchmaking software, allowing users to find servers for different online video games (whether they be free or purchased) and connect the user to game servers of that game. GameSpy also published the Roger Wilco voice chat software, primarily meant for communication and co-ordination in team-oriented games, where users join a server to chat with other users on the server using voice communication. This software rivaled the other major voice chat software Ventrilo and Teamspeak. The company's "Powered by GameSpy" technology enabled online functionality in over 300 PC and console games. In 2005, GameSpy added the PlayStation Portable, and Nintendo DS to its stable supported platforms. In March 2007, GameSpy added the Wii as another supported platform. Shutdown GameSpy Industries (the entity responsible for GameSpy multiplayer services) was bought from IGN Entertainment by Glu Mobile in August 2012, and proceeded in December to raise integration costs and shut down servers for many older games, including Star Wars: Battlefront, Sniper Elite, Microsoft Flight Simulator X, Saints Row 2, and Neverwinter Nights, with no warning to developers or players, much to the outrage of communities of those games. GameSpy Technologies remained operational as a separate entity since. In February 2013, following the acquisition of IGN Entertainment by Ziff Davis, IGN's "secondary" sites were shut down, ending GameSpy's editorial operations. In April 2014, Glu announced that it would shut down the GameSpy servers on May 31, 2014, so its developers could focus on work for Glu's own services. Games that still used GameSpy are no longer able to offer online functionality or multiplayer services through GameSpy. While some publishers announced plans to migrate GameSpy-equipped games to other platforms (such as Steam or in-house servers), some publishers, such as Nintendo (who used the GameSpy servers as the basis of its Nintendo Wi-Fi Connection platform for DS and Wii games) did not, particularly due to the age of the affected games. Electronic Arts, in particular, announced 24 PC games, including titles such as Battlefield 2, the Crysis series, Saints Row 2 and the Star Wars: Battlefront series, would be affected by the end of GameSpy service. 
Fan-created mods restored online functionality with alternative servers. One such mod for the PC version of Halo was officially incorporated into a patch for the game released by Bungie in May 2014, and Disney helped developers create a similar mod for Battlefront II (2005) in 2017. By contrast, in 2017, Electronic Arts demanded the takedown of modified versions of Battlefield 2 and Battlefield 2142 on alternate servers, distributed by a group known as "Revive Network", as infringement of their copyrights. The GameSpy Debriefings The GameSpy Debriefings was a party-style discussion between editors of GameSpy and IGN Entertainment on (purportedly) that week's gaming news. The GameSpy Debriefings was the 25th most popular podcast under the category “Games and Hobbies” on iTunes (as of May 1, 2011). It was however infamous for the crew's frequent propensity to de-rail the conversation from video games into explicit content or in-depth discussions about nerd culture. The main crew at the show's conclusion of The GameSpy Debriefings consisted of: Anthony Gallegos, then of IGN Entertainment, previously of 1UP.com, Electronic Gaming Monthly, and GameSpy Ryan Scott, then of GameSpy, previously the executive editor for the 1UP.com Network's reviews department, and the reviews editor for both Computer Gaming World and Games for Windows: The Official Magazine Scott Bromley, formerly of IGN Entertainment Brian Altano, Humor Editor and graphic designer for IGN.com/GameSpy Frequent guests included: Arthur Gies, formerly of IGN Entertainment Brian Miggels, formerly of IGN Entertainment and GameSpy Will Tuttle, former Editor-In-Chief of GameSpy Jack DeVries, former Editor of GameSpy On July 30, 2011, The GameSpy Debriefings ended with an episode consisting of only the main crew. Following its conclusion, they launched a fundraising drive on Kickstarter which resulted in the release of their own popular podcast, The Comedy Button. The Comedy Button is similar in content to the later GameSpy Debriefings, with a renewed focus on humorous discussions and listener e-mails rather than the in-depth discussion of recent video games like the early Debriefings. As of March 14, 2021, The Comedy Button has produced 480 episodes. References External links GameSpy GameSpy Arena Download websites IGN Internet properties established in 1996 Internet properties disestablished in 2013 Video game news websites Webby Award winners Defunct websites
22706183
https://en.wikipedia.org/wiki/Enterprise%20bookmarking
Enterprise bookmarking
Enterprise bookmarking is a method for Web 2.0 users to tag, organize, store, and search bookmarks of both web pages on the Internet and data resources stored in a distributed database or fileserver. This is done collectively and collaboratively in a process by which users add tags (metadata) and knowledge tags. In early versions of the software, these tags are applied as non-hierarchical keywords, or terms assigned by a user to a web page, and are collected in tag clouds. Examples of this software are Connectbeam and Dogear. Newer versions of the software, such as Jumper 2.0 and Knowledge Plaza, expand tag metadata in the form of knowledge tags that provide additional information about the data; these are applied to structured and semi-structured data and are collected in tag profiles. History Enterprise bookmarking is derived from social bookmarking, which got its modern start with the launch of the website del.icio.us in 2003. The first major announcement of an enterprise bookmarking platform was the IBM Dogear project, developed in the summer of 2006. Version 1.0 of the Dogear software was announced at Lotusphere 2007 and shipped later that year, on June 27, as part of IBM Lotus Connections. The second significant commercial release was Cogenz in September 2007. Since these early releases, enterprise bookmarking platforms have diverged considerably. The most significant new release was the Jumper 2.0 platform, with expanded and customizable knowledge tagging fields. Differences Versus social bookmarking In a social bookmarking system, individuals create personal collections of bookmarks and share them with others. These centrally stored collections of Internet resources can be accessed by other users to find useful resources. Often these lists are publicly accessible, so that other people with similar interests can view the links by category or by the tags themselves. Most social bookmarking sites allow users to search for bookmarks associated with given "tags" and rank the resources by the number of users who have bookmarked them. Enterprise bookmarking, by contrast, is a method of tagging and linking any information using an expanded set of tags to capture knowledge about data. It collects and indexes these tags in a web-infrastructure knowledge base server residing behind the firewall. Users can share knowledge tags with specified people or groups, shared only inside specific networks, typically within an organization. Enterprise bookmarking is a knowledge management discipline that embraces Enterprise 2.0 methodologies to capture specific knowledge and information that organizations consider proprietary and do not share on the public Internet. Tag management Enterprise bookmarking tools also differ from social bookmarking tools in that they often have to work alongside an existing taxonomy. Some of these tools have evolved to provide tag management, which combines uphill abilities (e.g., faceted classification and predefined tags) with downhill gardening abilities (e.g., tag renaming, moving and merging) to better manage the bottom-up folksonomy generated from user tagging.
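The basic mechanics described above — looking up bookmarks by tag and ranking resources by how many users have bookmarked them — can be illustrated with a very small in-memory index. The sketch below is generic and hypothetical; it is not code from any of the products mentioned.

```c
/* Generic sketch of a tag index: look up bookmarks by tag and rank them by
   how many users bookmarked them. Not taken from any product mentioned above. */
#include <stdio.h>
#include <string.h>

#define MAX_BOOKMARKS 8

struct bookmark {
    const char *url;
    const char *tags[4];   /* non-hierarchical keywords assigned by users */
    int         users;     /* number of users who bookmarked this resource */
};

static int has_tag(const struct bookmark *b, const char *tag)
{
    for (int i = 0; i < 4 && b->tags[i]; i++)
        if (strcmp(b->tags[i], tag) == 0)
            return 1;
    return 0;
}

/* Print bookmarks carrying `tag`, most-bookmarked first. */
static void search_by_tag(const struct bookmark *b, int n, const char *tag)
{
    int used[MAX_BOOKMARKS] = {0};
    for (;;) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!used[i] && has_tag(&b[i], tag) &&
                (best < 0 || b[i].users > b[best].users))
                best = i;
        if (best < 0)
            break;
        used[best] = 1;
        printf("%s (%d users)\n", b[best].url, b[best].users);
    }
}

int main(void)
{
    struct bookmark bm[] = {
        { "http://example.com/spec",  { "design", "api"  }, 12 },
        { "http://example.com/wiki",  { "design", "team" },  7 },
        { "http://example.com/notes", { "api"            },  3 },
    };
    search_by_tag(bm, 3, "design");
    return 0;
}
```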
See also Enterprise search Enterprise 2.0 Social bookmarking Knowledge management Knowledge tagging Web 2.0 Collaborative intelligence Comparison of enterprise bookmarking platforms Bookmark manager Collaborative tagging List of social bookmarking websites List of social software Semantic Web Social networking Social software Notes and references World Wide Web Collaboration Social networking services Social information processing
34369266
https://en.wikipedia.org/wiki/Data2map
Data2map
data2map is a presentation mapping service provider based in Saalfelden am Steinernen Meer, Salzburg, Austria. Company history The privately owned company is owner managed and was founded in 1999 in Frankfurt, Germany by Manfred Guntz as meridian consult. In 2003 the company was renamed to data2map. In 2007 its head office moved to Salzburg, Austria. data2map was registered by Deutsches Patent- und Markenamt, Munich, and became a Registered Trademark on October 20, 2005 (Reg. No. 304 59 255, Akz.: 304 59 255.2/42). Product history In the early 1990s, when desktop mapping and presentation graphics became accessible to the average office user, data2map hired software engineers and GIS-specialists to develop several vector-map series for easy customization by the end user. The prime objective was to enable office users and professional graphics artists to visualize geo-referenced information on pre-designed country- and world-maps within their favorite standard off-the-shelf software. Modern Company Today data2map offers specially optimized digital maps for customization by a wide range of clients in industries such as education, travel, television and gas and water. These maps allow the unrestricted use of all relevant functions of standard software for the creation and design of individual mapping presentations. Product Range Maps for PowerPoint: vector maps in the file format ppt or pptx, ready made Microsoft PowerPoint slides enabling customization of colours, text, symbols etc.; Business Series: vector graphic maps in the file format .ai or fully editable pdf, recommended software Adobe Illustrator, Inkscape; Professional Series: vector maps with cartographically very detailed borderlines, including topographic, infrastructural and population content. File formats .ai or fully editable pdf, recommended Software Adobe Illustrator, Inkscape. Topographic- and raster-maps, including satellite images, seamlessly fitting the respecting vector maps; Digital flags of all countries and major international organizations, file format ppt, pptx, .ai or fully editable pdf, optimized for PowerPoint and Adobe Illustrator. This includes an add-on to Adobe Illustrator's Symbols Palette making the fully editable vector flags permanently available on Illustrator's desktop panel. Standard map projections include Miller-, Robinson- and Mercator projection also Gall–Peters- or Hobo-Dyer projection. The digital maps and flags are available for immediate download through the Online-Map-Shop, e.g. world maps, continent- or country maps, post code- as well as topographic maps. All raster- and vector maps are fully editable. They can be customized and redesigned to suit individual requirements using standard software like MS PowerPoint or Adobe Illustrator and Photoshop. References External links data2map - official site data2map - about us data2map - thematic maps data2map - YouTube Video Cartography
2867456
https://en.wikipedia.org/wiki/PlayStation%20Broadband%20Navigator
PlayStation Broadband Navigator
PlayStation Broadband Navigator (also referred to as BB Navigator and PSBBN) is software for Japanese PlayStation 2 consoles that formats a hard disk drive for use with those consoles and provides an interface for manipulating data on that hard disk drive. It only works with official PlayStation 2 HDD units. The PlayStation Broadband Navigator installation disc is reported to have a more strict region lock on it than normal PlayStation 2 software, as the software will only boot on NTSC-J systems with a model number ending in 0, meaning they are sold in Japan, making the software unusable on Korean and Asian NTSC-J PlayStation 2 consoles. Online services pertaining to the software closed on March 31, 2016. Versions PSBBN version 0.10 Prerelease was released and bundled with Japanese PlayStation 2 BB Units (Network Adaptor and HDD bundle packs) in early 2002, replacing HDD Utility Disc 1.00. It lacked the ability to store and manage game saves on the HDD that HDD Utility Disc had. PSBBN version 0.20 was released in late 2002. It added functions to the interface of the software, including the ability to update itself to new versions over a broadband internet connection and management for game saves. PSBBN version 0.30 was released in mid-2003. It added access to Sony's feega service (which is used to bill the monthly fee for some online games) and an e-mail program. Version 0.31 was released in late 2003, fixing an exploit. PSBBN version 0.32 was released in early 2004, and is the current version as of early 2014. The only change appears to be the removal of the Audio Player option inside the Music Channel, which allows to transfer music between the HDD and a MiniDisc player in earlier PSBBN versions. Features PlayStation Broadband Navigator offers many features that are not available with the original HDD Utility Disc software. Some Japanese releases take advantage of the features, and may even require a specific version (or higher) of the software. The features of the software include: Game Channel Access to online sites, similar to web pages, for various ISPs and software publishers (only in version 0.20 and higher) Downloadable game demos or full games (ex. Pop'n Taisen Puzzle-dama Online demo, Star Soldier BB full game, Milon's Secret Castle full game) Downloadable picture and movie files Information pages on past, current, and future releases and services A launching point for bootable games installed to the HDD NetFront 3.0, a Linux-based web browser Music Channel Provides a tool to convert an audio CD to audio files on the HDD Provides an organization system for audio files stored on the HDD and a means to play them Provides a means to transfer audio files between a MiniDisc player and the HDD over a USB connection (only in versions 0.20 through 0.31) Photo Channel Provides an organization system for picture files stored on the HDD and a means to view them Provides a means to transfer picture files between most of USB storage devices and the HDD Movie Channel Provides an organization system for video files stored on the HDD and a means to view them feega account management (only in versions 0.30 and up; required for Net de Bomberman and Minna no Golf Online) Non-Japanese Release Sony Computer Entertainment America released the HDD on March 23, 2004 with HDD Utility Disc 1.01 and bundled with Final Fantasy XI. Consumers that knew about PlayStation Broadband Navigator were confused as to why it wasn't included with the HDD Utility Disc. 
SCEA's response was always that PlayStation Broadband Navigator would be released in North America "at a later date." That date now appears to be never, given that SCEA switched to manufacturing only the slim, HDD-incompatible PlayStation 2 models and stopped manufacturing HDD units for its region. Sony Computer Entertainment Europe and Sony Computer Entertainment Australia never released the HDD outside of the Linux Kit before switching to manufacturing only the slim PlayStation 2 model, so it appears that neither the HDD Utility Disc nor PlayStation Broadband Navigator will be released in those regions. PlayStation Broadband Navigator can, however, be used on any PS2 by using a modified disc. Compatible Software A few games require PlayStation Broadband Navigator either for certain features to work, or for the game to work at all. Known games are: Energy Airforce (Taito) – lets players replace in-game music with music stored on the HDD (PSBBN 0.20 or higher). Minna no Golf Online (Sony Computer Entertainment) – online servers no longer active; requires a feega account (PSBBN 0.30 or higher). Net de Bomberman (Hudson) – online servers no longer active, though the game is still available for download using a modified version of PSBBN; requires a feega account (PSBBN 0.30 or higher). Nobunaga's Ambition Online (Koei) – a downloadable game that is still supported; it can be downloaded via the Gamesity Channel, and a trial version is available (PSBBN 0.32 required). Front Mission Online – requires PSBBN to run and to set up its network services and the PlayOnline Viewer (PSBBN 0.32 required). Kingdom Hearts Final Mix (Square) – allows specific information to be stored on the HDD for shorter load times. See also XrossMediaBar (XMB) References External links Unofficial English Manual for PSBBN 0.20-0.32 Official SCEA PlayStation 2 HDD Message Board (Closed as of June 14, 2005) Broadband Navigator
6082538
https://en.wikipedia.org/wiki/Notre%20Dame%20Fighting%20Irish%20football%20rivalries
Notre Dame Fighting Irish football rivalries
Notre Dame Fighting Irish football rivalries refers to rivalries of the University of Notre Dame in the sport of college football. Because the Notre Dame Fighting Irish are independent of a football conference, they play a national schedule, which annually includes historic rivals University of Southern California and Navy, more recent rival Stanford, and five games with ACC teams. Current annual rivalries USC Notre Dame's main rival is the University of Southern California. The Notre Dame–USC football rivalry has been played annually since 1926, except from 1943 to 1945 and 2020, and is regarded as the greatest intersectional series in college football. The winner of the annual rivalry game is awarded the coveted Jeweled Shillelagh, a war club adorned with emerald-emblazoned clovers signifying Fighting Irish victories and Ruby-emblazoned Trojan warrior heads for Trojan wins. When the original shillelagh ran out of space for the Trojan heads and shamrocks after the 1989 game, it was retired and is permanently displayed at Notre Dame. A new shillelagh was introduced for the 1997 season. Through the 2017 season, Notre Dame leads the series 47–37–5. The origin of the series is quite often recounted as a "conversation between wives" of Notre Dame head coach Knute Rockne and USC athletic director Gywnn Wilson. In fact, many sports writers often cite this popular story as the main reason the two schools decided to play one another. As the story goes, the rivalry began with USC looking for a national rival. USC dispatched Wilson and his wife to Lincoln, Nebraska, where Notre Dame was playing Nebraska on Thanksgiving Day. On that day (Nebraska 17, Notre Dame 0) Knute Rockne resisted the idea of a home-and-home series with USC because of the travel involved, but Mrs. Wilson was able to persuade Mrs. Rockne that a trip every two years to sunny Southern California was better than one to snowy, hostile Nebraska. Mrs. Rockne spoke to her husband and on December 4, 1926, USC became an annual fixture on Notre Dame's schedule. However, several college football historians, including Murray Sperber, have uncovered evidence that somewhat contradicts this story. Of the most contradictory parts is the idea that Rockne was resistant to playing out west. Sperber documents that USC offered to play Notre Dame back in 1925 at the Rose Bowl. Notre Dame ultimately played Stanford that year because they were the Pacific Coast Conference champs. But due to the large alumni support for an annual season ending game in Los Angeles and the still existing interest for a home-and-home series, Notre Dame and USC started playing the series the following year in 1926. The series creation was also likely aided by USC coach Howard Jones, whom Rockne recommended USC hire due to their long friendship. Since 1961, the game has alternated between Notre Dame Stadium in South Bend in mid-October and the Los Angeles Memorial Coliseum, which serves as USC's home field, in late November. Originally the game was played in both locations in late November, but because of poor weather during that time of the year at South Bend, USC insisted on having the game moved to October in 1961. The 2020 game, scheduled to be played in Los Angeles on Saturday November 28, was canceled because of the COVID-19 pandemic, along with all other non-conference games involving Pac-12 schools. Notre Dame's rivalry games with Navy and Stanford were also canceled. 
Navy The Navy–Notre Dame series was played annually between 1927 and 2019, which was the longest uninterrupted intersectional series in college football. The 2020 game was canceled due to the COVID-19 pandemic, though the series resumed in 2021. Before Navy won a 46–44 triple-overtime thriller in 2007, Notre Dame had a 43-game winning streak that was the longest series win streak between two annual opponents in the history of Division I FBS football. Navy's previous win came in 1963, 35–14 with future Heisman Trophy winner and NFL QB Roger Staubach at the helm. Navy had come close to winning on numerous occasions before 2007. They subsequently won again in 2009, 2010, and 2016. Despite the one-sided result the last few decades, most Notre Dame and Navy fans consider the series a sacred tradition for historical reasons. Both schools have strong football traditions going back to the beginnings of the sport. Notre Dame, like many colleges, faced severe financial difficulties during World War II. The US Navy made Notre Dame a training center and paid enough for usage of the facilities to keep the University afloat. Notre Dame has since extended an open invitation for Navy to play the Fighting Irish in football and considers the game annual repayment on a debt of honor. The series is marked by mutual respect, as evidenced by each team standing at attention during the playing of the other's alma mater after the game, a tradition that started in 2005. Navy's athletic director, on renewing the series through 2016, remarked "...it is of great interest to our collective national audience of Fighting Irish fans, Naval Academy alumni, and the Navy family at large." The series is scheduled to continue indefinitely; renewals are a mere formality. Shortly before the start of the 2014 season, ESPN polled the head coaches in the so-called "Power Five" football conferences, plus Notre Dame's Brian Kelly, as to whether they would favor a schedule consisting only of "Power 5" opponents. Kelly was adamantly opposed to such a requirement if it meant taking Navy off the schedule, specifically calling a potential loss of the Navy game "a deal-breaker." The series is a "home and home" series with the schools alternating the home team. Due to the relatively small size of the football stadium in Annapolis, the two teams have never met there. Instead, Navy usually hosts the game at larger facilities such as Baltimore's old Memorial Stadium or current M&T Bank Stadium, FedExField in Landover, Maryland, Veterans Stadium and later Lincoln Financial Field in Philadelphia, or at Giants Stadium in East Rutherford, New Jersey. During the 1960s, the Midshipmen hosted the game at John F. Kennedy Memorial Stadium in Philadelphia. In 1996 the game was played at Croke Park in Dublin, Ireland. The game returned to Dublin in 2012, where the Aviva Stadium hosted the event won by Notre Dame 50–10. The game was also occasionally played at old Cleveland Stadium. In years when Navy hosts (even-numbered), it is one of few non-Southeastern Conference games aired on CBS. In years when Notre Dame hosts (odd-numbered), it is carried on NBC as are other Notre Dame home games. Stanford The Fighting Irish have a rivalry with the Stanford Cardinal for the Legends Trophy, a combination of Fighting Irish crystal with California redwood. The two teams first met in the 1925 Rose Bowl, then played each other in 1942 and again in 1963–64. 
The modern series began in 1988 when Notre Dame sought out a school to play out west over Thanksgiving weekend during the years that USC plays in South Bend. The series has been played annually except in 1995–96. The rivalry has become more competitive in recent years, during the tenures of Stanford coaches Jim Harbaugh and David Shaw. Notre Dame and Stanford are regularly ranked in the U.S. News & World Report top 20 best colleges in America, and both share a mission to develop student athletes that can compete in the classroom and on the football field. As a result, both schools often compete for similar types of athletes in recruiting. Notre Dame leads the series 19–11. When the game is played in Palo Alto, it is usually the last game on Stanford's schedule (as has been the case since 1999), one week after the Cardinal plays archrival Cal in The Big Game. All but two of the games in South Bend have been played in October; the only exceptions, in 2010 and 2018, were on the last Saturday of September. The 2020 game, scheduled to be played in South Bend on Saturday October 10, was canceled because of the COVID-19 pandemic, along with all other non-conference games involving Pac-12 schools. Notre Dame's rivalry games with USC and Navy were also canceled. Historical rivalries Notre Dame has traditionally played Division I FBS football independent from any conference affiliation. In its early years joining a conference, in particular the geographically-contiguous Big Ten Conference, would have provided stability and scheduling opportunities. Conferences have periodically approached Notre Dame about joining, most notably the Big Ten in 1999. Notre Dame elected to keep its independent status in football feeling that it has contributed to Notre Dame's unique place in college football lore. Subsequently, Notre Dame joined the ACC for other sports and agreed to 5 football games with ACC teams per year. The following is a list of historical rivalries in order of games played and a synopsis of each series history. Purdue This in-state rivalry began in 1896. From 1946 to 2014, the Fighting Irish played Purdue Boilermakers every year without interruption. The series is scheduled to resume on a non-annual basis in 2020 with Notre Dame leading the series 58–26–2. The two teams play for the Shillelagh Trophy. The series has been marked by a number of key upsets. Purdue ended Notre Dame's 39-game unbeaten streak in 1950 and posted upsets in 1954, 1967 and 1974. They also hold the record for the most points scored in one game by an opponent in Notre Dame Stadium with 51 in 1960. In addition, Purdue holds records for the most points scored against Notre Dame in the first (24 in 1974) and second quarters (31 in 1960). On September 28, 1968, #1 Purdue defeated #2 Notre Dame 37–22 behind the effort of Leroy Keyes, a two-way player for the Boilermakers. It was the eleventh 1 vs 2 game (and the sixth involving Notre Dame). Michigan State Notre Dame also has a rivalry with Michigan State University that began in 1897. From 1959 to 2013 the Fighting Irish played Michigan State every year without interruption, except for a two-year hiatus in 1995 and 1996. The next scheduled game is in 2026. The 1966 Notre Dame vs. Michigan State football game is regarded as one of the Games of the Century and is still talked about to this day because of its ending - a 10–10 tie. 
Since polls began in 1936, this game marked the 10th matchup that paired the #1 team against the #2 team, with Notre Dame having been involved in five of these ten games up to that point. Notre Dame leads the series 48–28–1. Pittsburgh The Fighting Irish's longtime series with the Pittsburgh Panthers, Notre Dame's fifth most played football opponent, began in 1909, and there have been no more than two consecutive seasons without two teams meeting each other except between 1913 and 1929, 1938–42, and 1979–81. Since 1982, the Panthers have remained a relative fixture on the schedule. Notre Dame leads the series 49–21–1. The longest game in Notre Dame history occurred between the two schools in 2008, when Pitt defeated ND in a record 4 overtimes by a field goal. The 2012 contest saw Notre Dame erase a 20–6 deficit in the fourth quarter and force overtime. The Irish won 29–26 in triple overtime after the Panthers narrowly missed a game-winning field goal in the second overtime. In 2013 both schools joined the ACC (Pitt for all sports including football, and Notre Dame for non-football sports), which led to the schools playing at least once every three years. Their ACC matches began in 2013 in Pittsburgh with a 28–21 Panthers win; the most recent game was a 45-3 Irish win in 2020. Army While Notre Dame and Army aren't exactly rivals in a modern sense, it was Army that helped Notre Dame gain a national following by agreeing to schedule them during the Rockne years while Notre Dame was boycotted by the Big Ten. The first Army–Notre Dame matchup in 1913 is generally regarded as the game that put the Fighting Irish on the college football map. In that game, Notre Dame revolutionized the forward pass in a stunning 35–13 victory. For years it was "The Game" on Notre Dame's schedule, played at the first Yankee Stadium in New York. During the 1940s, the rivalry with the U.S. Military Academy Cadets (now Black Knights) reached its zenith. This was because both teams were extremely successful and met several times in key games (including one of the Games of the Century, a scoreless tie in the 1946 Army vs. Notre Dame football game). In 1944, the Cadets administered the worst defeat in Notre Dame football history, crushing the Fighting Irish 59–0. The following year, it was more of the same, a 48–0 blitzkrieg. The 1947 game was played in South Bend for the first time and the Fighting Irish prevailed, 27–7. The annual game then went on hiatus for 10 years, after occurring every year since 1919. Since then, there have been infrequent meetings over the past several decades, with Army's last win coming in 1958. Like Navy, due to the small capacity of Army's Michie Stadium, the Black Knights would play their home games at a neutral site, which for a number of years was Yankee Stadium and before that, the Polo Grounds. In 1957, the game was played in Philadelphia's Municipal (later John F. Kennedy Memorial) Stadium while in 1965, the teams met at Shea Stadium in New York. They last met at Yankee Stadium in 2010. The 1973 contest was played at West Point with the Fighting Irish prevailing, 62–3. In more recent times, games in which Army was the host have been played at Giants Stadium in East Rutherford, New Jersey. Notre Dame leads the series 39–8–4, most recently defeating Army 44–6 at the Alamodome in San Antonio in 2016. Northwestern It began in 1889, one of the oldest in Fighting Irish football annals. 
It has been suggested that the nickname, "Fighting Irish," originated during that first meeting when Northwestern fans chanted, "Kill those Irish! Kill those fighting Irish!" at halftime. Northwestern and Notre Dame had a yearly contest from 1929 to 1948, with the winner taking home a shillelagh, much like the winner of the Notre Dame–USC contest now receives. The Northwestern-Notre Dame shillelagh was largely forgotten by the early 1960s. Northwestern ended the series after 1948, as did several other schools who were getting tired of being beaten year in and year out by Notre Dame, and the two schools would not meet again until 1959. By then, Ara Parseghian was coaching the Wildcats, who notched four consecutive victories over Notre Dame between 1959 and 1962. After Ara came to Notre Dame, he posted a 9–0 docket against his old team. In fact, the Fighting Irish did not lose to Northwestern again until September 1995, which was the beginning of a Rose Bowl season for the Wildcats and the two teams' last meeting for nearly 20 years. The series was renewed in 2014 when the Wildcats traveled to South Bend for the first time since 1995, defeating the Irish 43–40 in overtime. The Irish repaid the visit in 2018 when they traveled to Evanston and defeated Wildcats 31–21. Notre Dame leads the series 38–9–2. Michigan Notre Dame and Michigan first played in 1887 in Notre Dame's introduction to football. The Wolverines proceeded to win the first eight contests, before losing in 1909, the final game in the series until 1942, when the Wolverines defeated the Fighting Irish. On October 9, 1943, top-ranked Notre Dame defeated second-ranked Michigan in the first matchup of top teams since the institution of the AP Poll in 1936. The rivalry then froze at 11 games played until 1978, when it launched an evenly matched 15–15–1 run through 2014 (skipping only 1983–84, 1995–96, and 2000–01). In the aftermath of Notre Dame's 5 game ACC schedule and Michigan's expanded Big 10 schedule, the series was put on a three-year hiatus after the 2014 game, a 31-0 Notre Dame victory. The series resumed in South Bend in 2018, and another game is scheduled at Ann Arbor for 2019. The rivalry is heightened by the two schools' competitive leadership atop the college football all-time winning percentage board, as well as its competition for the same type of student-athletes. Michigan leads the series 24-18-1. Boston College Boston College is considered to be a rival with Notre Dame based on both institutions' connection to the Roman Catholic Church. The Fighting Irish and Boston College Eagles first met in 1975 in Dan Devine's debut as head coach. They met in the 1983 Liberty Bowl and during the regular season in 1987, then played each other annually from 1992 to 2004. The Fighting Irish and Eagles play for the Frank Leahy Memorial Bowl and Ireland Trophy. The matchup has become relatively popular and gained several nicknames including the "Holy War", "The Bingo Bowl" and "The Celtic Bowl". In 1993, the Eagles ruined Notre Dame's undefeated season with a 41–39 victory on a 41-yard field goal by David Gordon as time ran out, overshadowing a furious comeback from a 38–17 fourth quarter deficit by Notre Dame. Notre Dame leads the series 14–9, winning the last five after the Eagles won the prior six meetings. The series was scheduled to end after the 2010 season due in part to BC's move to the ACC; however, it was renewed in 2010. With Notre Dame's move to the ACC, they will continue to meet at least semi-regularly. 
The first meeting after Notre Dame's arrival in the ACC was held at Fenway Park in 2015 as part of Notre Dame's Shamrock Series, with the Irish winning 19–16; the next was in 2017 with Notre Dame winning 49–20. Notre Dame prevailed in 2020 by a score against BC of 45–31. Significant series The following is a list of other significant series in order of games played and a synopsis of each series history. Florida State Notre Dame and Florida State have met eleven times, first when both were strong independents and now as part of Notre Dame's commitment to scheduling ACC schools. This series began in South Bend, in 1981. Florida State won 19–13. Notre Dame won the second contest, however, in 1993, in South Bend, by a score of 31–24. The contest was referred to by some as "The Game of the Century." Florida State was, at the time, ranked #1 and Notre Dame was ranked #2. Although Notre Dame beat FSU again, in Tallahassee, in 2002 (by a score of 34–24), Florida State then won five of the next six meetings (1994, 1995, 2003, 2011, 2014), including two bowl victories (the January 1996 Orange Bowl and the 2011 Champs Sports Bowl). But Notre Dame rebounded with three straights wins with two convincing victories in South Bend (2018 and 2020) and an overtime thriller in Tallahassee (2021). The two teams are currently scheduled to meet again, during the regular season in 2024. Florida State leads the series 6–5. Georgia Tech This series began in 1922. The Yellow Jackets were a longtime rival of the Fighting Irish and the two teams met periodically on an annual basis over the years, particularly from 1963 to 1981 when both schools were independents following Tech's departure from the Southeastern Conference. The 1975 Georgia Tech-Notre Dame game marked the sole appearance in an Irish uniform of Rudy Ruettiger, the subject of the film Rudy. When Georgia Tech joined the Atlantic Coast Conference beginning in 1982, they were forced to end the series after 1981 because of scheduling difficulties. Consequently, the two teams have met very infrequently since then. Georgia Tech was the opponent in the inaugural game in the newly expanded Notre Dame Stadium in 1997, then a year later they met again in the Gator Bowl. The Fighting Irish and Yellow Jackets met in the 2006 and 2007 season openers and split both games. Notre Dame leads the series 29–6–1. The rivalry resumed in 2015 with a 30–22 Irish win in South Bend, and will continue on a semi-regular basis thereafter, due to Notre Dame's current commitment to scheduling several ACC opponents each season. Miami (FL) The series with the University of Miami Hurricanes began in 1955. They met three times in Miami during the 1960s (1960, 1965 and 1967), then played each other annually from 1971 to 1990, except in 1986, during a period when the two were among college football's strongest independents. Throughout the 1970s, this series was dominated by Notre Dame. Traditionally, it was the season-ending game for the Fighting Irish in odd-numbered years, as they sought to end each season at a warm-weather site. Miami holds the distinction of being the only team to shut out Notre Dame during the Ara Parseghian (0–0 in 1965), Gerry Faust (20–0 in 1983) and Lou Holtz (24–0 in 1987) eras. During the 1980s, this once-docile rivalry intensified. Both teams were national contenders in the later part of the decade, and both teams cost each other at least one national championship. 
Hostilities were fueled when the Hurricanes routed the Fighting Irish in the 1985 season finale 58–7, with Miami widely accused of running up the score in the second half. The rivalry gained national attention and both teams played their most famous games from 1988 to 1990. The infamous game known as Catholics vs. Convicts was won by the Fighting Irish 31–30, with Miami ending Notre Dame's record 23-game winning streak the following year, 27–10. The rivalry ended after the Fighting Irish dashed #2 Miami's hopes for a repeat national championship with a 29–20 victory in South Bend. Notre Dame dropped Miami from the schedule due to the intensified rivalry. The Fighting Irish and Hurricanes met again, in the 2010 Sun Bowl in El Paso, Texas, where Notre Dame defeated Miami 33–17. In 2012, Notre Dame defeated Miami 41–3 at Soldier Field. Notre Dame leads the series 17–8–1. The teams met most recently in 2017, with Miami winning 41–8. Nebraska The Fighting Irish and Nebraska Cornhuskers first met in 1915 and played each other annually through 1925. During the years of Notre Dame's famed Four Horsemen backfield from 1922 to 1924, the Fighting Irish compiled a record of 27–2–1, with their only losses coming to Nebraska in Lincoln (1922 and 1923). The Fighting Irish won in 1924 in South Bend and Nebraska won in 1925 in Lincoln, evening up the series at 5–5–1 (the 0–0 tie occurring in 1918). The Huskers were replaced on Notre Dame's schedule with USC. They met twice during the Frank Leahy era in 1947 and 1948 (with the Fighting Irish winning 31–0 and 44–13, respectively) and squared off in the 1973 Orange Bowl, a game in which the Huskers handed the Fighting Irish their worst defeat under Ara Parseghian, 40–6. More recently, there was a home-and-home series in 2000–01 (with the Huskers winning 27–24 and 27–10, respectively). The 2000 game was a memorable one, as #1 Nebraska escaped a Fighting Irish defeat in overtime on a touchdown run by Heisman winner Eric Crouch. Nebraska leads the series 8–7–1. Penn State Notre Dame and Penn State first met in 1913. After subsequent games in 1925, 1926 and 1928, the two schools would not meet again until the 1976 Gator Bowl, by which time an annual home-and-home series beginning in 1981 had been agreed upon. The Fighting Irish held a 4–0–1 edge going into 1981, but Penn State won 6 of the next 7. The coaches were one source of the rivalry. Lou Holtz and Joe Paterno were both long serving and successful coaches. Their friendly rivalry helped expand the Notre Dame–Penn State rivalry to new dimensions. The series ended after the 1992 season, coinciding with formerly independent Penn State's affiliation with the Big Ten. It had been scheduled to continue through 1994 and Notre Dame approached Penn State about extending it even further, but Penn State's admittance to the Big Ten in 1990 made it more difficult to fit the games on the schedule. However, the Fighting Irish and Nittany Lions recent successes and other factors led to the renewal of the rivalry in 2006–07, in which the teams split both games. The series is tied 9–9–1. 
References Army Black Knights football Boston College Eagles football Florida State Seminoles football Georgia Tech Yellow Jackets football Michigan Wolverines football Michigan State Spartans football Navy Midshipmen football Nebraska Cornhuskers football North Carolina Tar Heels football Northwestern Wildcats football Notre Dame Fighting Irish football Pittsburgh Panthers football Purdue Boilermakers football Stanford Cardinal football USC Trojans football
1814110
https://en.wikipedia.org/wiki/Sun%20Ray
Sun Ray
The Sun Ray was a stateless thin client computer (and associated software) aimed at corporate environments, originally introduced by Sun Microsystems in September 1999 and discontinued by Oracle Corporation in 2014. It featured a smart card reader, and several models included an integrated flat panel display. The idea of a stateless desktop was a significant shift from, and the eventual successor to, Sun's earlier line of diskless Java-only desktops, the JavaStation. Predecessor The concept began in Sun Microsystems Laboratories in 1997 as a project codenamed NetWorkTerminal (NeWT). The client was designed to be small, low cost, low power, and silent. It was based on the Sun Microelectronics MicroSPARC IIep. Other processors initially considered for it included Intel's StrongARM, Philips Semiconductors' TriMedia, and National Semiconductor's Geode. The MicroSPARC IIep was selected because of its high level of integration, good performance, low cost, and general availability. NeWT included 8 MiB of EDO DRAM and 4 MiB of NOR flash. The graphics controller used was the ATI Rage 128, chosen for its low power, 2D rendering performance, and low cost. It also included an ATI video encoder for TV-out (removed in the Sun Ray 1), a Philips Semiconductor SAA7114 video decoder/scaler, a Crystal Semiconductor audio CODEC, a Sun Microelectronics Ethernet controller, a PCI USB host interface with a 4-port hub, and an I²C smart card interface. The motherboard and daughtercard were housed in an off-the-shelf commercial small form-factor PC case with an internal +12/+5 VDC auto-ranging power supply. NeWT was designed to have feature parity with a modern business PC in every way possible. Instead of a commercial operating system, the client ran a real-time operating system called "exec", which was originally developed in Sun Labs as part of an Ethernet-based security camera project codenamed NetCam. Fewer than 60 NeWTs were ever built and very few survived; one is in the collection of the Computer History Museum in Mountain View, California. In July 2013, reports circulated that Oracle was ending development of the Sun Ray and related products, and Scott McNealy (long-time CEO of Sun) tweeted about this. An official announcement was made on August 1, 2013, with a last order date in February 2014. Support and hardware maintenance were available until 2017. Design In contrast to a thick client, the Sun Ray is only a networked display device: applications run on a server elsewhere, and the state of the user's session is independent of the display. This enables another feature of the Sun Ray, portable sessions: a user can go from one Sun Ray to another and continue working without closing any programs. With a smart card, the user simply inserts the card and is presented with their session; whether reauthentication is required depends on the mode of operation. Without a smart card, the procedure is almost identical, except that the user must supply a username as well as a password to reach the session. In either case, if a session does not yet exist, a new one is created the first time the user connects. Sun Ray clients are connected via an Ethernet network to a Sun Ray Server. Sun Ray Software (SRS) is available for the Solaris and Linux operating systems. Sun developed a separate network display protocol, the Appliance Link Protocol (ALP), for the Sun Ray system. VMware announced support for the protocol in VMware View in 2008. The Sun Ray Software has two basic modes of operation: generic session or kiosk mode.
In a generic session, the user will see the Solaris or Linux login screen of the operating system that is running SRS. In kiosk mode, the login screen varies depending on the session type in use. Kiosk mode can be used for a number of different desktops or applications. Oracle integrated an RDP client and a VMware View client into the Sun Ray software that can be used in kiosk mode to start a full-screen Windows session. In this mode, no window manager or Unix desktop is started. The remote environment can be any OS that supports RDP. In 2007, Sun and UK company Thruput integrated the Sun Ray 2FS with 28" (2048 × 2048), 30" (2560 × 1600) and 56" (3840 × 2160) displays; in 2008 they trialed an external graphics accelerator that enables the Sun Ray to be used with any high-resolution display. Models
NetWork Terminal (NeWT) – original Sun Labs prototype, no display
Sun Ray 1 – supports displays up to 1280×1024 at 85 Hz
Sun Ray 1G – supports displays up to 1920×1200 at 75 Hz
Sun Ray 100 – integrated into a 17" CRT monitor
Sun Ray 150 – integrated into a 15" LCD monitor
Sun Ray 170 – integrated into a 17" LCD monitor
Sun Ray 2 – small footprint, low power (4 watts). Two versions exist, the original based on DDR memory and the newer one based on DDR2. Firmware is not compatible between the DDR and DDR2 models, and SRSS needs patches to work correctly with the newer variant.
Sun Ray 2FS – support for dual heads, 100BASE-FX
Sun Ray 270 – integrated into a 17" LCD, mountable
Sun Ray 3 – supports graphics resolutions of up to 1920 × 1200, five Universal Serial Bus (USB) 2.0 ports, one serial port (DB9), one single-DVI-I video connector, 10/100/1000 Mbit/s (RJ45) Ethernet
Sun Ray 3i – Full HD 1920 × 1080 maximum resolution 16:9 widescreen 21.5" LCD display, five USB 2.0 ports, built-in smart card reader, VESA 100 × 100 mm mount and removable stand
Sun Ray 3 plus – support for dual-head Dual-Link DVI, maximum resolution up to 2560 × 1600 (30" LCD display), four Universal Serial Bus (USB) 2.0 ports, built-in smart card reader, one serial port (DB9), Gigabit Ethernet (RJ-45 and SFP), Energy Star 5.0 qualified (14.15 W in use), headphone and mic jacks
The Sun Ray 3 models were the last in production; last order date February 28, 2014; last ship date August 31, 2014. Sun's OEM partners produced Wi-Fi notebook versions of Sun Ray:
Comet 12 – Sun Ray 12" notebook produced by General Dynamics
Comet 15 – Sun Ray 15" notebook produced by General Dynamics
Jasper 320 – Sun Ray 2 notebook produced by Naturetech
Amber 808 – Sun Ray 2 tablet produced by Naturetech
Opal 608 – Sun Ray 2 tablet produced by Naturetech
Gobi 7 – Sun Ray 2 notebook produced by Aimtec
Gobi 8 – Sun Ray 2 notebook with 3G support produced by Aimtec
Ultra ThinPad – Sun Ray 2 notebook produced by Arima
Ultra ThinTouch – Sun Ray 2 tablet produced by Arima
UltraSlim – Sun Ray 2 variant produced by Arima
Tadpole M1400 – Sun Ray 2 notebook with 3G support produced by Tadpole
Hardware The Sun Ray 1 clients initially used a 100 MHz MicroSPARC IIep processor, followed by a custom SoC version codenamed Copernicus (US 6,993,617 B2), which was based on the MicroSPARC IIep core but added 4 MiB of on-chip DRAM, USB, and a smart card interface in addition to the memory controller and PCI interface already on the MicroSPARC IIep. The Sun Ray 2 and 3 clients use the MIPS architecture-based RMI Alchemy Au1550 processor. Software-only client A pure software implementation, Sun Desktop Access Client, was introduced as part of Sun Ray Software 5 (SRS5).
This was later rebranded by Oracle as Oracle Virtual Desktop Client; it was discontinued along with the Sun Ray product line in 2014. Microsoft Windows access In commercial environments, Sun Rays were most commonly deployed as thin clients to access a Microsoft Windows desktop using the SRSS built-in RDP client uttsc. The desktop can be a Terminal Server session or a virtual machine (VDI). This setup is flexible and works well in many environments because the intermediate Sun Ray Server layer is transparent to the Windows desktop. At the same time, however, this transparency can become an issue for software that is location dependent. If location-dependent information needs to be added, it is possible to extend the functionality of the Sun Ray software with additional custom scripts. The Sun Ray Wiki offers a "Follow Me Printing" setup as an example: a user always gets the nearest printer as the default printer when going from room to room or location to location, including inside their Windows session. It is relatively easy for an administrator to extend and add to this functionality as required; a minimal sketch of the idea appears at the end of this entry. See also Dell FX100 Sun VDI References External links Sun Ray User Group An open-source server for Sun Rays Thin clients Sun computers SPARC microprocessor products MIPS architecture Computer-related introductions in 1999
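The following is a minimal sketch of the location-dependent extension idea described above; it is not code from the Sun Ray Wiki. It assumes the kiosk or session start-up script can learn an identifier for the connected client's location (the environment variable name below is a placeholder) and that the chosen printer name is then handed to the Windows session by whatever mechanism the deployment already uses.

# Illustrative sketch only -- not part of the Sun Ray software.
import os

# Hypothetical mapping from location identifiers to the nearest printer.
PRINTER_BY_LOCATION = {
    "room101": "PRN-101",
    "room102": "PRN-102",
    "lobby":   "PRN-LOBBY",
}

def nearest_printer(location: str, default: str = "PRN-CENTRAL") -> str:
    """Return the printer configured for a location, or a fallback printer."""
    return PRINTER_BY_LOCATION.get(location, default)

if __name__ == "__main__":
    # How the location reaches the script is deployment-specific; the variable
    # name here is a placeholder, not a documented Sun Ray interface.
    location = os.environ.get("SUNRAY_LOCATION", "unknown")
    print(nearest_printer(location))

In a real deployment the mapping would typically live in a configuration file or directory service rather than in the script itself.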
611456
https://en.wikipedia.org/wiki/Mambo%20%28software%29
Mambo (software)
Mambo (formerly named Mambo Open Source or MOS) was a free software/open source content management system (CMS) for creating and managing websites through a simple web interface. Its last release was in 2008, by which time all of the developers had left for forks of the project, mainly Joomla and MiaCMS. Features Mambo included features such as page caching to improve performance on busy sites, advanced templating techniques, and a fairly robust API. It could provide RSS feeds and automate many tasks, including web indexing of static pages. Interface features included printable versions of pages, news flashes, blogs, forums, polls, calendars, website searching, language internationalization, and others. Mambo Foundation The rights to the Mambo CMS codebase, name and copyrights, are protected by the Mambo Foundation, a non-profit corporation formed to support and promote the Mambo Open Source project. The Mambo Foundation is a non-profit entity established under the laws of Australia and is controlled by the members of the Foundation via an elected Board of Directors. The Mambo Foundation's brief is to foster the development of the Mambo system and to shelter the project from threats and misuse. As of March 2013, the Foundation's website was no longer serving content, apart from a notice that its DNS service had expired. As of August 2016, the project's website was redirecting to Alcoholics Anonymous Victoria. As of June 2017, the Mambo Foundation site is back up, albeit without forums. According to the home page text, the server had been attacked on a number of occasions but has since been resurrected. Timeline 2000: Miro Construct Pty Ltd, registered in March 2000 in Melbourne, and headed up by CEO Peter Lamont and Junio Souza Martins, a former advertising executive, starts development of Mambo, a closed-source, proprietary content management system. 2001: The company adopted a dual licensing policy, releasing Mambo Site Server under the GPL on SourceForge in April 2001. May 2001: The mamboserver.com domain name is registered. From this time until the middle of 2002, Miro was the only developer of Mambo, contributing bug-fixes and security patches but not really extending the code or adding to the feature sets. 2002: Miro releases the commercial CMS called Mambo 2002. With version 3.0.x, the open source Mambo Site Server becomes "Mambo Open Source" (commonly referred to as "MOS"). Robert Castley becomes Project Director of Mambo Open Source. By the end of 2002, Robert Castley had pulled together a volunteer team of developers. Mambo Open Source 4.0 is released. 2003: Early in 2003, Miro hands off the responsibility of the code fully to the Open Source project Development Team. Miro concentrates on its commercial products and Mambo Open Source builds momentum under the leadership of Robert Castley. Miro released Mambo CMS, a commercial version of Mambo Open Source. Miro claims that Mambo CMS does not contain any source added to Mambo after it was made open source. Miro Construct Pty Ltd goes into voluntary liquidation in February 2003 and in August, Miro International Pty Ltd is formed. Source code for Mambo Open Source shows copyright 2000 - 2003 Miro Construct Pty Ltd. Mambo Open Source 4.5 released in December 2003. By this time, almost all of the original Miro code had disappeared during refactoring. 2004: mamboforge.net starts in March, 2004. Linux Format awards Mambo "Best Free Software Project" of the Year. Linux User and Developer names it "Best Linux or Open Source Software". 
In late 2004, Mambo was targeted by legal threats concerning the intellectual property rights to certain pieces of code contained in the core. The problem was severe and cost money, man hours, and eventually the loss of some key community leaders. Miro came to the aid of Mambo, offering legal and corporate resources to protect the development team and preserve the program. Robert Castley resigns as Project Director and in November, Andrew Eddie takes on the role. December 2004: the Mambo Steering Committee was established with representatives from both Miro and the Mambo development team. This committee was designed to govern the Mambo project. January 2005: Andrew Eddie announces a joint venture between Mambo and Miro International Pty Ltd, with Miro proposing to offer financial support for the open source project, plus training, commercial support services, and developer certification. At the end of January 2005 Junio Souza Martins abandons the project for personal reasons. February 2005: Discussions begin over the formation of a non-profit foundation for the Mambo project. March 2005: The name "Mambo Open Source" (which was commonly referred to as MOS) was changed to just "Mambo", causing concern in the community over apparent confusion this would cause between the open source, community-developed CMS and Miro's commercial offering, "Mambo CMS". April 2005: The commercial Mambo CMS is renamed "Jango". "Best Open Source Solution" and "Best of Show - Total Industry Solution" at LinuxWorld Boston. "Best Open Source Solution" at LinuxWorld San Francisco. July 2005: mambo-foundation.org domain is established. August 2005: Mambo Foundation, Inc is legally constituted on 8 August 2005. Miro CEO Peter Lamont appoints himself President of the Board of the new Foundation. 12 August: Robert Castley, who is an inaugural member of the Mambo Foundation Board of Regents, states: "The Foundation allows for everything to be placed outside of Miro incl. Domain Names, hosting etc. " and goes on to say that with him, the original founder of Mambo Open Source, and Andrew Eddie both being on the Board of the Mambo Foundation, Mambo would continue as a successful, open source project. He concluded his statement with,"So there you have it: two very key people in the overall success of Mambo are at the helm. Trust me, Mambo is in very, very safe hands!" A few days later, the entire team of core programmers publicly announced they had abandoned Mambo and shortly after this, Robert Castley steps down from the Board of Regents. The former core development team members regroup under the name "Open Source Matters" and the open source community at mamboserver.com fractures over allegations that the Mambo Foundation was formed without community input and with insufficient developer control. People express suspicion over the level of involvement by Miro International. By the end of August, the new project is named Joomla! and most of the former Mambo community has relocated to Open Source Matters. By the end of September, Open Source Matters Inc is a duly constituted non-profit corporation registered in New York. Joomla! positions itself as a "rebranding of Mambo" and releases its first fork of Mambo as Joomla 1.0 in September, 2005. The two code-bases are almost identical at this stage. Mambo forms a new core development team with Martin Brampton appointed as Core Development Team leader. Miro assigns all rights in the copyright of Mambo to the Mambo Foundation. 
September 2005: Neil Thompson joins the Core Development Team. December 2005: Miro International Pty Ltd is voluntarily deregistered as a company from 31 December 2005. January 2006: The rights to Miro International Pty Ltd are sold by Peter Lamont and a new business entity called Miro Software Solutions is created. Miro Software Solutions continues to develop Jango and other proprietary software under new ownership. March 2006: Mambo named "Best Open Source Software Solution" at LinuxWorld Australia. April 2006: Core developer team leader, Martin Brampton, resigns and leaves the project. Chad Auld takes over the role as Core Developer Team leader. July 2006: The Mambo Foundation websites become independent from Mambo Communities Pty Ltd. Following elections, the new Board of the Mambo Foundation takes office. The Mambo Foundation is now completely independent of any corporate interest. April 2007: Mambo 4.6.2 is released. This is a maintenance release for the 4.6.x branch and enables localisation of Mambo. January 2008: Mambo 4.5.6 is released. This is the final release of the Mambo 4.5 branch. February 2008: Chad Auld leaves the project. March 2008: John Messingham becomes Project Leader. Ozgur Cem Sen becomes core development team leader. Ozgur Cem Sen leaves the project shortly thereafter. Andrés Felipe Vargas Valencia is elected Team Leader. April 2008: Four former Mambo core developers fork Mambo and form MiaCMS. May 2008: Mambo 4.6.4 is released. Codename 'Sunrise', Mambo 4.6.4 is a security and maintenance release that fixes a number of serious security vulnerabilities. June 2008: Mambo 4.6.5 is released. Codename 'Jupiter', Mambo 4.6.5 is a security release that fixes a number of serious security vulnerabilities. September 2008: Core developer Neil Thompson leaves and joins the MiaCMS core development team. Mambo announces end of life for supporting PHP 4. All future releases will require PHP 5.2 or higher. November 2011: Andrés Felipe Vargas noted on the Mambo Foundation's mailing list that the next minor version update was planned to be 4.6.6, followed by a major version release of 4.7. Awards "Best Free Software Project of the Year" - Linux Format Magazine, 2004 "Best Linux or Open Source Software" - LinuxUser & Developer 2004 "Best Open Source Solution" - LinuxWorld, Boston 2005 "Best of Show - Total Industry Solution" - LinuxWorld, Boston 2005 "Best Open Source Solution" - LinuxWorld, San Francisco 2005 "Best Open Source Solution" LinuxWorld, Sydney, Australia 2006 See also List of content management systems References Free content management systems Free software programmed in PHP
11172360
https://en.wikipedia.org/wiki/Liberation%20fonts
Liberation fonts
Liberation is the collective name of four TrueType font families: Liberation Sans, Liberation Sans Narrow, Liberation Serif, and Liberation Mono. These fonts are metrically compatible with the most popular fonts on the Microsoft Windows operating system and the Microsoft Office software package (Monotype Corporation’s Arial, Arial Narrow, Times New Roman and Courier New, respectively), for which Liberation is intended as a free substitute. The fonts are the default fonts in LibreOffice. Characteristics Liberation Sans, Liberation Sans Narrow, and Liberation Serif closely match the metrics of the Monotype Corporation fonts Arial, Arial Narrow, and Times New Roman, respectively. This means that the widths and heights of letters and symbols are identical between the Liberation fonts and the corresponding Monotype fonts, so the Monotype fonts can be substituted with the corresponding Liberation fonts without changing the document layout. Liberation Mono is styled closer to Liberation Sans than to Monotype’s Courier New, though its metrics match those of Courier New. The Liberation fonts are intended as free, open-source replacements of the aforementioned proprietary fonts. Unicode coverage The original three font families supported IBM/Microsoft code pages 437, 737, 775, 850, 852, 855, 857, 858, 860, 861, 863, 865, 866, 869, 1250, 1251, 1252, 1253, 1254, 1257, the Macintosh Character Set (US Roman), and the Windows OEM character set, that is, the Latin, Greek, and Cyrillic alphabets, leaving out many writing systems. Extension to other writing systems was prevented by the fonts' unique licensing terms. Since the old fonts were replaced by the Croscore equivalents, expanded Unicode coverage has become possible. History The fonts were developed by Steve Matteson of Ascender Corporation as Ascender Sans and Ascender Serif. A variant of this font family, with the addition of a monospaced font and an open-source license, was licensed by Red Hat Inc. as the Liberation font family. Liberation Sans and Liberation Serif derive from Ascender Sans and Ascender Serif respectively; Liberation Mono uses base designs from Ascender Sans and Ascender Uni Duo. The fonts were developed in two stages. The first release of May 2007 was a set of fully usable fonts, but they lacked full hinting capability. The second release, made available in the beginning of 2008, provides full hinting of the fonts. In April 2010, Oracle Corporation contributed the Liberation Sans Narrow typefaces to the project. They are metrically compatible with the popular Arial Narrow font family. With Liberation Fonts 1.06 the new typefaces were officially released. Distribution Version 2.00.0 or above As of December 2018, Liberation Fonts 2.00.0 and above are a fork of the Chrome OS Fonts released under the SIL Open Font License, and all fonts are developed at GitHub. Older versions Red Hat licensed these fonts from Ascender Corp under the GNU General Public License with a font embedding exception, which states that documents embedding these fonts do not automatically fall under the GNU GPL. As a further exception, any distribution of the object code of the Software in a physical product must provide the right to access and modify the source code for the Software and to reinstall that modified version of the Software in object code form on the same physical product on which it was received. Thus, these fonts permit free and open-source software (FOSS) systems to have high-quality fonts that are metric-compatible with Microsoft software.
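As an illustration of what this metric compatibility means in practice, the following sketch compares the advance widths of every character shared by two font files. It is not part of the Liberation project: the file paths are placeholders, and it assumes the Python fontTools library is installed.

# Sketch: check metric compatibility between two fonts by comparing the
# horizontal advance width of each shared codepoint, in em units.
from fontTools.ttLib import TTFont

def advance_widths(path):
    """Map each Unicode codepoint in the font to its advance width in em units."""
    font = TTFont(path)
    upm = font["head"].unitsPerEm          # design units per em
    cmap = font["cmap"].getBestCmap()      # codepoint -> glyph name
    hmtx = font["hmtx"]                    # glyph name -> (advance, left side bearing)
    return {cp: hmtx[glyph][0] / upm for cp, glyph in cmap.items()}

def compare(path_a, path_b):
    a, b = advance_widths(path_a), advance_widths(path_b)
    shared = a.keys() & b.keys()
    mismatches = [cp for cp in shared if abs(a[cp] - b[cp]) > 1e-6]
    print(f"{len(shared)} shared codepoints, {len(mismatches)} width mismatches")

if __name__ == "__main__":
    # Placeholder paths -- substitute real font files available on your system.
    compare("LiberationSans-Regular.ttf", "Arial.ttf")

For a metrically compatible pair such as Arial and Liberation Sans, the shared codepoints should report no width mismatches, which is why one family can replace the other without reflowing a document.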
The Fedora Project, as of version 9, was the first major Linux distribution to include these fonts by default and features slightly revised versions of the Liberation fonts contributed by Ascender. These include a dotted zero and various changes made for the benefit of internationalization. Some other Linux distributions (such as Ubuntu, OpenSUSE and Mandriva Linux) included Liberation fonts in their default installations. The open source software LibreOffice, OpenOffice.org and Collabora Online included Liberation fonts in their installation packages for all supported operating systems. Due to licensing concerns with fonts released under a GPL license, some projects looked for alternatives to the Liberation fonts. Starting with Apache OpenOffice 3.4, Liberation Fonts were replaced with the Chrome OS Fonts – also known as Croscore fonts: Arimo (sans), Cousine (monospace), and Tinos (serif) – which are made available by Ascender Corporation under the Apache License 2.0. Unsupported features Unlike modern versions of Times New Roman, Arial, and Courier New, Liberation fonts do not support advanced OpenType typography features like ligatures, old style numerals, or fractions. See also Typefaces Croscore fonts – fonts which formed the basis for Liberation fonts Droid – a font family by the same font designer Gentium – an Open Font License font which defines roughly 1,500 glyphs covering almost all the range of Latin characters used worldwide Linux Libertine – another free software serif typeface with OpenType feature support Nimbus Roman No. 9 L, Nimbus Sans L and Nimbus Mono L – another series of free software fonts also designed to be substituted for Times New Roman, Arial and Courier FreeFont – derived from Nimbus, but with better Unicode support Other open-source Unicode typefaces References External links Liberation fonts at github.com Liberation font files at the releases page Liberation Sans Narrow files at the releases page Unified serif and sans-serif typeface families Free software Unicode typefaces Open-source typefaces Typefaces and fonts introduced in 2007
89834
https://en.wikipedia.org/wiki/Karl%20Koch%20%28hacker%29
Karl Koch (hacker)
Karl Werner Lothar Koch (July 22, 1965 – c. May 23, 1989) was a German hacker in the 1980s, who called himself "hagbard", after Hagbard Celine. He was involved in a Cold War computer espionage incident. Biography Koch was born in Hanover. He grew up under difficult circumstances: his mother died of cancer in 1976, and his father had alcohol problems. Koch was interested in astronomy as a teenager and was involved in the state students' council. In 1979 Karl's father gave him the 1975 book Illuminatus! – The Golden Apple by Robert Anton Wilson and Robert Shea, which had a very strong influence on him. From his income as a member of the state students' council, he bought his first Atari ST computer in 1982 and named it "FUCKUP" ("First Universal Cybernetic-Kinetic Ultra-Micro Programmer"), after The Illuminatus! Trilogy. In August 1984 his father also died of cancer. In 1985 Koch and some other hackers founded the Computer-Stammtisch in a pub in Hanover-Oststadt, which later developed into the Chaos Computer Club Hanover. During this time Koch came into contact with hard drugs more and more often. Because of this, Koch broke off a vacation in Spain in February 1987 and had himself admitted to a psychiatric clinic in Aachen for rehab treatment. He left the clinic in May 1987. Hacking He worked with the hackers known as DOB (Dirk-Otto Brezinski), Pengo (Hans Heinrich Hübner), and Urmel (Markus Hess), and was involved in selling hacked information from United States military computers to the KGB. Clifford Stoll's book The Cuckoo's Egg gives a first-person account of the hunt and eventual identification of Hess. Pengo and Koch subsequently came forward and confessed to the authorities under the espionage amnesty, which protected them from being prosecuted. Death Koch was found burned to death with gasoline in a forest near Celle, Germany. The death was officially claimed to be a suicide. Koch had left his workplace in his car to go for lunch; he had not returned by late afternoon, and so his employer reported him as a missing person. German police were alerted to an abandoned car in a forest near Celle on June 1, 1989; upon investigation, it appeared as though it had not moved for years, as it was covered in dust. The remains of Koch – at this point just bones – were discovered close by, a patch of scorched and burnt ground surrounding them, his shoes missing. The scorching was confined to a small circle around the corpse; it had not rained in some time, and the grass was perfectly dry. No suicide note was found with the body. Despite his death being officially ruled a suicide, the unusual circumstances in which Koch's remains were found led to at least some speculation that his death had not been self-inflicted; the patch of scorched ground surrounding the body was a small and seemingly controlled area, ostensibly too much so for death by self-immolation, and no suicide note was ever found. Karl Koch in media Books Movies A German movie about his life, entitled 23, was released in 1998. While the film was critically acclaimed, it has been harshly criticized as exploitative by real-life witnesses. A corrective to the film's account is the documentation written by his friends. In 1990 a documentary was released titled The KGB, The Computer and Me. Music Koch was memorialized by Clock DVA at the opening of their music video for "The Hacker" and in the liner notes for "The Hacker" on the album Buried Dreams (1989). See also Boris Floricic a.k.a.
Tron, a computer hacker who allegedly suffered a similar fate References External links Story of a Grey Hacker WikiLeaks and Karl Werner Lothar Koch 1965 births 1989 suicides German spies for the Soviet Union People from Hanover Suicides by self-immolation Suicides in Germany Hacking (computer security) 1989 deaths
68266753
https://en.wikipedia.org/wiki/Jin-Yi%20Cai
Jin-Yi Cai
Jin-Yi Cai (Chinese: 蔡进一; born 1961) is a Chinese American mathematician and computer scientist. He is a professor of computer science, and also the Steenbock Professor of Mathematical Sciences, at the University of Wisconsin–Madison. His research is in theoretical computer science, especially computational complexity theory. In recent years he has concentrated on the classification of computational counting problems, especially counting graph homomorphisms, counting constraint satisfaction problems, and Holant problems as related to holographic algorithms (a toy illustration of counting graph homomorphisms appears at the end of this entry). Early life Cai was born in Shanghai, China. He studied mathematics at Fudan University, graduating in 1981. He earned a master's degree at Temple University in 1983, a second master's degree at Cornell University in 1985, and his Ph.D. from Cornell in 1986, with Juris Hartmanis as his doctoral advisor. Academic career He became a faculty member at Yale University (1986–1989), Princeton University (1989–1993), and SUNY Buffalo (1993–2000), rising from assistant professor to full professor in 1996. He became a Professor of Computer Science at the University of Wisconsin–Madison in 2000. Awards Cai was a Presidential Young Investigator, a Sloan Research Fellow, and a Guggenheim Fellow. He received a Morningside Silver Medal and a Humboldt Research Award for Senior U.S. Scientists. He was elected a Fellow of the Association for Computing Machinery (2001) and of the American Association for the Advancement of Science (2007), and a foreign member of Academia Europaea (2017). He was jointly awarded the Gödel Prize, an award in theoretical computer science, in 2021 for the paper "Complexity of Counting CSP with Complex Weights". He was also awarded the Fulkerson Prize in discrete mathematics, given by the American Mathematical Society and the Mathematical Programming Society. References 1961 births Living people Mathematicians from Shanghai American computer scientists Chinese computer scientists Theoretical computer scientists Fudan University alumni Temple University alumni Cornell University alumni Yale University faculty Princeton University faculty University at Buffalo faculty University of Wisconsin–Madison faculty Fellows of the Association for Computing Machinery Fellows of the American Association for the Advancement of Science Members of Academia Europaea
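As a purely illustrative aside, and not code from Cai's papers, counting graph homomorphisms asks for the number of edge-preserving maps from a graph G to a graph H. The brute-force sketch below makes the counted object concrete; the dichotomy theorems in this area classify when such counts can or cannot be computed efficiently.

# Toy brute-force counter for graph homomorphisms from G to H.
from itertools import product

def count_homomorphisms(g_edges, n_g, h_edges, n_h):
    """Count maps f: V(G) -> V(H) such that every edge (u, v) of G maps to an edge of H."""
    h_adj = set()
    for u, v in h_edges:
        h_adj.add((u, v))
        h_adj.add((v, u))          # treat H as undirected
    total = 0
    for f in product(range(n_h), repeat=n_g):
        if all((f[u], f[v]) in h_adj for u, v in g_edges):
            total += 1
    return total

if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (2, 0)]
    # Homomorphisms from a triangle to K3 are exactly its proper 3-colourings.
    print(count_homomorphisms(triangle, 3, triangle, 3))  # prints 6

Counting homomorphisms from a triangle to the complete graph K3, as in the example, is the same as counting its proper 3-colourings, of which there are six.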
21365348
https://en.wikipedia.org/wiki/Moscow%20State%20University%20of%20Instrument%20Engineering%20and%20Computer%20Science
Moscow State University of Instrument Engineering and Computer Science
Moscow State University of Instrument Engineering and Computer Science (MSUIECS; MGUPI in Russian) is one of the technical universities of Moscow and Russia. It was founded in 1936 as the Moscow Correspondence Institute of the metal industry. MSUIECS offers a wide range of educational programs to prepare specialists, bachelors, masters, PhDs and doctors of different sciences. Campus To perform its educational and research activities, MGUPI unites nine departments consisting of 41 chairs and ten subsidiaries in Moscow, Tver, Yaroslavl and other regions. Faculties Technological computer science (ТИ) Information Security (BA, MA) Materials Science (BA, MA) Mechanical engineering (BA, MA) Technological machines and equipment Design-engineering software engineering industries (BA, MA) Automation of technological processes and production (BA, MA) Innovation Nanotechnology and Microsystems Art Materials Processing Technology Design of aircraft and rocket engines (specialist) Ground transport and technology tools (specialist) Computer science (ИТ) Instrument making and electronics (ПР) Economics (ЭФ) Economics (BA, MA) Applied Computer Science (BA, MA) Economic Security (specialist) Management and law (УП) Jurisprudence Management (BA, MA) Personnel Management (BA, MA) State and Municipal Management Applied Computer Science (MA) Legal maintenance of national security (specialist) Faculty of Specialized Secondary Education Evening faculty Faculty of Professional Skills Upgrading International cooperation The University maintains close academic and scientific contacts with Germany, Great Britain, France, Finland, Bulgaria, Poland and other countries. The University has signed agreements with Berlin Technical University, the University of Sofia, the University of Jyvaskyla (Finland), and Varna Technical University (Bulgaria). University senior staff and leading professors take an active part in major international symposiums, workshops and conferences held in Europe, the Americas and Asia; they also deliver lectures and lead joint research with educational establishments and research centers of many countries. Branches The University also has several branches: Dmitrov Kashira Kimry Lytkarino Mozhaysk Sergiyev Posad Serpukhov Stavropol Chekhov Uglich See also Education in Russia List of universities in Russia References External links Official Page (Russian version) Technopark General information Educational institutions established in 1936 Education in the Soviet Union Universities in Moscow 1936 establishments in the Soviet Union Computing in the Soviet Union
10832165
https://en.wikipedia.org/wiki/D-17B
D-17B
The D-17B (D17B) computer was used in the Minuteman I NS-1OQ missile guidance system. The complete guidance system contained a D-17B computer, the associated stable platform, and power supplies. The D-17B contained 1,521 transistors, 6,282 diodes, 1,116 capacitors, and 504 resistors. These components were mounted on double copper-clad, engraved, gold-plated, glass fiber laminate circuit boards. There were 75 of these circuit boards and each one was coated with a flexible polyurethane compound for moisture and vibration protection. The high degree of reliability and ruggedness of the computer were driven by the strict requirements of the weapons system. Design constraints High reliability was required of the D-17B: it controlled a key weapon that would have just one chance to execute its mission. Reliability of the D-17B was achieved through the use of solid-state electronics and a relatively simple design. Simpler DRL (diode–resistor) logic was used extensively, while less-reliable DTL (diode–transistor) logic was used only where needed; in the late 1950s and early 1960s, when the D-17B was designed, transistors lacked today's reliability, but DTL provided either gain or inversion. Reliability was also enhanced by the rotating-disk memory with non-destructive readout (NDRO). In actual real-time situations, Minuteman missiles achieved a mean time between failures (MTBF) of over 5.5 years. The Soviets had much larger rockets and could use vacuum tubes (thermionic valves) in their guidance systems. (The weights of the Minuteman I and II remain classified, but the Minuteman III was 35,000 kg versus the Soviet R-7 missile (1959) of 280,000 kg.) The US planners had to choose either to develop solid-state guidance systems (which weigh less) or to accept the additional cost and time delay of developing larger rockets. Specifications Minuteman I D-17B computer specifications
Year: 1962
The D-17B is a synchronous, serial, general-purpose digital computer.
Manufacturer: Autonetics Division of North American Aviation
Applications: Guidance and control of the Minuteman I ICBM
Programming and numerical system:
  Number system: Binary, fixed point, 2's complement
  Logic levels: 0 or False, 0 V; 1 or True, −10 V
  Data word length (bits): 11 or 24 (double precision)
  Instruction word length (bits): 24
  Binary digits/word: 27
  Instructions/word: 1
  Instruction type: One-and-a-half address
  Number of instructions: 39 types from a 4-bit op code, by using five bits of the operand address field for instructions which do not access memory
Execution times:
  Add (µs): 78 1/8
  Multiply (µs): 546 7/8, or 1,015 5/8 (double precision)
  Divide: (software)
  (Note: Parallel processing, such as two simultaneous single-precision operations, is permitted without additional execution time.)
Clock channel: 345.6 Hz
Addressing: Direct addressing of entire memory; two-address (unflagged) and three-address (flagged) instructions
Memory:
  Word length (bits): 24 plus 5 timing
  Type: Ferrous-oxide-coated NDRO disk
  Cycle time (µs): 78 1/8 (minimal)
  Capacity (words): 5,454, or 2,727 (double precision)
Input/output:
  Input lines: 48 digital
  Output lines: 28 digital, 12 analog, 3 pulse
  Program: 800 5-bit char/s
Instruction word format:
+--------+--------+------+--------+---------+--------+--------+
|   TP   | T24 21 |  20  | 19  13 | 12    8 | 7    1 |   0    |
+--------+--------+------+--------+---------+--------+--------+
| Timing |   OP   | Flag |  Next  | Channel | Sector | Timing |
|        |        |      | Inst.  |         |        |        |
|        |        |      | Sector |         |        |        |
+--------+--------+------+--------+---------+--------+--------+
Registers: Phase and voltage output registers
Arithmetic unit (excluding storage access):
  Add: 78 µs
  Multiply: 1,016 µs
  Construction (arithmetic unit only): transistor–diode logic is used
  Timing: Synchronous
  Operation: Sequential
Input:
  48 digital lines (input)
  26 specialized incremental inputs
  Medium             Speed
  Paper/Mylar tape   600 chars/sec
  Keyboard           Manual
  Typewriter         Manual
Output:
  Medium             Speed
  Printer            Character, 78.5–2,433 ms (program control)
  Phase – Voltage    (program control)
  28 digital lines (output)
  12 analog lines (output)
  13 pulse lines (output)
  25,600 word/s maximum I/O transfer rate
Physical characteristics:
  Dimensions: 20 in high, 29 in diameter, 5 in deep
  Power: 28 VDC at 25 A
  Circuits: DRL and DTL
  Weight:
  Construction: Double copper-clad, gold-plated, glass fiber laminate, flexible polyurethane-coated circuit boards
Software: Minimal delay coding using machine language; modular special-purpose subroutines
Reliability: 5.5 years MTBF
Checking features: Parity on fill and on character outputs
Power, space, weight, and site preparation:
  Power, computer: 0.25 kW
  Air conditioner: Closed system
  Volume, computer:
  Weight, computer:
  Designed specifically to fit in cylindrical guidance package.
The word length for this computer is 27 bits, of which 24 are used in computation. The remaining 3 bits are spare and synchronizing bits. The memory storage capability consists of a 6000 rpm magnetic disk with a storage capacity of 2,985 words, of which 2,728 are addressable. The contents of memory include 20 cold-storage channels of 128 sectors (words) each, a hot-storage channel of 128 sectors, four rapid-access loops (U, F, E, H) of 1, 4, 8, and 16 words respectively, four 1-word arithmetic loops (A, L, H, I), and two 4-word input buffer loops (V, R). The outputs that can be realized from the D-17B computer are binary, discrete, single character, phase register status, telemetry, and voltage outputs. Binary outputs are computer-generated levels of +1 or −1 available on the binary output lines. Instruction set D-17B Instruction Repertoire
Numeric Code   Code   Description
------------   ----   -----------
00 20, s       SAL    Split accumulator left shift
00 22, s       ALS    Accumulator left shift
00 24, 2       SLL    Split left word left shift
00 26, r       SLR    Split left word right shift
00 30, s       SAR    Split accumulator right shift
00 32, s       ARS    Accumulator right shift
00 34, s       SRL    Split right word left shift
00 36, s       SRR    Split right word right shift
00 60, s       COA    Character output A
04 c, S        SCL    Split compare and limit
10 c, S        TMI    Transfer on minus
20 c, s        SMP    Split multiply
24 c, s        MPY    Multiply
30 c, s        SMM    Split multiply modified
34 c, s        MPM    Multiply modified
40 02, s       BOC    Binary output C
40 10, s       BCA    Binary output A
40 12, s       BOB    Binary output B
40 20, s       RSD    Reset detector
40 22, s       HPR    Halt and proceed
40 26, s       DOA    Discrete output A
40 30, s       VOA    Voltage output A
40 32, s       VOB    Voltage output B
40 34, s       VOC    Voltage output C
40 40, s       ANA    And to accumulator
40 44, s       MIM    Minus magnitude
40 46, s       COM    Complement
40 50, s       DIB    Discrete input B
40 52, s       DIA    Discrete input A
40 60, s       HFC    Halt fine countdown
40 62, s       EFC    Enter fine countdown
40 70, s       LPR    Load phase register
44 c, s        CIA    Clear and add
50 c, s        TRA    Transfer
54 c, s        STO    Store accumulator
60 c, s        SAD    Split add
64 c, s        ADD    Add
70 c, s        SSU    Split subtract
74 c, s        SUB    Subtract
Special features of the D-17B computer include flag store, split-word arithmetic, and minimized access timing.
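The instruction-word diagram above can be made concrete with a short field-extraction sketch. This is an illustration only, not Autonetics software: it assumes a Python integer holding bits 0 through 24 of an instruction word laid out as in the table, and ignores the extra timing and synchronizing bits of the 27-bit machine word.

# Illustrative decoder for the D-17B instruction-word fields shown above.
def decode_d17b(word: int) -> dict:
    """Split bits 0-24 of a D-17B instruction word into the fields of the format table."""
    return {
        "op":          (word >> 21) & 0b1111,     # bits 24-21: 4-bit op code
        "flag":        (word >> 20) & 0b1,        # bit 20: flag (three-address form)
        "next_sector": (word >> 13) & 0b1111111,  # bits 19-13: next instruction sector
        "channel":     (word >> 8)  & 0b11111,    # bits 12-8: operand channel
        "sector":      (word >> 1)  & 0b1111111,  # bits 7-1: operand sector
        "timing":      word & 0b1,                # bit 0: timing
    }

if __name__ == "__main__":
    # A made-up 25-bit pattern, used only to show the field extraction.
    example = 0b0110_1_0000101_00011_0000111_0
    print(decode_d17b(example))
    # -> {'op': 6, 'flag': 1, 'next_sector': 5, 'channel': 3, 'sector': 7, 'timing': 0}

The 4-bit op field and the 7-bit sector fields match the 4-bit op code and the 128-sector channels described in the specifications above.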
Flag store provides the capability of storing the present contents of the accumulator while executing the next Instruction. Split-word arithmetic is used in performing arithmetic operations on both halves of a split word at the same time. A split word on the D-17B consists of 11 bits. Minimized access timing is the placing of instructions and data in memory so that they are available with minimum delay from the disk memory. Guidance software Autonetics was the associate contractor for the Minuteman (MM) guidance system, which included the flight and prelaunch software. This software was programmed in assembly language into a D17 disk computer. TRW provided the guidance equations that Autonetics programmed and was also responsible for the verification of the flight software. When MM I became operational, the flight computer was the only digital computer in the system. The targeting was done at Strategic Air Command (SAC) Headquarters by the Operational Targeting Program developed by TRW to execute on an IBM 709 mainframe computer. Sylvania Electronics Systems was selected to develop the first ground-based command and control system using a programmable computer. They developed the software, the message processing and control unit for Wing 6. To support the deployment of the Wing 6 system, TRW, Inc. developed the execution plan program (EPP) from a mainframe computer at SAC and performed an independent checkout of the command and control software. The EPP assisted in assigning targets and launch time for the missiles. The MM II missile was deployed with a D-37C disk computer. Autonetics also programmed functional simulators and the code inserter verifier that was used at Wing headquarters to generate and test the flight program codes to go into the airborne computer. Notes References Autonetics Division of North American Rockwell. Inc.; Minuteman D-17 Computer Training Data. Anaheim, California, 8 June 1970. Autonetics Division of North American Rockwell. Inc.; Part I - Preliminary Maintenance Manual of the Minuteman D-17A Computer and Associated Test Equipment. P.O. Memo 71. Anaheim, California, Inc., January 1960. Beck, C.H. Minuteman Computer Users Group, Report MCUG-l-71. New Orleans, Louisiana: Tulane University, April 1971. Beck, C.H. Minuteman Computer Users Group. D-17B Computer Programming Manual. Report MCUG-4-71. New Orleans: Tulane University, September 1971. Beck, Charles H. Investigation of Minuteman D-17B Computer Reutilization. Available from NTIS/DTIC as document AD0722476, January 1971, 54 pp. Lin, Tony C.; "Development of U.S. Air Force Intercontinental Ballistic Missile Weapon Systems." Journal of Spacecraft and Rockets, vol. 40, no. 4, 2003. pp. 491–509. See also D-37C D37D Minuteman (missile) External links Missile guidance Transistorized computers Serial computers
2469248
https://en.wikipedia.org/wiki/Charles%20Babbage%20Institute
Charles Babbage Institute
The Charles Babbage Institute is a research center at the University of Minnesota specializing in the history of information technology, particularly the history of digital computing, programming/software, and computer networking since 1935. The institute is named for Charles Babbage, the nineteenth-century English inventor of the programmable computer. The Institute is located in Elmer L. Andersen Library at the University of Minnesota Libraries in Minneapolis, Minnesota. Activities In addition to holding important historical archives, in paper and electronic form, its staff of historians and archivists conduct and publish historical and archival research that promotes the study of the history of information technology internationally. CBI also encourages research in the area and related topics (such as archival methods); to do this, it offers graduate fellowships and travel grants, organizes conferences and workshops, and participates in public programming. It also serves as an international clearinghouse of resources for the history of information technology. Also valuable for researchers is its extensive collection of oral history interviews, more than 400 in total. Oral histories with important early figures in the field have been conducted by CBI staff and collaborating colleagues. Owing to the poorly documented state of many early computer developments, these oral histories are immensely valuable documents. One author called the set of CBI oral histories "a priceless resource for any historian of computing." Most of CBI's oral histories are transcribed and available online. The archival collection also contains manuscripts; records of professional associations; corporate records (including the Burroughs corporate records and the Control Data corporate records, among many others); trade publications; periodicals; manuals and product literature for older systems; photographic material (stills and moving); and a variety of other rare reference materials. It is now a center at the University of Minnesota, and is located on its Twin Cities, Minneapolis campus, where it is housed in the Elmer L. Andersen Library on the West Bank. Archival papers and oral histories The CBI has collections of archival papers and oral histories from many notable figures in computing including:
Gene Amdahl
Walter L. Anderson
Isaac L. Auerbach
Rebecca Bace
Charles W. Bachman
Paul Baran
Jean Bartik
Edmund Berkeley
James Bidzos
Gertrude Blanch
Vint Cerf
John Day
Edsger W. Dijkstra
Wallace John Eckert
Alexandra Illmer Forsythe
Margaret R. Fox
Gideon Gartner
Bruce Gilchrist
George Glaser
Martin A. Goetz
Gene H. Golub
Carl Hammer
Martin Hellman
Frances E. Holberton
Cuthbert Hurd
Anita K. Jones
Brian Kahin
Donald Knuth
Bryan S. Kocher
Mark P. McCahill
Daniel D. McCracken
Alex McKenzie
Carl Machover
Michael Mahoney
Marvin Minsky
Calvin N. Mooers
William C. Norris
Susan Nycum
Donn B. Parker
Alan J. Perlis
Robert M. Price
Claire K. Schultz
Erwin Tomash
Keith Uncapher
Willis Ware
Terry Winograd
Patrick Winston
Konrad Zuse
History CBI was founded in 1978 by Erwin Tomash and associates as the International Charles Babbage Society, and initially operated in Palo Alto, California. In 1979, the American Federation of Information Processing Societies (AFIPS) became a principal sponsor of the Society, which was renamed the Charles Babbage Institute. In 1980, the Institute moved to the University of Minnesota, which contracted with the principals of the Charles Babbage Institute to sponsor and house the Institute.
In 1989, CBI became an organized research unit of the University. See also History of computing History of computing hardware History of operating systems History of the internet Internet governance List of pioneers in computer science Standards Setting Organization References External links Oral history University of Minnesota History of computing Charles Babbage Research institutes established in 1978 1978 establishments in Minnesota
36352657
https://en.wikipedia.org/wiki/Solus%20%28operating%20system%29
Solus (operating system)
Solus (previously known as Evolve OS) is an independently developed operating system for the x86-64 architecture based on the Linux kernel, with a choice of the homegrown Budgie, GNOME, MATE or KDE Plasma as the desktop environment. Its package manager, eopkg, is based on the PiSi package management system from Pardus Linux, and it has a semi-rolling release model, with new package updates landing in the stable repository every Friday. The developers of Solus have stated that Solus is intended exclusively for use on personal computers and will not include software that is only useful in enterprise or server environments. History On September 20, 2015, Ikey Doherty announced that "Solus 1.0 will be codenamed Shannon, after the River Shannon in Ireland", indicating that "codenames for releases will continue this theme, using Irish rivers." In July 2016, Solus announced the intention to discard the concept of fixed point releases and to embrace a rolling release model. In January 2017, Doherty announced that Solus would adopt Flatpak to assemble third-party applications. In August, Doherty announced that Solus would also adopt Snaps alongside Flatpak. On June 13 of the same year, it was announced that the developer team had been expanded with Stefan Ric, and Ikey Doherty – previously working for Intel on Clear Linux OS – started working full-time on Solus. On November 2, 2018, technology website Phoronix published an open letter from original founder Ikey Doherty confirming that he was stepping back from the project, assigning "any and all intellectual, naming and branding rights relating to the ownership of Solus" to the development team "with immediate and permanent effect, acknowledging them as the official owners and leadership of the project." On January 1, 2022, experience lead Josh Strobl announced his resignation from Solus, after 6 years of involvement with the project. Releases and reception Point releases Solus 1.0 "Shannon" was released on December 27, 2015. Jesse Smith reviewed the release as part of a feature story in DistroWatch Weekly, a weekly opinion column and summary of events from the distribution world. While he "ran into a number of minor annoyances" such as "Solus panicking and shutting itself down", he concluded that "Solus 1.0 represents a decent start". Solus 1.1 was released on February 2, 2016. HecticGeek blogger Gayan has described Solus 1.1 as a "well optimized operating system", praising significantly faster boot and shutdown times than Ubuntu 15.10. Due to several usability issues encountered, he recommended waiting another year before trying it out again. Solus 1.2 was released on June 20, 2016. Michael Huff has described Solus in his review 'Finding Solace in Solus Linux' as a unique and original project for "those who've been reluctant to travel the Linux galaxy". Solus 1.2.0.5 was released on September 7, 2016. Michael Huff, a programmer and data analyst, wrote in his second review of Solus in Freedom Penguin that "we finally have the power and ease-of-use of a Mac in a Linux distribution" and that "the only people who need to use Solus are those who value their happiness in computing", praising the operating system as one of only a few independent projects assured of "a tight cult following with the potential for mass appeal." Solus 1.2.1 was released on October 19, 2016. This was the last fixed point release of Solus; all subsequent releases are based on the snapshot model (the OS now follows a rolling-release model).
Rolling releases Solus is considered a curated rolling release. It is a rolling release in the sense that once installed, end-users are guaranteed to continuously receive security and software updates for their Solus installation. Updates become available every Friday. Solus 2017.01.01.0, a snapshot following the recently adopted rolling release model, was released on January 1, 2017. Solus 2017.04.18.0 was released on April 18, 2017. Solus 3 was released on August 15, 2017. Solus 3.9999 (Solus 3 ISO Refresh) was released on September 20, 2018. Solus 4.0 "Fortitude" was released on March 17, 2019. Announcing the release, Solus experience lead Joshua Strobl stated that Solus 4.0 delivered "a brand new Budgie experience, updated sets of default applications and theming, and hardware enablement". Solus 4.1 was released on January 25, 2020. Solus 4.2 was released on February 3, 2021. Solus 4.3 was released on July 11, 2021. Editions Solus is currently available in four editions: Budgie flagship edition, a "feature-rich, luxurious desktop using the most modern technologies"; GNOME edition, running the GNOME desktop environment, "a contemporary desktop experience"; MATE edition, using the MATE desktop environment, a "traditional desktop for advanced users and older hardware"; KDE Plasma edition, "a sophisticated desktop experience for the tinkerers". Budgie Ikey Doherty stated that, regarding Budgie, he "wanted something that was a modern take on the traditional desktop, but not too traditional", aiming to keep a balance between aesthetics and functionality. Core team Technical lead: Beatrice T. Meyers [DataDrake]; Global Maintainers: Friedrich von Gellhorn [Girtablulu], Joey Riches [joebonrichie], and Pierre-Yves [kyrios] Features Curated rolling release Solus brings updates to its users by means of a curated rolling release model. It is a rolling release in the sense that once installed, end-users are guaranteed to continuously receive security and software updates for their Solus installation without having to worry that their operating system will reach end-of-life. The latter is typically the case with fixed point releases of operating systems such as Fedora and Ubuntu but also Microsoft Windows. Marius Nestor at Softpedia has argued that all operating systems should use the rolling release model in order to decrease development and maintenance workload for developers and to make the latest technologies available for end users as soon as these are ready for the market. Compared to other rolling release operating systems such as Arch Linux – which provides bleeding-edge software, i.e. software so new that there is a relatively high risk that breakages might occur and render the system partially or completely unusable – Solus takes a slightly more conservative approach to software updates, hence the term curated rolling release. In contrast to Arch, software on Solus is commonly referred to as cutting edge, typically excluding beta software, and is released after a short period of testing (in the unstable software repository) to end users in order to provide a safer, more stable and reliable update experience. By prioritizing usability (curated rolling release) over availability (pure rolling release), Solus intends to make the operating system accessible to a wider target market than Arch Linux, which is mainly aimed at more advanced users possessing in-depth technical knowledge about their system.
Solus is also a curated rolling release in allowing its users to participate in the actual curation process, broadly conceived as the process by which software is selected, maintained and updated (on the server side in the software repositories of the operating system as well as on the client side on the end users computer system). More specifically, and contrary to other operating systems with various 'enforced update mechanisms', a Solus user has the freedom to choose what gets updated and when updates are applied (if at all), except for mandatory security updates. Software availability Solus comes pre-installed with a wide range of software that includes the latest Firefox, Thunderbird, LibreOffice, Transmission and GNOME MPV. Additional software that is not installed by default can be downloaded using the included Software Center. Wireless chips and modems are supported through optional non-free firmware packages. Package management is done through eopkg. Michael Huff has quoted project founder and lead developer Ikey Doherty that Solus will not be defined by its package manager. In a previous interview with Gavin Thomas from Gadget Daily on February 8, 2016, Doherty stated that as an end user the goal is to actually not interact with the package manager, sharply outlining the project's direction in terms of user experience. According to Doherty, the goal is "to actually get rid of it, so the user doesn’t even know about it." In Solus, the package manager is not intended to be used as a tool to deploy but to build software, distinguishing it from less beginner-friendly practices on other Linux-based operating systems. Software developed by Solus Budgie desktop environment: a GTK 3 desktop that tightly integrates with the GNOME software stack, employing the underlying technology. Starting with version 11, it was announced that Budgie will no longer be written in GTK, and the GNOME software stack will be fully replaced, due to unsolvable disagreements with the GNOME team. Raven: a sidebar interface that serves as an applet panel, notifications center and houses the desktop customization settings. Budgie Menu: a quick category and search-based application launcher. Budgie-wm: the window manager of the Budgie Desktop. eopkg: (Evolve OS Package) a fork of the PiSi package manager. ypkg: a tool to convert the build process into a packaging operation. ferryd: the binary repository manager for Solus. Software Center: a graphical frontend to install software in Solus. Brisk Menu: a menu co-written with the Ubuntu MATE development team, featured in Solus MATE. Security In July 2015, Solus announced integration of Clear Linux patches to deprecate insecure SSL ciphers, responding to a community post on Google+ by Arjan van de Ven. In response to security issues experienced by the Linux Mint project in late February 2016, Solus introduced improvements by providing a global Solus GPG key on its download section. Joshua Strobl, Communications Manager at Solus, announced the separation of official and community mirrors on the download page with official mirrors "to be regularly audited and updated" and "daily integrity checks against every ISO mirror" to be performed. Within its software center, Solus contains a wide variety of dedicated security software ranging from encryption software such as VeraCrypt to anonymization tools such as Tor. Solus integrates AppArmor instead of SELinux for restricting programs' capabilities. 
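The Security section above mentions a GPG-signed download page and daily integrity checks against ISO mirrors. As a rough illustration of the checksum half of that process only – this is not Solus's actual tooling, and the file name and expected digest below are placeholders – a downloaded image could be verified like this:

```python
# Minimal sketch of verifying a downloaded ISO against a published SHA-256
# checksum. The file name and expected digest are placeholders, not real
# Solus release artifacts; the project's GPG-based verification is separate.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0" * 64                      # placeholder for the published checksum
actual = sha256_of("Solus-Budgie.iso")   # placeholder file name
print("OK" if actual == expected else "Checksum mismatch - do not use this image")
```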
Popularity Out of respect for user privacy, the Solus project does not track users, so there is no direct and reliable way to measure popularity. As of July 2021, the DistroWatch website, which records the frequency of page clicks on its own site, ranked Solus 13th in its 6-month page hit rankings and 6th among rolling release distributions, with an average reader-supplied review score of 8.42 out of 10. Critical reception Solus 3 was named one of the best Linux distributions of 2017 by OMG! Ubuntu! Matt Hartley praised Solus in his overview of the best Linux-based operating systems of 2017, as "Perhaps the most interesting distro in recent years...taking a unique approach to a logical user workflow, package management and how they work with the community. I see them doing great things in the future." In the more mainstream media, Jason Evangelho covered Solus several times for Forbes magazine, praising it in his articles on PC gaming and the tech industry, most notably regarding gaming on Solus and the 4.0 release. References External links Solus on OpenSourceFeed gallery Free software operating systems Linux X86-64 Linux distributions Rolling Release Linux distributions Linux distributions
35924721
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Ace%202
Samsung Galaxy Ace 2
Samsung Galaxy Ace 2 (GT-I8160) is a smartphone manufactured by Samsung that runs the Android operating system. Announced and released by Samsung in February 2012, the Galaxy Ace 2 is the successor to the Galaxy Ace Plus. Being a mid-range smartphone, Galaxy Ace 2 contains hardware between that of the Galaxy Ace Plus and Galaxy S Advance; it features a dual-core 800 MHz processor on the NovaThor U8500 chipset with the Mali-400 GPU. In May 2012, the device went on sale in the UK. Hardware Galaxy Ace 2 is a 3.5G mobile device that offers quad-band GSM, and was announced with dual-band 900/2100 MHz HSDPA at 14.4 Mbit/s downlink and 5.76 Mbit/s uplink speeds. The display is a 3.8-inch capacitive PLS TFT LCD touchscreen with 16M colours in a WVGA (480x800) resolution. There is also a 5-megapixel camera with LED flash and auto-focus, capable of recording videos at QVGA (320x240), VGA (640x480) and HD (1280x720) resolutions. Galaxy Ace 2 also has a front-facing VGA camera. The device comes with a 1500 mAh Li-Ion battery. Software Galaxy Ace 2 comes with Android 2.3.6 Gingerbread and Samsung's proprietary TouchWiz 4.0 user interface. In September 2012, Samsung announced that Galaxy Ace 2 would be updated to Android 4.1 Jelly Bean. The phone can be upgraded to Android 4.1.2 Jelly Bean. Galaxy Ace 2 has social network integration abilities and multimedia features. It is also preloaded with basic Google Apps, such as Google+ and Google Talk. The phone is available in Onyx Black and in White colours. The device also unofficially supports CyanogenMod as well as other AOSP-derived ROMs like AOKP. It also unofficially supports LineageOS (versions 14.1 and 15.1). Samsung Galaxy Ace 2 x / Trend Galaxy Ace 2 x (GT-S7560M) and, in some markets, Galaxy Trend (GT-S7560) are at first glance variants of Galaxy Ace 2, in that both have a similar shell and specifications, such as the slightly larger 4" screen and similar RAM and storage space. The major differentiator is processing power: while the Galaxy Ace 2 has a dual-core 800 MHz CPU, the Galaxy Ace 2 x and Galaxy Trend contain a single-core 1 GHz ARM Cortex-A5 processor in conjunction with an enhanced Adreno 200 GPU. The single-core Snapdragon S1 MSM7227A ARMv7 SoC design is much closer to the one in Samsung Galaxy Mini 2. Galaxy Ace 2 x and Galaxy Trend have 645 MB of accessible RAM (out of the total 768 MB), and approximately 2 GB of user-accessible internal storage. Galaxy S Duos (GT-S7562) is available with very similar specifications; the primary differentiating feature is its dual-SIM support. Galaxy Trend Plus (GT-S7580) has very minor differences compared with Galaxy Trend (GT-S7560). Trend Plus has Android 4.2 Jelly Bean out of the box, a single-core 1.2 GHz processor in conjunction with a VideoCore 4 GPU, and the Broadcom BCM21664 SoC. Galaxy S Duos 2 (GT-S7582) is a dual-SIM equivalent of Galaxy Trend Plus. Software The devices are powered by Android 4.0.4 Ice Cream Sandwich, running Samsung's proprietary TouchWiz Nature UX as the default user interface. Where possible, the operating systems can be upgraded to somewhat newer official versions of Android 4.x than the factory install. To perform a firmware upgrade, the phones must have at least 1 GB of free internal storage. Since these phones run Android 4.0, they are still supported by cloud, communications and social networking services that push the latest versions of their apps, which have in some cases been designed with only the newest hardware in mind.
Such applications consume excessive system resources and cause the phones to run slowly. As a remedy, phone owners can replace those apps with less resource-hungry equivalents, or remove them entirely and use a web browser to access the services' sites. The Facebook app has been singled out as the one that uses the most resources overall; it can demonstrably consume between 206 and 231 MB of RAM, whereas Metal (a Facebook wrapper) and Facebook Lite are much easier on phone RAM and battery life. Alternatively, the Facebook mobile site can be used, as it uses browser notifications in browsers that support this functionality. See also Samsung Galaxy Ace Samsung Galaxy S Duos Samsung Galaxy S Duos 2 References Samsung mobile phones Samsung Galaxy Android (operating system) devices Smartphones Mobile phones introduced in 2012 Mobile phones with user-replaceable battery
49282028
https://en.wikipedia.org/wiki/List%20of%20data%20breaches
List of data breaches
This is a list of data breaches, using data compiled from various sources, including press reports, government news releases, and mainstream news articles. The list includes those involving the theft or compromise of 30,000 or more records, although many smaller breaches occur continually. Breaches of large organizations where the number of records is still unknown are also listed. In addition, the various methods used in the breaches are listed, with hacking being the most common. Most breaches occur in North America. It is estimated that the average cost of a data breach will be over $150 million by 2020, with the global annual cost forecast to be $2.1 trillion. As a result of data breaches, it is estimated that in the first half of 2018 alone, about 4.5 billion records were exposed. In 2019, a collection of 2.7 billion identity records, consisting of 774 million unique email addresses and 21 million unique passwords, was posted on the web for sale. References Sources Data security Data breaches in the United States Internet privacy Internet vigilantism Cyberattacks on energy sector Cyberwarfare data breaches
26079534
https://en.wikipedia.org/wiki/John%20Stasko
John Stasko
John Thomas Stasko III (born August 28, 1961) is a Regents Professor in the School of Interactive Computing in the College of Computing at Georgia Tech, where he joined the faculty in 1989. He also is one of the founding members of the Graphics, Visualization, and Usability (GVU) Center there. Stasko is best known for his extensive research in information visualization and visual analytics, including his earlier work in software visualization and algorithm animation. Early life and education John Stasko was born on August 28, 1961 in Miami, Florida. As a youngster, he lived in Pennsylvania (Lancaster and Reading) and south Florida (Miami, Boca Raton, and Deerfield Beach). Stasko attended Bucknell University and graduated summa cum laude with a B.S. in Mathematics in 1983. He went directly to graduate school and earned an Sc.M. and Ph.D. in Computer Science at Brown University in 1985 and 1989, respectively. His doctoral thesis, "TANGO: A Framework and System for Algorithm Animation," is a highly cited project in the area of Software Visualization. Stasko joined the faculty of the College of Computing at Georgia Tech in 1989. He and his wife Christine have three children, John IV (Tommy), Mitchell, and Audrey. Stasko is an avid golfer and was winner of the 1996 Bobby Jones Memorial Tournament at East Lake Golf Club in Atlanta. Professional career Upon joining the faculty at Georgia Tech, Stasko continued his research in algorithm animation and software visualization. He was the lead editor on the 1998 MIT Press book Software Visualization: Programming as a Multimedia Experience, generally considered the lead reference for that field. Stasko also was one of the founding faculty for the GVU Center at Georgia Tech. In the late 1990s, his research broadened into other areas of human-computer interaction and he developed a specific focus on information visualization. He formed the Information Interfaces Research Group which he still directs. More recently, Stasko has been a pioneering researcher in the new field of visual analytics, and was a contributor to the 2005 book, Illuminating the Path, that laid out a research agenda for this field. Stasko has published extensively in these fields, including over 125 conference papers (two Best Papers Awards), journal articles, and book chapters. His research in information visualization spans a spectrum from theoretical work on interaction, evaluation, and the conceptual foundations of visualization to more applied work creating new techniques and systems (such as TANGO, POLKA, SunBurst, InfoCanvas, Jigsaw) for people in a variety of domains. He was Papers Co-Chair for the IEEE Information Visualization (InfoVis) Symposium in 2005 and 2006 and for the IEEE Visual Analytics Science and Technology (VAST) Symposium in 2009. He is currently on Steering Committee of the IEEE InfoVis Conference, the ACM Symposium on Software Visualization, and is an At Large member of the IEEE Visualization and Graphics Technical Committee. In 2007 Stasko was appointed Associate Chair of the newly created School of Interactive Computing at Georgia Tech. In addition to this role, he leads the Information Interfaces Research Group where he advises undergraduate, master's, and doctoral students. He traditionally teaches CS 1331, an introductory object-oriented programming course and CS 7450, Information Visualization, which originated in 1999 and is one of the first courses on this topic in the world. 
Selected bibliography John Stasko, Carsten Gorg, and Zhicheng Liu, "Jigsaw: Supporting Investigative Analysis through Interactive Visualization", Information Visualization, Vol. 7, No. 2, Summer 2008, pp. 118–132. Zachary Pousman, John T. Stasko and Michael Mateas, "Casual Information Visualization: Depictions of Data in Everyday Life", IEEE Transactions on Visualization and Computer Graphics, (Paper presented at InfoVis '07), Vol. 13, No. 6, November/December 2007, pp. 1145–1152. Ji Soo Yi, Youn ah Kang, John T. Stasko and Julie A. Jacko, "Toward a Deeper Understanding of the Role of Interaction in Information Visualization", IEEE Transactions on Visualization and Computer Graphics, (Paper presented at InfoVis '07), Vol. 13, No. 6, November/December 2007, pp. 1224–1231. Zach Pousman and John Stasko, "A Taxonomy of Ambient Information Systems: Four Patterns of Design", Proceedings of Advanced Visual Interfaces (AVI 2006), Venice, Italy, May 2006, pp. 67–74. Robert Amar, and John Stasko, "Knowledge Precepts for Design and Evaluation of Information Visualizations," IEEE Transactions on Visualization and Computer Graphics, Vol. 11, No. 4, July/August 2005, pp. 432–442. John Stasko, Todd Miller, Zachary Pousman, Christopher Plaue, and Osman Ullah, "Personalized Peripheral Information Awareness through Information Art", Proceedings of UbiComp '04, Nottingham, U.K., September 2004, pp. 18–35. Christopher Hundhausen, Sarah Douglas, and John Stasko, "A Meta-Study of Algorithm Visualization Effectiveness", Journal of Visual Languages and Computing, Vol. 13, No. 3, June 2002, pp. 259–290. Stasko, John T. and Zhang, Eugene, "Focus+Context Display and Navigation Techniques for Enhancing Radial, Space-Filling Hierarchy Visualizations", Proceedings of IEEE Information Visualization 2000, Salt Lake City, UT, October 2000, pp. 57–65. John Stasko, John Domingue, Marc H. Brown, Marc and Blaine Price,(editors), Software Visualization: Programming as a Multimedia Experience, MIT Press, Cambridge, MA, 1998. Stasko, John T., "TANGO: A Framework and System for Algorithm Animation", IEEE Computer, Vol. 23, No. 9, September 1990, pp. 27–39. References External links Stasko's personal home page Information Interfaces Research Group 1961 births Living people Georgia Tech faculty Bucknell University alumni Brown University alumni People from Deerfield Beach, Florida Information visualization experts
23727861
https://en.wikipedia.org/wiki/List%20of%203D%20modeling%20software
List of 3D modeling software
Following is a list of notable 3D modeling software, computer programs used for developing a mathematical representation of any three-dimensional surface of objects, also called 3D modeling. See also List of computer-aided design editors List of 3D computer graphics software List of 3D animation software List of 3D rendering software 3d Modelling Software 3D graphics software Software
30249701
https://en.wikipedia.org/wiki/Y.156sam
Y.156sam
ITU-T Y.156sam Ethernet Service Activation Test Methodology is a draft recommendation under study by the ITU-T describing a new testing methodology adapted to the multiservice reality of packet-based networks. Key objectives ITU-T Y.156sam is designed around three key objectives: To serve as a network service level agreement (SLA) validation tool, ensuring that a service meets its guaranteed performance settings in a controlled test time To ensure that all services carried by the network meet their SLA objectives at their maximum committed rate, proving that under maximum load network devices and paths can support all the traffic as designed To perform medium- and long-term service testing, confirming that network element can properly carry all services while under stress during a soaking period Test methodology ITU-T Y.156sam defines an out-of-service test methodology to assess the proper configuration and performance of an Ethernet service prior to customer notification and delivery. The test methodology applies to point-to-point and point-to-multipoint connectivity in the Ethernet layer and to the network portions that provide, or contribute to, the provisioning of such services. This recommendation does not define Ethernet network architectures or services, but rather defines a methodology to test Ethernet-based services at the service activation stage. In particular, it is aimed at solving the deficiencies of RFC 2544 listed below. Existing test methodologies: RFC 2544 The Internet Engineering Task Force RFC 2544 is a benchmarking methodology for network interconnect devices. This request for comment (RFC) was created in 1999 as a methodology to benchmark network devices such as hubs, switches and routers as well as to provide accurate and comparable values for comparison and benchmarking. RFC 2544 provides engineers and network technicians with a common language and results format. The RFC 2544 describes six subtests: Throughput: measures the maximum rate at which none of the offered frames are dropped by the device/system under test (DUT/SUT). This measurement translates into the available bandwidth of the Ethernet virtual connection. Back-to-back or burstability: measures the longest burst of frames at maximum throughput or minimum legal separation between frames that the device or network under test will handle without any loss of frames. This measurement is a good indication of the buffering capacity of a DUT. Frame loss: defines the percentage of frames that should have been forwarded by a network device under steady state (constant) loads that were not forwarded due to lack of resources. This measurement can be used for reporting the performance of a network device in an overloaded state, as it can be a useful indication of how a device would perform under pathological network conditions such as broadcast storms. Latency: measures the round-trip time taken by a test frame to travel through a network device or across the network and back to the test port. Latency is the time interval that begins when the last bit of the input frame reaches the input port and ends when the first bit of the output frame is seen on the output port. It is the time taken by a bit to go through the network and back. Latency variability can be a problem. With protocols like voice over Internet protocol (VoIP), a variable or long latency can cause degradation in voice quality. System reset: measures the speed at which a DUT recovers from a hardware or software reset. 
This subtest is performed by measuring the interruption of a continuous stream of frames during the reset process. System recovery: measures the speed at which a DUT recovers from an overload or oversubscription condition. This subtest is performed by temporarily oversubscribing the device under test and then reducing the throughput to a normal or low load while measuring frame delay under both conditions. The difference between the delay under the overloaded condition and the delay under the low-load condition represents the recovery time. Drawbacks of RFC 2544 From a laboratory and benchmarking perspective, the RFC 2544 methodology is an ideal tool for automated measurement and reporting. From a service turn-up and troubleshooting perspective, RFC 2544, although acceptable and valid, does have some drawbacks: Service providers are shifting from only providing Ethernet pipes to enabling services. Networks must support multiple services from multiple customers, and each service has its own performance requirements that must be met even under full load conditions and with all services being processed simultaneously. RFC 2544 was designed as a performance tool with a focus on a single stream to measure maximum performance of a DUT or network under test and was never intended for multiservice testing. With RFC 2544's focus on identifying the maximum performance of a device or network under test, the overall test time is variable and heavily depends on the quality of the link and subtest settings. RFC 2544 test cycles can easily require a few hours of testing. This is not an issue for lab testing or benchmarking, but becomes a serious issue for network operators with short service maintenance windows. Packet delay variation is a key performance indicator (KPI) for real-time services such as VoIP and Internet protocol television (IPTV) and is not measured by the RFC 2544 methodology. Network operators that performed service testing with RFC 2544 typically must perform separate packet jitter testing outside of RFC 2544, as this KPI was not defined or measured by the RFC. Testing is performed sequentially on one KPI after another. In multiservice environments, traffic is going to experience all KPIs at the same time: although throughput might be good, it can also be accompanied by very high latency due to buffering. Designed as a performance assessment tool, RFC 2544 measures each KPI individually through its subtest and therefore cannot immediately associate a very high latency with a good throughput, which should be cause for concern. Service definitions The ITU-T Y.156sam defines test streams with service attributes linked to the Metro Ethernet Forum (MEF) 10.2 definitions. Services are traffic streams with specific attributes identified by different classifiers such as 802.1q VLAN, 802.1ad and class of service (CoS) profiles. These services are defined at the UNI level with different frame and bandwidth profiles, such as the service's maximum transmission unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR). Test rates The ITU-T Y.156sam defines three key test rates based on the MEF service attributes for Ethernet virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles. CIR defines the maximum transmission rate for a service where the service is guaranteed certain performance objectives. These objectives are typically defined and enforced via SLAs.
EIR defines the maximum transmission rate above the committed information rate considered as excess traffic. This excess traffic is forwarded as capacity allows and is not subject to meeting any guaranteed performance objectives (best effort forwarding). Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the DUT or network under test does not forward more traffic than specified by the CIR or EIR of the service. Color markings These rates can be associated with color markings:
Green traffic is equivalent to CIR
Yellow traffic is equivalent to EIR
Red traffic represents discarded traffic (overshoot – CIR or overshoot – EIR)
Subtests The ITU-T Y.156sam is built around two key subtests, the service configuration test and the service performance test, which are performed in order: Service configuration test Forwarding devices such as switches, routers, bridges and network interface units are the basis of any network as they interconnect segments. If a service is not correctly configured on any one of these devices within the end-to-end path, network performance can be greatly affected, leading to potential service outages and network-wide issues such as congestion and link failures. The service configuration test measures the ability of the DUT or network under test to properly forward traffic in three different states:
In the CIR phase, where performance metrics for the service are measured and compared to the SLA performance objectives
In the EIR phase, where performance is not guaranteed and the service's transfer rate is measured to ensure that CIR is the minimum bandwidth
In the discard phase, where the service is generated at the overshoot rate and the expected forwarded rate is not greater than the committed information rate or excess rate (when configured)
Service performance test As network devices come under load, they must prioritize one traffic flow over another to meet the KPIs set for each traffic class. With only one traffic class, there is no prioritization performed by the network devices since there is only one set of KPIs. As the number of traffic flows increases, prioritization is necessary and performance failures may occur. The service performance test measures the ability of the DUT or network under test to forward multiple services while maintaining SLA conformance for each service. Services are generated at the CIR, where performance is guaranteed, and pass/fail assessment is performed on the KPI values for each service according to its SLA. Service performance assessment must also be maintained for a medium- to long-term period, as performance degradation will likely occur while the network is under stress for longer periods of time. The service performance test is designed to soak the network under full committed load for all services and measure performance over medium and long test times. Metrics The Y.156sam focuses on the following KPIs for service quality:
Bandwidth: this is a bit rate measure of available or consumed data communication resources expressed in bits/second or multiples of it (kilobits/s, megabits/s, etc.).
Frame transfer delay (FTD): also known as latency, this is a measurement of the time delay between the transmission and the reception of a frame. Typically this is a round-trip measurement, meaning that the calculation measures both the near-end to far-end and far-end to near-end directions simultaneously.
Frame delay variations: also known as packet jitter, this is a measurement of the variations in the time delay between packet deliveries. As packets travel through a network to their destination, they are often queued and sent in bursts to the next hop. There may be prioritization at random moments also resulting in packets being sent at random rates. Packets are therefore received at irregular intervals. The direct consequence of this jitter is stress on the receiving buffers of the end nodes where buffers can be overused or underused when there are large swings of jitter. Frame loss: typically expressed as a ratio, this is a measurement of the number of packets lost over the total number of packets sent. Frame loss can be due to a number of issues such as network congestion or errors during transmissions. Vendor implementation The ITU-T Y.1564 (previously Y.156sam) has gained momentum in the test and measurement industry. EXFO was the first test vendor to implement the ITU-T Y.1564, and today other companies like JDSU, Albedo Telecom and Veex are also offering Ethernet and multiservice test equipment via the IntelliSAM and EtherSAM test methodologies. References RFC 1242 (1996), Benchmarking Terminology for Network Interconnection Devices RFC 2544 (1999), Benchmarking Methodology for Network Interconnect Devices External links ALBEDO Telecom eSAM EXFO EtherSAM Metro Ethernet Forum EPL Metro Ethernet Forum MEF 10.1 ITU-T Y Series Recommendations
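To make the CIR/EIR/overshoot relationship and the green/yellow/red color markings described in this recommendation concrete, the sketch below classifies an offered traffic rate against a service's bandwidth profile. It is an illustration of the terminology only; real equipment typically implements these color decisions per frame with token-bucket meters, and the rates used here are arbitrary example values.

```python
# Conceptual sketch of the Y.1564 color markings described above:
# traffic up to CIR is green (guaranteed), traffic between CIR and CIR+EIR is
# yellow (best-effort excess), and anything beyond CIR+EIR is red (discarded).
# This per-rate view is illustrative only, not a frame-level policing algorithm.

def classify_rate(offered_mbps: float, cir_mbps: float, eir_mbps: float) -> dict:
    green = min(offered_mbps, cir_mbps)
    yellow = min(max(offered_mbps - cir_mbps, 0.0), eir_mbps)
    red = max(offered_mbps - cir_mbps - eir_mbps, 0.0)
    return {"green": green, "yellow": yellow, "red": red}

# Example service profile: CIR = 100 Mbit/s, EIR = 50 Mbit/s.
for offered in (80, 120, 200):   # below CIR, between CIR and CIR+EIR, overshoot
    print(offered, "Mbit/s ->", classify_rate(offered, cir_mbps=100, eir_mbps=50))
```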
54215536
https://en.wikipedia.org/wiki/Marvin%20Stein%20%28computer%20scientist%29
Marvin Stein (computer scientist)
Marvin Stein (1924-2015) was a mathematician and computer scientist, and the "father of computer science" at the University of Minnesota. Early life Marvin Stein was born in Cleveland, Ohio in 1924 to Russian-Jewish immigrants. The family later moved to Los Angeles, California to treat Stein's mother's tuberculosis. He graduated from Theodore Roosevelt High School in 1941, and immediately entered University of California, Los Angeles. His studies were interrupted and in 1942 he served in the US Army Signal Corps as a tabulating machine operator, and had a short stint working at IBM. He returned to school after the war and graduated from UCLA in 1947. Stein did his Ph.D. at the Institute for Numerical Analysis at UCLA (or INA, an ancestor of UCLA's computer science department), where in the summer of 1949 he participated in a seminar on solving linear equations and finding eigenvalues and eigenvectors of matrices with several other future luminaries of the domain, including Magnus Hestenes, J. Barkley Rosser, George Forsythe, Cornelius Lanczos, Gertrude Blanch, and William Karush. Magnus Hestenes's work on the conjugate gradient method was a direct outgrowth of this group's work together over the summer. High speed computers were not available yet, so numerical experiments to test theoretical results were performed by hand by Stein and other researchers. Stein in particular studied Rayleigh–Ritz methods of variational problems. After earning his Ph.D. from the INA in January of 1951, Stein was hired as a senior research engineer by aircraft manufacturer Convair in southern California. He primarily worked on missile simulations for the SM-65 Atlas, on which he worked with a UNIVAC 1103. Though the 1103 had been made for and used by the Armed Forces Security Agency under the name "Atlas 2", this was the first commercially sold 1103. Stein's work installing the UNIVAC 1103 with Minnesotan and University of Minnesota alumnus Erwin Tomash introduced him to the emerging computer-science scene in Minnesota in the 1950s. Stein lost his job with Convair when his security clearance was revoked by the House Un-American Activities Committee on account of Stein's Jewish heritage. It was later re-instated, but Stein had already decided to move on. University of Minnesota In 1955, Remington Rand, manufacturer of the UNIVAC computers, heard that the University of Minnesota was considering purchasing a machine from one of Rand's rivals: an IBM 650. Rand offered to simply give the university 400 free hours on a UNIVAC 1103 on the condition that they hire a dedicated faculty member to oversee its operations. Stein was hired in the IT Mathematics department in the University of Minnesota to fulfill this condition, and he assumed stewardship of the UNIVAC. The UNIVAC 1103 was around 60 feet long, 30 feet wide, and weighed over 17 tons. Stein taught the first University of Minnesota courses on high-speed computation and played a singular role in developing the university's path to computer science education. In 1958, Stein was made the head of the university's Numerical Analysis Center at the Institute of Technology (later the University Computer Center), for which the university purchased its own 1103 at a discounted price of $250,000. The center was also home to a REAC 100. Stein maintained a computer archives system for decades, over three significantly different generations of machine. 
In 1967, Stein created - with William Munro, Neal Amundson, and Hans Weinberger - the university's graduate program in Computer and Information Sciences. Three years later, in 1970, the university established a formal Computer Science department. Stein resigned as head of the Computer Center and became the first head of this new Computer Science department. He stepped down the following year, and served as a professor in the department until his retirement in 1997. Stein received a Guggenheim fellowship in 1963-1964 for his work with Magnus Hestenes on the conjugate gradient method and for being the principal inventor of the Pope-Stein division algorithm and the Stein-Rose sorting algorithm. He served as a visiting professor of computer science at Weizmann Institute of Science in Rehovot, Israel from 1963-1964 and at Tel Aviv University and Hebrew University of Jerusalem from 1971 to 1972. Stein died in 2015. His papers are held in the University of Minnesota Archives. Publications In 1964, Stein wrote Computer Programming: A Mixed Language Approach with contributor William Munro for Academic Press. It was well reviewed in its time, and in 2017, more than five decades after its publication, it was still in print in its third edition. It was written with the intention to provide instruction in assembly language programming to both professional programmers and highly technical laypersons. Much of the book was originally designed around the CDC 1604 and the Fortran language. Bibliography Books Stein, Marvin; Munro, William. Computer Programming: A Mixed Language Approach. (1964) Academic Press. Stein, Marvin; Munro, William. A Fortran introduction to programming and computers: including Fortran IV. (1966) Academic Press. Papers Notes University of Minnesota faculty Weizmann Institute of Science faculty Tel Aviv University people Hebrew University of Jerusalem faculty University of California, Los Angeles alumni American computer scientists 20th-century American mathematicians Jewish American scientists 1924 births 2015 deaths United States Army personnel of World War II 21st-century American Jews
598445
https://en.wikipedia.org/wiki/UNIX%20System%20V
UNIX System V
Unix System V (pronounced: "System Five") is one of the first commercial versions of the Unix operating system. It was originally developed by AT&T and first released in 1983. Four major versions of System V were released, numbered 1, 2, 3, and 4. System V Release 4 (SVR4) was commercially the most successful version, being the result of an effort, marketed as Unix System Unification, which solicited the collaboration of the major Unix vendors. It was the source of several common commercial Unix features. System V is sometimes abbreviated to SysV. The AT&T-derived Unix market is divided among four System V variants: IBM's AIX, Hewlett Packard Enterprise's HP-UX and Oracle's Solaris, plus the free-software illumos forked from OpenSolaris. Overview Introduction System V was the successor to 1982's UNIX System III. While AT&T developed and sold hardware that ran System V, most customers ran a version from a reseller, based on AT&T's reference implementation. A standards document called the System V Interface Definition outlined the default features and behavior of implementations. AT&T support During the formative years of AT&T's computer business, the division went through several phases of System V software groups, beginning with the Unix Support Group (USG), followed by Unix System Development Laboratory (USDL), followed by AT&T Information Systems (ATTIS), and finally Unix System Laboratories (USL). Rivalry with BSD In the 1980s and early 1990s, UNIX System V and the Berkeley Software Distribution (BSD) were the two major versions of UNIX. Historically, BSD was also commonly called "BSD Unix" or "Berkeley Unix". Eric S. Raymond has summarized the longstanding relationship and rivalry between System V and BSD during the early period. While HP, IBM and others chose System V as the basis for their Unix offerings, other vendors such as Sun Microsystems and DEC extended BSD. Throughout its development, though, System V was infused with features from BSD, while BSD variants such as DEC's Ultrix received System V features. AT&T and Sun Microsystems worked together to merge System V with BSD-based SunOS to produce Solaris, one of the primary System V descendants still in use today. Since the early 1990s, due to standardization efforts such as POSIX and the success of Linux, the division between System V and BSD has become less important. Releases SVR1 System V, known inside Bell Labs as Unix 5.0, succeeded AT&T's previous commercial Unix called System III in January 1983. Unix 4.0, which would have been designated System IV, was never released externally. This first release of System V (called System V.0, System V Release 1, or SVR1) was developed by AT&T's UNIX Support Group (USG) and based on the Bell Labs internal USG UNIX 5.0. System V also included features such as the vi editor and curses from 4.1 BSD, developed at the University of California, Berkeley; it also improved performance by adding buffer and inode caches. It also added support for inter-process communication using messages, semaphores, and shared memory, developed earlier for the Bell-internal CB UNIX. SVR1 ran on DEC PDP-11 and VAX minicomputers. SVR2 AT&T's UNIX Support Group (USG) transformed into the UNIX System Development Laboratory (USDL), which released System V Release 2 in 1984. SVR2 added shell functions and the SVID. SVR2.4 added demand paging, copy-on-write, shared memory, and record and file locking. The concept of the "porting base" was formalized, and the DEC VAX-11/780 was chosen for this release.
The "porting base" is the so-called original version of a release, from which all porting efforts for other machines emanate. Educational source licenses for SVR2 were offered by AT&T for US$800 for the first CPU, and $400 for each additional CPU. A commercial source license was offered for $43,000, with three months of support, and a $16,000 price per additional CPU. Apple Computer's A/UX operating system was initially based on this release. SCO XENIX also used SVR2 as its basis. The first release of HP-UX was also an SVR2 derivative. Maurice J. Bach's book, The Design of the UNIX Operating System, is the definitive description of the SVR2 kernel. SVR3 AT&T's UNIX System Development Laboratory (USDL) was succeeded by AT&T Information Systems (ATTIS), which distributed UNIX System V, Release 3, in 1987. SVR3 included STREAMS, Remote File Sharing (RFS), the File System Switch (FSS) virtual file system mechanism, a restricted form of shared libraries, and the Transport Layer Interface (TLI) network API. The final version was Release 3.2 in 1988, which added binary compatibility to Xenix on Intel platforms (see Intel Binary Compatibility Standard). User interface improvements included the "layers" windowing system for the DMD 5620 graphics terminal, and the SVR3.2 curses libraries that offered eight or more color pairs and other at this time important features (forms, panels, menus, etc.). The AT&T 3B2 became the official "porting base." SCO UNIX was based upon SVR3.2, as was ISC 386/ix. Among the more obscure distributions of SVR3.2 for the 386 were ESIX 3.2 by Everex and "System V, Release 3.2" sold by Intel themselves; these two shipped "plain vanilla" AT&T's codebase. IBM's AIX operating system is an SVR3 derivative. SVR4 System V Release 4.0 was announced on October 18, 1988 and was incorporated into a variety of commercial Unix products from early 1989 onwards. A joint project of AT&T Unix System Laboratories and Sun Microsystems, it combined technology from: SVR3 4.3BSD Xenix SunOS New features included: From BSD: TCP/IP support, sockets, UFS, support for multiple groups, C shell. From SunOS: the virtual file system interface (replacing the File System Switch in System V release 3), NFS, new virtual memory system including support for memory mapped files, an improved shared library system based on the SunOS 4.x model, the OpenWindows GUI environment, External Data Representation (XDR) and ONC RPC. From Xenix: x86 device drivers, binary compatibility with Xenix (in the x86 version of System V). KornShell. ANSI X3J11 C compatibility. Multi-National Language Support (MNLS). Better internationalization support. An application binary interface (ABI) based on Executable and Linkable Format (ELF). Support for standards such as POSIX and X/Open. Many companies licensed SVR4 and bundled it with computer systems such as workstations and network servers. SVR4 systems vendors included Atari (Atari System V), Commodore (Amiga Unix), Data General (DG/UX), Fujitsu (UXP/DS), Hitachi (HI-UX), Hewlett-Packard (HP-UX), NCR (Unix/NS), NEC (EWS-UX, UP-UX, UX/4800, SUPER-UX), OKI (OKI System V), Pyramid Technology (DC/OSx), SGI (IRIX), Siemens (SINIX), Sony (NEWS-OS), Sumitomo Electric Industries (SEIUX), and Sun Microsystems (Solaris) with illumos in the 2010s as the only open-source platform. Software porting houses also sold enhanced and supported Intel x86 versions. SVR4 software vendors included Dell (Dell UNIX), Everex (ESIX), Micro Station Technology (SVR4), Microport (SVR4), and UHC (SVR4). 
The primary platforms for SVR4 were Intel x86 and SPARC; the SPARC version, called Solaris 2 (or, internally, SunOS 5.x), was developed by Sun. The relationship between Sun and AT&T was terminated after the release of SVR4, meaning that later versions of Solaris did not inherit features of later SVR4.x releases. Sun would in 2005 release most of the source code for Solaris 10 (SunOS 5.10) as the open-source OpenSolaris project, creating, with its forks, the only open-source (albeit heavily modified) System V implementation available. After Oracle took over Sun, Solaris was forked into a proprietary release, while illumos, the continuation project, is developed as open source. A consortium of Intel-based resellers including Unisys, ICL, NCR Corporation, and Olivetti developed SVR4.0MP with multiprocessing capability (allowing system calls to be processed from any processor, but interrupt servicing only from a "master" processor). Release 4.1 ES (Enhanced Security) added security features required for Orange Book B2 compliance and Access Control Lists and support for dynamic loading of kernel modules. SVR4.2 / UnixWare In 1992, AT&T USL engaged in a joint venture with Novell, called Univel. That year saw the release of System V Release 4.2 as Univel UnixWare, featuring the Veritas File System. Other vendors included UHC and Consensys. Release 4.2MP, completed in late 1993, added support for multiprocessing and was released as UnixWare 2 in 1995. Eric S. Raymond warned prospective buyers about SVR4.2 versions, as they often did not include on-line man pages. In his 1994 buyers guide, he attributes this change in policy to Unix System Laboratories. SVR5 / UnixWare 7 The Santa Cruz Operation (SCO), owners of Xenix, eventually acquired the UnixWare trademark and the distribution rights to the System V Release 4.2 codebase from Novell, while other vendors (Sun, IBM, HP) continued to use and extend System V Release 4. Novell transferred ownership of the Unix trademark to The Open Group. System V Release 5 was developed in 1997 by the Santa Cruz Operation (SCO) as a merger of SCO OpenServer (an SVR3-derivative) and UnixWare, with a focus on large-scale servers. It was released as SCO UnixWare 7. SCO's successor, The SCO Group, also based SCO OpenServer 6 on SVR5, but the codebase is not used by any other major developer or reseller. SVR6 (cancelled) System V Release 6 was announced by SCO to be released by the end of 2004, but was apparently cancelled. It was supposed to support 64-bit systems. SCO also discontinued Smallfoot in 2004. The industry has coalesced around The Open Group's Single UNIX Specification version 3 (UNIX 03). Market position Availability during the 1990s on x86 platforms In the 1980s and 1990s, a variety of SVR4 versions of Unix were available commercially for the x86 PC platform. However, the market for commercial Unix on PCs declined after Linux and BSD became widely available. In late 1994, Eric S. Raymond discontinued his PC-clone UNIX Software Buyer's Guide on USENET, stating, "The reason I am dropping this is that I run Linux now, and I no longer find the SVr4 market interesting or significant." In 1998, a confidential memo at Microsoft stated, "Linux is on track to eventually own the x86 UNIX market", and further predicted, "I believe that Linux – moreso than NT – will be the biggest threat to SCO in the near future."
An InfoWorld article from 2001 characterized SCO UnixWare as having a "bleak outlook" due to being "trounced" in the market by Linux and Solaris, and IDC predicted that SCO would "continue to see a shrinking share of the market". Project Monterey Project Monterey was started in 1998 to combine major features of existing commercial Unix platforms, as a joint project of Compaq, IBM, Intel, SCO, and Sequent Computer Systems. The target platform was meant to be Intel's new IA-64 architecture and Itanium line of processors. However, the project was abruptly canceled in 2001 after little progress. System V and the Unix market By 2001, several major Unix variants such as SCO UnixWare, Compaq Tru64 UNIX, and SGI IRIX were all in decline. The three major Unix versions doing well in the market were IBM AIX, Hewlett-Packard's HP-UX, and Sun's Solaris. In 2006, when SGI declared bankruptcy, analysts questioned whether Linux would replace proprietary Unix altogether. In a 2006 article written for Computerworld by Mark Hall, the economics of Linux were cited as a major factor driving the migration from Unix to Linux. The article also cites trends in high-performance computing applications as evidence of a dramatic shift from Unix to Linux. In a November 2015 survey of the top 500 supercomputers, Unix was used by only 1.2% (all running IBM AIX), while Linux was used by 98.8%; the same survey in November 2017 reports 100% of them using Linux. System V derivatives continued to be deployed on some proprietary server platforms. The principal variants of System V that remain in commercial use are AIX (IBM), Solaris (Oracle), and HP-UX (HP). According to a study done by IDC, in 2012 the worldwide Unix market was divided between IBM (56%), Oracle (19.2%), and HP (18.6%). No other commercial Unix vendor had more than 2% of the market. Industry analysts generally characterize proprietary Unix as having entered a period of slow but permanent decline. OpenSolaris and illumos distributions OpenSolaris and its derivatives are the only SVR4 descendants that are open-source software. Core system software continues to be developed as illumos, which is used in illumos distributions such as SmartOS, OmniOSce, OpenIndiana and others. System V compatibility The System V interprocess communication mechanisms are available in Unix-like operating systems not derived from System V; in particular, in Linux (a reimplementation of Unix) as well as the BSD derivative FreeBSD. POSIX 2008 specifies a replacement for these interfaces. FreeBSD maintains a binary compatibility layer for the COFF format, which allows FreeBSD to execute binaries compiled for some SVR3.2 derivatives such as SCO UNIX and Interactive UNIX. Modern System V, Linux, and BSD platforms use the ELF file format for natively compiled binaries. References External links PC-clone UNIX Software Buyer's Guide by Eric S. Raymond (posted to USENET in 1994) Unix FAQ - history A Unix History Diagram - The original and continuously updated version of the Unix history, as published by O'Reilly Unix distributions 1983 software
4502354
https://en.wikipedia.org/wiki/Tz%20database
Tz database
The tz database is a collaborative compilation of information about the world's time zones, primarily intended for use with computer programs and operating systems. Paul Eggert is its current editor and maintainer, with the organizational backing of ICANN. The tz database is also known as tzdata, the zoneinfo database or IANA time zone database, and occasionally as the Olson database, referring to the founding contributor, Arthur David Olson. Its uniform naming convention for time zones, such as America/New_York and Europe/Paris, was designed by Paul Eggert. The database attempts to record historical time zones and all civil changes since 1970, the Unix time epoch. It also includes transitions such as daylight saving time, and records leap seconds. The database, as well as some reference source code, is in the public domain. New editions of the database and code are published as changes warrant, usually several times per year. Data structure File formats The tz database is published as a set of text files which list the rules and zone transitions in a human-readable format. For use, these text files are compiled into a set of platform-independent binary files—one per time zone. The reference source code includes such a compiler called zic (zone information compiler), as well as code to read those files and use them in standard APIs such as localtime() and mktime(). Definition of a time zone Within the tz database, a time zone is any national region where local clocks have all agreed since 1970. This definition concerns itself first with geographic areas which have had consistent local clocks. This is different from other definitions which concern themselves with consistent offsets from a prime meridian. Therefore, each of the time zones defined by the tz database may document multiple offsets from UTC, typically including both standard time and daylight saving time. Each time zone has one or more "zone lines" in one of the time zone text files. The first zone line for a time zone gives the name of the time zone; any subsequent zone lines for that time zone leave the name blank, indicating that they apply to the same zone as the previous line. Each zone line for a zone specifies, for a range of date and time, the offset to UTC for standard time, the name of the set of rules that govern daylight saving time (or a hyphen if standard time always applies), the format for time zone abbreviations, and, for all but the last zone line, the date and time at which the range of date and time governed by that line ends. Daylight saving time (DST) rules The rules for daylight saving time are specified in named rule sets. Each rule set has one or more rule lines in the time zone text files.
A rule line contains the name of the rule set to which it belongs, the first year in which the rule applies, the last year in which the rule applies (or "only" if it applies only in one year or "max" if it is the rule currently in effect), the type of year to which the rule applies ("-" if it applies to all years in the specified range, which is almost always the case, otherwise a name used as an argument to a script that indicates whether the year is of the specified type), the month in which the rule takes effect, the day on which the rule takes effect (which could either be a specific day or a specification such as "the last Sunday of the month"), the time of day at which the rule takes effect, the amount of time to add to the offset to UTC when the rule is in effect, and the letter or letters to use in the time zone abbreviation (for example, "S" if the rule governs standard time and "D" if it governs daylight saving time). Names of time zones The time zones have unique names in the form "Area/Location", e.g. "America/New_York". A choice was also made to use English names or equivalents, and to omit punctuation and common suffixes. The underscore character is used in place of spaces. Hyphens are used where they appear in the name of a location. The Area and Location names have a maximum length of 14 characters. Area Area is the name of a continent, an ocean, or "Etc". The continents and oceans currently used are Africa, America, Antarctica, Arctic, Asia, Atlantic, Australia, Europe, Indian, and Pacific. The oceans are included since some islands are hard to connect to a certain continent. Some are geographically connected to one continent and politically to another. See also Boundaries between continents. The special area of "Etc" is used for some administrative zones, particularly for "Etc/UTC" which represents Coordinated Universal Time. In order to conform with the POSIX style, those zone names beginning with "Etc/GMT" have their sign reversed from the standard ISO 8601 convention. In the "Etc" area, zones west of GMT have a positive sign and those east have a negative sign in their name (e.g "Etc/GMT-14" is 14 hours ahead of GMT). Location Location is the name of a specific location within the area – usually a city or small island. Country names are not used in this scheme, primarily because they would not be robust, owing to frequent political and boundary changes. The names of large cities tend to be more permanent. Usually the most populous city in a region is chosen to represent the entire time zone, although another city may be selected if it is more widely known, and another location, including a location other than a city, may be used if it results in a less ambiguous name. In the event that the name of the location used to represent the time zone changes, the convention is to create an alias in future editions so that both the old and new names refer to the same database entry. In some cases the Location is itself represented as a compound name, for example the time zone "America/Indiana/Indianapolis". Three-level names include those under "America/Argentina/...", "America/Kentucky/...", "America/Indiana/...", and "America/North_Dakota/...". The location selected is representative for the entire area. However, if there were differences within the area before 1970, the time zone rules only apply in the named location. 
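As a brief illustration of how the zic-compiled binary files described under "File formats" are consumed through the standard localtime() API, and of the Area/Location zone names described above, the following sketch selects a zone by its tz name via the TZ environment variable and formats the current time. It assumes a POSIX system with the database installed (commonly under /usr/share/zoneinfo); the zone name used here is just an example.

#include <cstdlib>
#include <ctime>
#include <cstdio>

int main() {
    // Select a tz database zone by its Area/Location name; tzset() makes the
    // C library load the corresponding compiled zoneinfo file.
    setenv("TZ", "America/New_York", 1);
    tzset();

    std::time_t now = std::time(nullptr);
    std::tm local{};
    localtime_r(&now, &local);   // applies the zone's UTC offset and DST rules

    char buf[64];
    std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z (UTC%z)", &local);
    std::printf("%s\n", buf);
    return 0;
}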
Examples Example zone and rule lines These are rule lines for the standard United States daylight saving time rules, rule lines for the daylight saving time rules in effect in the US Eastern Time Zone (called "NYC" as New York City is the city representing that zone) in some years, and zone lines for the America/New_York time zone, as of release version tzdata2011n of the time zone database. The zone and rule lines reflect the history of DST in the United States.

# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER/S
Rule US 1918 1919 - Mar lastSun 2:00 1:00 D
Rule US 1918 1919 - Oct lastSun 2:00 0 S
Rule US 1942 only - Feb 9 2:00 1:00 W # War
Rule US 1945 only - Aug 14 23:00u 1:00 P # Peace
Rule US 1945 only - Sep 30 2:00 0 S
Rule US 1967 2006 - Oct lastSun 2:00 0 S
Rule US 1967 1973 - Apr lastSun 2:00 1:00 D
Rule US 1974 only - Jan 6 2:00 1:00 D
Rule US 1975 only - Feb 23 2:00 1:00 D
Rule US 1976 1986 - Apr lastSun 2:00 1:00 D
Rule US 1987 2006 - Apr Sun>=1 2:00 1:00 D
Rule US 2007 max - Mar Sun>=8 2:00 1:00 D
Rule US 2007 max - Nov Sun>=1 2:00 0 S
....
# Rule NAME FROM TO TYPE IN ON AT SAVE LETTER
Rule NYC 1920 only - Mar lastSun 2:00 1:00 D
Rule NYC 1920 only - Oct lastSun 2:00 0 S
Rule NYC 1921 1966 - Apr lastSun 2:00 1:00 D
Rule NYC 1921 1954 - Sep lastSun 2:00 0 S
Rule NYC 1955 1966 - Oct lastSun 2:00 0 S
# Zone NAME GMTOFF RULES FORMAT [UNTIL]
Zone America/New_York -4:56:02 - LMT 1883 November 18, 12:03:58
 -5:00 US E%sT 1920
 -5:00 NYC E%sT 1942
 -5:00 US E%sT 1946
 -5:00 NYC E%sT 1967
 -5:00 US E%sT

Data stored for each zone For each time zone that has multiple offsets (usually due to daylight saving time), the tz database records the exact moment of transition. The format can accommodate changes in the dates and times of transitions as well. Zones may have historical rule changes going back many decades (as shown in the example above). Zone.tab The file zone.tab is in the public domain and lists the zones. Columns and row sorting are described in the comments of the file, as follows:

# This file contains a table with the following columns:
# 1. ISO 3166 2-character country code. See the file `iso3166.tab'.
# 2. Latitude and longitude of the zone's principal location
#    in ISO 6709 sign-degrees-minutes-seconds format,
#    either +-DDMM+-DDDMM or +-DDMMSS+-DDDMMSS,
#    first latitude (+ is north), then longitude (+ is east).
# 3. Zone name used in value of TZ environment variable.
# 4. Comments; present if and only if the country has multiple rows.
#
# Columns are separated by a single tab.
# The table is sorted first by country, then an order within the country that
# (1) makes some geographical sense, and
# (2) puts the most populous zones first, where that does not contradict (1).

Data before 1970 Data before 1970 aims to be correct for the city identifying the region, but is not necessarily correct for the entire region. This is because new regions are created only as required to distinguish clocks since 1970. For example, between 1963-10-23 and 1963-12-09 in Brazil only the states of Minas Gerais, Espirito Santo, Rio de Janeiro, and São Paulo had summer time. However, a requested split from America/Sao_Paulo was rejected in 2010 with the reasoning that, since 1970, the clocks were the same in the whole region. Time in Germany, which is represented by Europe/Berlin, is not correct for the year 1945 when the Trizone used different daylight saving time rules than Berlin.
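Since the column layout of zone.tab is documented in its own comments (quoted above), a small parser is straightforward. The sketch below prints the zone names recorded for one ISO 3166 country code; the installation path /usr/share/zoneinfo/zone.tab and the chosen country code are assumptions for illustration only.

#include <fstream>
#include <sstream>
#include <string>
#include <iostream>

int main() {
    const std::string country = "BR";   // example ISO 3166 code
    std::ifstream in("/usr/share/zoneinfo/zone.tab");
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;       // skip comment lines
        std::istringstream fields(line);
        std::string code, coordinates, zone, comment;
        std::getline(fields, code, '\t');                    // 1. country code
        std::getline(fields, coordinates, '\t');             // 2. ISO 6709 coordinates
        std::getline(fields, zone, '\t');                    // 3. zone name (TZ value)
        std::getline(fields, comment);                       // 4. optional comment
        if (code == country)
            std::cout << zone << (comment.empty() ? "" : "  (" + comment + ")") << '\n';
    }
    return 0;
}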
Coverage Zones covering multiple post-1970 countries There are two zones that cover an area that was covered by two countries after 1970. The database follows the definitions of countries as per ISO 3166-1, whose predecessor, ISO 3166, was first published in 1974. Asia/Aden – two countries until 1990: North Yemen (ISO 3166-1: YE; capital Sana'a) and South Yemen (People's Republic, ISO 3166-1: YD, ISO 3166-3: YDYE; capital: Aden). Europe/Berlin – two countries until 1990: East Germany (ISO 3166-1: DD, ISO 3166-3: DDDE) and West Germany (ISO 3166-1: DE) Maintenance The tz reference code and database is maintained by a group of volunteers. Arthur David Olson makes most of the changes to the code, and Paul Eggert to the database. Proposed changes are sent to the tz mailing list, which is gatewayed to the comp.time.tz Usenet newsgroup. Source files are distributed via the IANA FTP server. Typically, these files are taken by a software distributor like Debian, compiled, and then the source and binaries are packaged as part of that distribution. End users can either rely on their software distribution's update procedures, which may entail some delay, or obtain the source directly and build the binary files themselves. The IETF has published , "Procedures for Maintaining the Time Zone Database" documenting best practices based on similar principles. Unix-like systems The standard path for the timezone database is /usr/share/zoneinfo/ in Linux distributions, macOS, and some other Unix-like systems. Usage and extensions Boundaries of time zones Geographical boundaries in the form of coordinate sets are not part of the tz database, but boundaries are published by Eric Muller in the form of vector polygons. Using these vector polygons, one can determine, for each place on the globe, the tz database zone in which it is located. Use in other standards The Unicode Common Locale Data Repository (CLDR) refers to zones in the tz database. However, as the name for a zone can change from one tz database release to another, the CLDR assigns the UN/LOCODE for the city used in the name for the zone, or an internally-assigned code if there is no such city for the zone, to a tzdb zone. Use in software systems The tz database is used for time zone processing and conversions in many computer software systems, including: BSD-derived systems, including FreeBSD, NetBSD, OpenBSD, DragonFly BSD, macOS, and iOS (they also use the reference TZ database processing code as their TZ POSIX API implementation); the GNU C Library and systems that use it, including GNU, most Linux distributions, BeOS, Haiku, Nexenta OS, and Cygwin; System V Release 4-derived systems, such as Solaris and UnixWare; AIX 6.1 and later (earlier versions of AIX, starting with AIX 5.2, include zoneinfo, for support of third-party applications such as MySQL, but do not use it themselves); Android several other Unix systems, including Tru64, and UNICOS/mp (also IRIX, still maintained but no longer shipped); OpenVMS; the Java Runtime Environment since release 1.8 (2014), see java.time.ZoneId the Perl modules DateTime::TimeZone and DateTime::LeapSecond since 2003; PHP releases since 5.1.0 (2005); the Ruby Gem TZInfo; the Python standard library tzinfo module, and the third-party pytz package; the JavaScript language specification for Internationalization explicitly specifies the usage of IANA Time Zone names for API, and recommends the usage of the time zone data as well. 
Numerous libraries are also available: timezone-js, BigEasy/TimeZone, WallTime-js and moment-timezone; the Pandas (Python) module; the .NET Framework libraries NodaTime, TZ4Net and zoneinfo; the Haskell libraries timezone-series and timezone-olson; the Erlang module ezic; The Go standard library time package; The Rust crate chrono-tz; The Squeak Smalltalk time package; The C++ libraries Boost and Qt, and C++20 chrono standard library's std::chrono::tzdb; The Delphi and Free Pascal library TZDB; The Free Pascal library PascalTZ; The Tool Command Language has a clock command using tzdata; Oracle releases since 10g (2004); PostgreSQL since release 8.0 (2005); the Microsoft SQL Server library SQL Server Time Zone Support; MongoDB since release 3.6; embedded software such as the firmware used in IP clocks. The Olson timezone IDs are also used by the Unicode Common Locale Data Repository (CLDR) and International Components for Unicode (ICU). For example, the CLDR Windows–Tzid table maps Microsoft Windows time zone IDs to the standard Olson names, although such a mapping cannot be perfect because the number of time zones in Windows systems is significantly lower than in the IANA TZ database. History The project's origins go back to 1986 or earlier. 2011 lawsuit On 30 September 2011, a lawsuit, Astrolabe, Inc. v. Olson et al., was filed concerning copyright in the database. As a result, on 6 October 2011, the database's mailing list and FTP site were shut down. The case revolved around the database maintainers' use of The American Atlas, by Thomas G. Shanks, and The International Atlas, by Thomas G. Shanks and Rique Pottenger. It complained of unauthorised reproduction of atlas data in the timezone mailing list archive and in some auxiliary link collections maintained with the database, though it did not actually point at the database itself. The complaint related only to the compilation of historical timezone data, and did not cover current tzdata world timezone tables. This lawsuit was resolved on 22 February 2012 after the involvement of the Electronic Frontier Foundation, when Astrolabe voluntarily moved to dismiss the lawsuit without having ever served the defendants and agreed to a covenant not to sue in the future. Move to ICANN ICANN took responsibility for the maintenance of the database on 14 October 2011. The full database and a description of current and future plans for its maintenance are available online from IANA. See also List of tz database time zones Time zone Daylight saving time References External links General (deprecated, see Official IANA sources below) tz mailing list at ICANN "A literary appreciation of the Olson/Zoneinfo/tz database" by Jon Udell Official IANA sources Home page FTP rsync, at rsync://rsync.iana.org/tz/ Man pages (gives the syntax of source files for the tz database) (gives the format of compiled tz database files)
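As one example from the list of software above, the C++20 std::chrono facilities resolve the same zone names directly against the installed database. The sketch below is a minimal illustration and assumes a standard library whose tzdb support is complete (recent MSVC or GCC releases); the two zone names are the naming examples used earlier in the article.

#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;
    const auto now = system_clock::now();

    // zoned_time pairs an instant with a tz database zone looked up by name.
    zoned_time new_york{"America/New_York", now};
    zoned_time paris{"Europe/Paris", now};

    std::cout << "New York: " << new_york << '\n'
              << "Paris:    " << paris << '\n';

    // The current UTC offset and abbreviation come from the zone's sys_info.
    auto info = new_york.get_info();
    std::cout << "New York offset: " << info.offset << " (" << info.abbrev << ")\n";
    return 0;
}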
1838103
https://en.wikipedia.org/wiki/Roxio
Roxio
Roxio is an American software company specializing in developing consumer digital media products. Its product line includes tools for setting up digital media projects, media conversion software and content distribution systems. The company formed as a spin-off of Adaptec's software division in 2001 and acquired MGI Software in 2002. Sonic Solutions acquired Roxio in 2003, going on to acquire Simple Star and CinemaNow in 2008. Rovi Corporation acquired Sonic Solutions in 2010, but Rovi announced in January 2012 that it would sell Roxio to Canadian software company Corel. That acquisition closed on February 7, 2012. Products
Roxio Creator
Roxio Toast
Easy VHS to DVD
Easy LP to MP3
Popcorn
DVDitPro
PhotoShow
RecordNow
Back on Track
Easy DVD Copy
MyDVD
Retrospect
Roxio Game Capture
Roxio Game Capture HD Pro
References External links American companies established in 2001 Software companies based in the San Francisco Bay Area Companies based in Santa Clara, California 2001 establishments in California Corel 2012 mergers and acquisitions American subsidiaries of foreign companies Software companies of the United States 2001 establishments in the United States Companies established in 2001 Software companies established in 2001
733166
https://en.wikipedia.org/wiki/Nessus%20%28software%29
Nessus (software)
Nessus is a proprietary vulnerability scanner developed by Tenable, Inc. (NASDAQ: TENB). Operation Examples of vulnerabilities and exposures Nessus can scan for include: Vulnerabilities that could allow unauthorized control or access to sensitive data on a system. Misconfiguration (e.g. open mail relay, missing patches, etc.). Default passwords, a few common passwords, and blank/absent passwords on some system accounts. Nessus can also call Hydra (an external tool) to launch a dictionary attack. Denial-of-service vulnerabilities. Nessus scans cover a wide range of technologies including operating systems, network devices, hypervisors, databases, web servers, and critical infrastructure. The results of the scan can be reported in various formats, such as plain text, XML, HTML and LaTeX. The results can also be saved in a knowledge base for debugging. On UNIX, scanning can be automated through the use of a command-line client. Many different commercial, free and open-source tools exist for both UNIX and Windows to manage individual or distributed Nessus scanners. Nessus provides additional functionality beyond testing for known network vulnerabilities. For instance, it can use Windows credentials to examine patch levels on computers running the Windows operating system. Nessus can also support configuration and compliance audits, SCADA audits, and PCI compliance. History The Nessus Project was started by Renaud Deraison in 1998 to provide the Internet community with a free remote security scanner. On October 5, 2005, Tenable Network Security, the company Renaud Deraison co-founded, changed Nessus 3 to a proprietary (closed source) license. The Nessus 2 engine and a minority of the plugins are still GPL, leading to forked open-source projects based on Nessus, such as OpenVAS and Greenbone Sustainable Resilience. Today, the product exists in two formats: a limited free version and a full-featured paid subscription option. Nessus is available for Linux, Windows, and macOS. Tenable, Inc. went public on July 26, 2018, twenty years after Nessus’ creation. See also Penetration test Metasploit Project OpenVAS Security Administrator Tool for Analyzing Networks (SATAN) SAINT (software) Snort (software) Wireshark References External links Nessus 2.2.11 files and source code Nessus source code up to 2.2.9 Pentesting software toolkits Free security software Network analyzers Linux security software Formerly free software
19990354
https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%207
Features new to Windows 7
Some of the new features included in Windows 7 are advancements in touch, speech and handwriting recognition, support for virtual hard disks, support for additional file formats, improved performance on multi-core processors, improved boot performance, and kernel improvements. Shell and user interface Windows 7 retains the Windows Aero graphical user interface and visual style introduced in its predecessor, Windows Vista, but many areas have seen enhancements. Unlike Windows Vista, window borders and the taskbar do not turn opaque when a window is maximized while Windows Aero is active; instead, they remain translucent. Desktop Themes Support for themes has been extended in Windows 7. In addition to providing options to customize colors of window chrome and other aspects of the interface including the desktop background, icons, mouse cursors, and sound schemes, the operating system also includes a native desktop slideshow feature. A new theme pack extension has been introduced, .themepack, which is essentially a collection of cabinet files that consist of theme resources including background images, color preferences, desktop icons, mouse cursors, and sound schemes. The new theme extension simplifies sharing of themes and can also display desktop wallpapers via RSS feeds provided by the Windows RSS Platform. Microsoft provides additional themes for free through its website. The default theme in Windows 7 consists of a single desktop wallpaper named "Harmony" and the default desktop icons, mouse cursors, and sound scheme introduced in Windows Vista; however, none of the desktop backgrounds included with Windows Vista are present in Windows 7. New themes include Architecture, Characters, Landscapes, Nature, and Scenes, and an additional country-specific theme that is determined based on the defined locale when the operating system is installed; although only the theme for a user's home country is displayed within the user interface, the files for all of these other country-specific themes are included in the operating system. All themes included in Windows 7—excluding the default theme—include six wallpaper images. A number of new sound schemes (each associated with an included theme) have also been introduced: Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savana, and Sonata. Themes may introduce their own custom sounds, which can be used with other themes as well. Desktop Slideshow Windows 7 introduces a desktop slideshow feature that periodically changes the desktop wallpaper based on a user-defined interval; the change is accompanied by a smooth fade transition with a duration that can be customized via the Windows Registry. The desktop slideshow feature supports local images and images obtained via RSS. Gadgets With Windows Vista, Microsoft introduced gadgets to display information such as image slideshows and RSS feeds on the user's desktop; the gadgets could optionally be displayed on a sidebar docked to a side of the screen. In Windows 7, the sidebar has been removed, but gadgets can still be placed on the desktop. Gadgets can be brought to the foreground on top of active applications by pressing Win+G. Several new features for gadgets are introduced, including new desktop context menu options to access gadgets and hide all active gadgets; high DPI support; and a feature that can automatically rearrange a gadget based on the position of other gadgets.
Additional new features include cached gadget content; optimizations for touch-based devices; and a gadget for Windows Media Center. Gadgets are more closely integrated with Windows Explorer, but the gadgets themselves continue to operate in a single sidebar.exe process, unlike in Windows Vista where gadgets could operate in multiple sidebar.exe processes. Active gadgets can also be hidden via a new desktop menu option; Microsoft has stated that this option can result in power-saving benefits. Branding and customization For original equipment manufacturers and enterprises, Windows 7 natively supports the ability to customize the wallpaper that is displayed during user login. Because the settings to change the wallpaper are available via the Windows Registry, users can also customize this wallpaper. Options to customize the appearance of interface lighting and shadows are also available. Windows Explorer Libraries Windows Explorer in Windows 7 supports file libraries that aggregate content from various locations – including shared folders on networked systems if the shared folder has been indexed by the host system – and present them in a unified view. The libraries hide the actual location the file is stored in. Searching in a library automatically federates the query to the remote systems, in addition to searching on the local system, so that files on the remote systems are also searched. Unlike search folders, Libraries are backed by a physical location which allows files to be saved in the Libraries. Such files are transparently saved in the backing physical folder. The default save location for a library may be configured by the user, as can the default view layout for each library. Libraries are generally stored in the Libraries special folder, which allows them to be displayed on the navigation pane. By default, a new user account in Windows 7 contains four libraries for different file types: Documents, Music, Pictures, and Videos. They are configured to include the user's profile folders for these respective file types, as well as the computer's corresponding Public folders. The Public folder also contains a hidden Recorded TV library that appears in the Windows Explorer sidepane when TV is set up in Media Center for the first time. In addition to aggregating multiple storage locations, Libraries enable Arrangement Views and Search Filter Suggestions. Arrangement Views allow you to pivot your view of the library's contents based on metadata. For example, selecting the "By Month" view in the Pictures library will display photos in stacks, where each stack represents a month of photos based on the date they were taken. In the Music library, the "By Artist" view will display stacks of albums from the artists in your collection, and browsing into an artist stack will then display the relevant albums. Search Filter Suggestions are a new feature of the Windows 7 Explorer's search box. When the user clicks in the search box, a menu shows up below it showing recent searches as well as suggested Advanced Query Syntax filters that the user can type. When one is selected (or typed in manually), the menu will update to show the possible values to filter by for that property, and this list is based on the current location and other parts of the query already typed. For example, selecting the "tags" filter or typing "tags:" into the search box will display the list of possible tag values which will return search results. 
Arrangement Views and Search Filter Suggestions are database-backed features which require that all locations in the Library be indexed by the Windows Search service. Local disk locations must be indexed by the local indexer, and Windows Explorer will automatically add locations to the indexing scope when they are included in a library. Remote locations can be indexed by the indexer on another Windows 7 machine, on a Windows machine running Windows Search 4 (such as Windows Vista or Windows Home Server), or on another device that implements the MS-WSP remote query protocol. Federated search Windows Explorer also supports federating search to external data sources, such as custom databases or web services, that are exposed over the web and described via an OpenSearch definition. The federated location description (called a Search Connector) is provided as an .osdx file. Once installed, the data source becomes queryable directly from Windows Explorer. Windows Explorer features, such as previews and thumbnails, work with the results of a federated search as well. Miscellaneous shell enhancements Windows Explorer has received numerous minor enhancements that improve its overall functionality. The Explorer's search box and the address bar can be resized. Folders such as those on the desktop or user profile folders can be hidden in the navigation pane to reduce clutter. A new Content view is added, which shows thumbnails and metadata together. A new button to toggle the Preview Pane has been added to the toolbar. The button to create a new folder has been moved from the Organize menu and onto the toolbar. List view provides more space between items than in Windows Vista. Finally, storage space consumption bars that were only present for hard disks in Windows Vista are now shown for removable storage devices. Other areas of the shell have also received similar fine-tunings: Progress bars and overlay icons may now appear on an application's button on the taskbar to better alert the user of the status of the application or the work in progress. File types for which property handlers or iFilters are installed are re-indexed by default. Previously, adding submenus to shell context menus or customizing the context menu's behavior for a certain folder was only possible by installing a form of plug-in known as shell extensions. In Windows 7 however, computer-savvy users can do so by editing Windows Registry and/or desktop.ini files. Additionally, a new shell API was introduced designed to simplify the writing of context menu shell extensions by software developers. Windows 7 includes native support for burning ISO files. The functionality is available when a user selects the Burn disc image option within the context menu of an ISO file. Support for disc image verification is also included. In previous versions of Windows, users were required to install-third-party software to burn ISO images. Start menu The start orb now has a fade-in highlight effect when the user hovers the mouse cursor over it. The Start Menu's right column is now the Aero glass color. In Windows Vista, it was always black. Windows 7's Start menu retains the two-column layout of its predecessors, with several functional changes: The "Documents", "Pictures" and "Music" buttons now link to the Libraries of the same name. A "Devices and Printers" option has been added that displays a new device manager. The "shut down" icon in Windows Vista has been replaced with a text link indicating what action will be taken when the icon is clicked. 
The default action (switch user, log off, lock, restart, sleep, hibernate or shut down) to take is now configurable through the Taskbar and Start Menu Properties window. Taskbar Jump Lists are presented in the Start Menu via a guillemet; when the user moves the mouse cursor over the guillemet, or presses the right-arrow key, the right-hand side of the Start menu is widened and replaced with the application's Jump List. Links to the "Videos", "Downloads", and "Recorded TV", the Connect To menu, the Homegroup and Network menus, the Favorites and Recent Items folders and menus can now be added to the Start menu, and the Administrative Tools folder can be added to the All Programs menu. The Start Search field, introduced in Windows Vista, has been extended to support searching for keywords of Control Panel items. For example, clicking the Start button then typing "wireless" will show Control Panel options related to configuring and connecting to wireless network, adding Bluetooth devices, and troubleshooting. Group Policy settings for Windows Explorer provide the ability for administrators of an Active Directory domain, or an expert user to add up to five Internet web sites and five additional "search connectors" to the Search Results view in the Start menu. The links, which appear at the bottom of the pane, allow the search to be executed again on the selected web site or search connector. Microsoft suggests that network administrators could use this feature to enable searching of corporate Intranets or an internal SharePoint server. Taskbar The Windows Taskbar has seen its most significant revision since its introduction in Windows 95 and combines the previous Quick Launch functionality with open application window icons. The taskbar is now rendered as an Aero glass element whose color can be changed via the Personalization Control Panel. It is 10 pixels taller than in Windows Vista to accommodate touch screen input and a new larger default icon size (although a smaller taskbar size is available), as well as maintain proportion to newer high resolution monitor modes. Running applications are denoted by a border frame around the icon. Within this border, a color effect (dependent on the predominant color of the icon) that follows the mouse cursor also indicates the opened status of the application. The glass taskbar is more translucent than in Windows Vista. Taskbar buttons show icons by default, not application titles, unless they are set to 'not combine', or 'combine when taskbar is full.' In this case, only icons are shown when the application is not running. Programs running or pinned on the taskbar can be rearranged. Items in the notification area can also be rearranged. Pinned applications The Quick Launch toolbar has been removed from the default configuration, but may be easily added. The Windows 7 taskbar is more application-oriented than window-oriented, and therefore doesn't show window titles (these are shown when an application icon is clicked or hovered over). Applications can now be pinned to the taskbar allowing the user instant access to the applications they commonly use. There are a few ways to pin applications to the taskbar. Icons can be dragged and dropped onto the taskbar, or the application's icon can be right-clicked to pin it to the taskbar. Thumbnail previews Thumbnail previews which were introduced in Windows Vista have been expanded to not only preview the windows opened by the application in a small-sized thumbnail view, but to also interact with them. 
The user can close any window opened by clicking the X on the corresponding thumbnail preview. The name of the window is also shown in the thumbnail preview. A "peek" at the window is obtained by hovering over the thumbnail preview. Peeking brings up only the window of the thumbnail preview over which the mouse cursor hovers, and turns any other windows on the desktop transparent. This also works for tabs in Internet Explorer: individual tabs may be peeked at in the thumbnail previews. Thumbnail previews integrate Thumbnail Toolbars which can control the application from the thumbnail previews themselves. For example, if Windows Media Player is opened and the mouse cursor is hovering on the application icon, the thumbnail preview will allow the user the ability to Play, Stop, and Play Next/Previous track without having to switch to the Windows Media Player window. Jump lists Jump lists are menu options available by right-clicking a taskbar icon or holding the left mouse button and sliding towards the center of the desktop on an icon. Each application has a jump list corresponding to its features, Microsoft Word's displaying recently opened documents; Windows Media Player's recent tracks and playlists; frequently opened directories in Windows Explorer; Internet Explorer's recent browsing history and options for opening new tabs or starting InPrivate Browsing; Windows Live Messenger's common tasks such as instant messaging, signing off, and changing online status. Third-party software can add custom actions through a dedicated API. Up to 10 menu items may appear on a list, partially customizable by user. Frequently used files and folders can be pinned by the user as to not get usurped from the list if others are opened more frequently. Task progress Progress bar in taskbar's tasks allows users to know the progress of a task without switching to the pending window. Task progress is used in Windows Explorer, Internet Explorer and third-party software. Notification area The notification area has been redesigned; the standard Volume, Network, Power and Action Center status icons are present, but no other application icons are shown unless the user has chosen them to be shown. A new "Notification Area Icons" control panel has been added which replaces the "Customize Notification Icons" dialog box in the "Taskbar and Start Menu Properties" window first introduced in Windows XP. In addition to being able to configure whether the application icons are shown, the ability to hide each application's notification balloons has been added. The user can then view the notifications at a later time. A triangle to the left of the visible notification icons displays the hidden notification icons. Unlike Windows Vista and Windows XP, the hidden icons are displayed in a window above the taskbar, instead of on the taskbar. Icons can be dragged between this window and the notification area. Aero Peek In previous versions of Windows, the taskbar ended with the notification area on the right-hand side. Windows 7, however, introduces a show desktop button on the far right side of the taskbar which can initiate an Aero Peek feature that makes all open windows translucent when hovered over by a mouse cursor. Clicking this button shows the desktop, and clicking it again brings all windows to focus. The new button replaces the show desktop shortcut located in the Quick Launch toolbar in previous versions of Windows. 
On touch-based devices, Aero Peek can be initiated by pressing and holding the show desktop button; touching the button itself shows the desktop. The button also increases in width to accommodate being pressed by a finger. Window management mouse gestures Aero Snap Windows can be dragged to the top of the screen to maximize them and dragged away to restore them. Dragging a window to the left or right of the screen makes it take up half the screen, allowing the user to tile two windows next to each other. Also, resizing the window to the bottom of the screen or its top will extend the window to full height but retain its width. These features can be disabled via the Ease of Access Center if users do not wish the windows to automatically resize. Aero Shake Aero Shake allows users to clear up any clutter on their screen by shaking (dragging back and forth) a window of their choice with the mouse. All other windows will minimize, while the window the user shook stays active on the screen. When the window is shaken again, all previously minimized windows are restored, similar to desktop preview. Keyboard shortcuts A variety of new keyboard shortcuts have been introduced. Global keyboard shortcuts: Win+Space operates as a keyboard shortcut for Aero Peek. Win+Up maximizes the current window. Win+Down restores the current window if it is maximized; otherwise it minimizes the current window. Win+Shift+Up makes the upper and lower edges of the current window nearly touch the upper and lower edges of the Windows desktop environment, respectively. Win+Shift+Down restores the original size of the current window. Win+Left snaps the current window to the left half of the screen. Win+Right snaps the current window to the right half of the screen. Win+Shift+Left and Win+Shift+Right move the current window to the left or right display. Win+Plus functions as a zoom-in command wherever applicable. Win+Minus functions as a zoom-out command wherever applicable. Win+Esc turns off zoom once enabled. Win+Home operates as a keyboard shortcut for Aero Shake. Win+Tab views opened applications and windows in a 3D stack view. Win+P opens Connect to a Network Projector, which has been updated from previous versions of Windows, and allows one to dictate where the desktop is displayed: on the main monitor, an external display, or both; it also allows one to display two independent desktops on two separate monitors. Taskbar: Shift + Click, or Middle click, starts a new instance of the application, regardless of whether it's already running. Ctrl + Shift + Click starts a new instance with Administrator privileges; by default, a User Account Control prompt will be displayed. Shift + Right-click (or right-clicking the program's thumbnail) shows the titlebar's context menu which, by default, contains "Restore", "Move", "Size", "Maximize", "Minimize" and "Close" commands. If the icon being clicked on is a grouped icon, a specialized context menu with "Restore All", "Minimize All", and "Close All" commands is shown. Ctrl + Click on a grouped icon cycles between the windows (or tabs) in the group. Font management The user interface for font management has been overhauled in Windows 7. As with Windows Vista, the collection of installed fonts is displayed in a Windows Explorer window, but fonts that originate from the same font family appear as icons that are represented as stacks that display font previews within the interface. Windows 7 also introduces the option to hide installed fonts; certain fonts are automatically removed from view based on a user's regional settings. An option to manually hide installed fonts is also available.
Hidden fonts remain installed but are not enumerated when an application asks for a list of available fonts, thus reducing the amount of fonts to scroll through within the interface and also reducing memory usage. Windows 7 includes over 40 new fonts, including a new "Gabriola" font. The dialog box for fonts in Windows 7 has also been updated to display font previews within the interface, which allows users to preview fonts before selecting them. Previous versions of windows only displayed the name of the font. The ClearType Text Tuner which was previously available as a Microsoft Powertoy for earlier Windows versions has been integrated into, and updated for Windows 7. Microsoft would later backport Windows 8 Emoji features to Windows 7. Devices There are two major new user interface components for device management in Windows 7, "Devices and Printers" and "Device Stage". Both of these are integrated with Windows Explorer, and together provide a simplified view of what devices are connected to the computer, and what capabilities they support. Devices and Printers Devices and Printers is a new Control Panel interface that is directly accessible from the Start menu. Unlike the Device Manager Control Panel applet, which is still present, the icons shown on the Devices and Printers screen are limited to components of the system that a non-expert user will recognize as plug-in devices. For example, an external monitor connected to the system will be displayed as a device, but the internal monitor on a laptop will not. Device-specific features are available through the context menu for each device; an external monitor's context menu, for example, provides a link to the "Display Settings" control panel. This new Control Panel applet also replaces the "Printers" window in prior versions of Windows; common printer operations such as setting the default printer, installing or removing printers, and configuring properties such as paper size are done through this control panel. Windows 7 and Server 2008 R2 introduce print driver isolation, which improves the reliability of the print spooler by running printer drivers in a separate process to the spooler service. If a third party print driver fails while isolated, it does not impact other drivers or the print spooler service. Device Stage Device Stage provides a centralized location for an externally connected multi-function device to present its functionality to the user. When a device such as a portable music player is connected to the system, the device appears as an icon on the task bar, as well as in Windows Explorer. Windows 7 ships with high-resolution images of a number of popular devices, and is capable of connecting to the Internet to download images of devices it doesn't recognize. Opening the icon presents a window that displays actions relevant to that device. Screenshots of the technology presented by Microsoft suggest that a mobile phone could offer options for two-way synchronization, configuring ring-tones, copying pictures and videos, managing the device in Windows Media Player, and using Windows Explorer to navigate through the device. Other device status information such as free memory and battery life can also be shown. The actual per-device functionality is defined via XML files that are downloaded when the device is first connected to the computer, or are provided by the manufacturer on an installation disc. 
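Returning to the taskbar progress reporting described under "Task progress" above, that behaviour is exposed to applications through the ITaskbarList3 COM interface introduced with Windows 7. The following hedged sketch shows the general call pattern for reporting progress on a window's taskbar button; hwnd stands for an application's existing top-level window handle, which is assumed rather than created here.

#include <windows.h>
#include <shobjidl.h>   // ITaskbarList3, CLSID_TaskbarList

// Reports "done out of total" on the taskbar button that belongs to hwnd.
void ShowTaskbarProgress(HWND hwnd, ULONGLONG done, ULONGLONG total)
{
    ITaskbarList3* taskbar = nullptr;
    if (SUCCEEDED(CoCreateInstance(CLSID_TaskbarList, nullptr, CLSCTX_INPROC_SERVER,
                                   IID_PPV_ARGS(&taskbar))))
    {
        taskbar->HrInit();
        taskbar->SetProgressState(hwnd, TBPF_NORMAL);   // determinate progress bar
        taskbar->SetProgressValue(hwnd, done, total);
        if (done >= total)
            taskbar->SetProgressState(hwnd, TBPF_NOPROGRESS);  // clear when finished
        taskbar->Release();
    }
}

// Note: CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED) must have been called
// on the thread before this function is used.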
Mobility enhancements Multi-touch support Hilton Locke, who worked on the Tablet PC team at Microsoft, reported on December 11, 2007 that Windows 7 will have new touch features on devices supporting multi-touch. An overview and demonstration of the multi-touch capabilities, including a virtual piano program, a mapping and directions program and a touch-aware version of Microsoft Paint, was given at the All Things Digital Conference on May 27, 2008; a video of the multi-touch capabilities was made available on the web later the same day. Sensors Windows 7 introduces native support for sensors, including accelerometer sensors, ambient light sensors, and location-based sensors; the operating system also provides a unified driver model for sensor devices. A notable use of this technology in Windows 7 is the operating system's adaptive display brightness feature, which automatically adjusts the brightness of a compatible computer's display based on environmental light conditions and factors. Gadgets developed for Windows 7 can also display location-based information. Applications for certain sensor capabilities can be developed without the requisite hardware. Because data acquired by some sensors can be considered personally identifiable information, all sensors are disabled by default in Windows 7, and an account in Windows 7 requires administrative permissions to enable a sensor. Sensors also require user consent to share location data. Power management Battery notification messages Unlike previous versions of Windows, Windows 7 is able to report when a laptop battery is in need of a replacement. The operating system works with design capabilities present in modern laptop batteries to report this information. Hibernation improvements The powercfg command enables the customization of the hibernation file size. By default, Windows 7 automatically sets the size of the hibernation file to 75% of a computer's total physical memory. The operating system also compresses the contents of memory during the hibernate process to minimize the possibility that the contents exceeds the default size of the hibernation file. Power analysis and reporting Windows 7 introduces a new /Energy parameter for the powercfg command, which generates an HTML report of a computer's energy efficiency and displays information related to devices or settings. USB suspension Windows 7 can individually suspend USB hubs and supports selective suspend for all in-box USB class drivers. Graphics DirectX Direct3D 11 is included with Windows 7. It is a strict super-set of Direct3D 10.1, which was introduced in Windows Vista Service Pack 1 and Windows Server 2008. Direct2D and DirectWrite, new hardware-accelerated vector graphics and font rendering APIs built on top of Direct3D 10 that are intended to replace GDI/GDI+ for screen-oriented native-code graphics and text drawing. They can be used from managed applications with the Windows API Code Pack Windows Advanced Rasterization Platform (WARP), a software rasterizer component for DirectX that provides all of the capabilities of Direct3D 10.0 and 10.1 in software. DirectX Video Acceleration-High Definition (DXVA-HD) Direct3D 11, Direct2D, DirectWrite, DXGI 1.1, WARP and several other components are currently available for Windows Vista SP2 and Windows Server 2008 SP2 by installing the Platform Update for Windows Vista. 
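To give a sense of how the Direct2D and DirectWrite APIs listed above are entered, the sketch below simply creates the two factory objects from which drawing and text-layout objects are obtained. It is a minimal illustration of the documented entry points D2D1CreateFactory and DWriteCreateFactory, not a complete rendering example.

#include <d2d1.h>
#include <dwrite.h>
#pragma comment(lib, "d2d1.lib")
#pragma comment(lib, "dwrite.lib")

int main()
{
    // Direct2D entry point: factory for render targets, geometries, brushes, ...
    ID2D1Factory* d2dFactory = nullptr;
    HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory);

    // DirectWrite entry point: factory for text formats and text layouts.
    IDWriteFactory* dwriteFactory = nullptr;
    if (SUCCEEDED(hr))
        hr = DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                                 reinterpret_cast<IUnknown**>(&dwriteFactory));

    // A real application would now create a render target bound to a window
    // (or to a bitmap) and draw hardware-accelerated geometry and text with it.

    if (dwriteFactory) dwriteFactory->Release();
    if (d2dFactory) d2dFactory->Release();
    return SUCCEEDED(hr) ? 0 : 1;
}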
Desktop Window Manager First introduced in Windows Vista, the Desktop Window Manager (DWM) in Windows 7 has been updated to use version 10.1 of Direct3D API, and its performance has been improved significantly. The Desktop Window Manager still requires at least a Direct3D 9-capable video card (supported with new device type introduced with the Direct3D 11 runtime). With a video driver conforming to Windows Display Driver Model v1.1, DXGI kernel in Windows 7 provides 2D hardware acceleration to APIs such as GDI, Direct2D and DirectWrite (though GDI+ was not updated to use this functionality). This allows DWM to use significantly lower amounts of system memory, which do not grow regardless of how many windows are opened, like it was in Windows Vista. Systems equipped with a WDDM 1.0 video card will operate in the same fashion as in Windows Vista, using software-only rendering. The Desktop Window Manager in Windows 7 also adds support for systems using multiple heterogeneous graphics cards from different vendors. Other changes Support for color depths of 30 and 48 bits is included, along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB. Each user of Windows 7 and Server 2008 R2 has individual DPI settings, rather than the machine having a single setting as in previous versions of Windows. DPI settings can be changed by logging on and off, without needing to restart. File system Solid state drives Over time, several technologies have been incorporated into subsequent versions of Windows to improve the performance of the operating system on traditional hard disk drives (HDD) with rotating platters. Since Solid state drives (SSD) differ from mechanical HDDs in some key areas (no moving parts, write amplification, limited number of erase cycles allowed for reliable operation), it is beneficial to disable certain optimizations and add others, specifically for SSDs. Windows 7 incorporates many engineering changes to reduce the frequency of writes and flushes, which benefit SSDs in particular since each write operation wears the flash memory. Windows 7 also makes use of the TRIM command. If supported by the SSD (not implemented on early devices), this optimizes when erase cycles are performed, reducing the need to erase blocks before each write and increasing write performance. Several tools and techniques that were implemented in the past to reduce the impact of the rotational latency of traditional HDDs, most notably disk defragmentation, SuperFetch, ReadyBoost, and application launch prefetching, involve reorganizing (rewriting) the data on the platters. Since SSDs have no moving platters, this reorganization has no advantages, and may instead shorten the life of the solid state memory. Therefore, these tools are by default disabled on SSDs in Windows 7, except for some early generation SSDs that might still benefit. Finally, partitions made with Windows 7's partition-creating tools are created with the SSD's alignment needs in mind, avoiding unwanted systematic write amplification. Virtual hard disks The Enterprise and Ultimate editions of Windows 7 incorporate support for the Virtual Hard Disk (VHD) file format. VHD files can be mounted as drives, created, and booted from, in the same way as WIM files. 
Furthermore, an installed version of Windows 7 can be booted and run from a VHD drive, even on non-virtual hardware, thereby providing a new way to multi boot Windows. Some features such as hibernation and BitLocker are not available when booting from VHD. Disk partitioning By default, a computer's disk is partitioned into two partitions: one of limited size for booting, BitLocker and running the Windows Recovery Environment and the second with the operating system and user files. Removable media Windows 7 has also seen improvements to the Safely Remove Hardware menu, including the ability to eject just one camera card at the same time (from a single hub) and retain the ports for future use without reboot; and the labels of removable media are now also listed, rather than just the drive letter. Windows Explorer now by default only shows memory card reader ports in My Computer if they contain a card. BitLocker to Go BitLocker brings encryption support to removable disks such as USB drives. Such devices can be protected by a passphrase, a recovery key, or be automatically unlocked on a computer. Boot performance According to data gathered from the Microsoft Customer Experience Improvement Program (CEIP), 35% of Vista SP1 installations boot up in 30 seconds or less. The more lengthy boot times on the remainder of the machines are mainly due to some services or programs that are loaded but are not required when the system is first started. Microsoft's Mike Fortin, a distinguished engineer on the Windows team, noted in August 2008 that Microsoft has set aside a team to work solely on the issue, and that team aims to "significantly increase the number of systems that experience very good boot times". They "focused very hard on increasing parallelism of driver initialization". Also, Microsoft aims to "dramatically reduce" the number of system services, along with their demands on processors, storage, and memory. Kernel and scheduling improvements User-mode scheduler The 64-bit versions of Windows 7 and Server 2008 R2 introduce a user-mode scheduling framework. On Microsoft Windows operating systems, scheduling of threads inside a process is handled by the kernel, ntoskrnl.exe. While for most applications this is sufficient, applications with large concurrent threading requirements, such as a database server, can benefit from having a thread scheduler in-process. This is because the kernel no longer needs to be involved in context switches between threads, and it obviates the need for a thread pool mechanism, as threads can be created and destroyed much more quickly when no kernel context switches are required. Prior to Windows 7, Windows used a one-to-one user thread to kernel-thread relationship. It was of course always possible to cobble together a rough many-to-one user-scheduler (with user-level timer interrupts) but if a system call was blocked on any one of the user threads, it would block the kernel thread and accordingly block all other user threads on the same scheduler. A many-to-one model could not take full advantage of symmetric multiprocessing. With Windows 7's user-mode scheduling, a program may configure one or more kernel threads as a scheduler supplied by a programming language library (one per logical processor desired) and then create a user-mode thread pool from which these UMS can draw. The kernel maintains a list of outstanding system calls which allows the UMS to continue running without blocking the kernel thread. 
This configuration can be used as either many-to-one or many-to-many. There are several benefits to a user-mode scheduler. Context switching in user mode can be faster. UMS also introduces cooperative multitasking. Having a customizable scheduler also gives more control over thread execution. Memory management and CPU parallelism The memory manager is optimized to mitigate the problem of total memory consumption in the event of excessive cached read operations, which occurred on earlier releases of 64-bit Windows. Support for up to 256 logical processors Fewer hardware locks and greater parallelism Timer coalescing: modern processors and chipsets can switch to very low power usage levels while the CPU is idle. In order to reduce the number of times the CPU enters and exits idle states, Windows 7 introduces the concept of "timer coalescing"; multiple applications or device drivers which perform actions on a regular basis can be set to occur at once, instead of each action being performed on its own schedule. This facility is available in both kernel mode, via the KeSetCoalescableTimer API (which would be used in place of KeSetTimerEx), and in user mode with the SetWaitableTimerEx Windows API call (which replaces SetWaitableTimer). Multimedia Windows Media Center Windows Media Center in Windows 7 has retained much of the design and feel of its predecessor, but with a variety of user interface shortcuts and browsing capabilities. Playback of H.264 video both locally and through a Media Center Extender (including the Xbox 360) is supported. Some notable enhancements in Windows 7 Media Center include a new mini guide, a new scrub bar, the option to color code the guide by show type, and internet content that is more tightly integrated with regular TV via the guide. All Windows 7 versions now support up to four tuners of each type (QAM, ATSC, CableCARD, NTSC, etc.). When browsing the media library, items that don't have album art are shown in a range of foreground and background color combinations instead of using white text on a blue background. When the left or right remote control buttons are held down to browse the library quickly, a two-letter prefix of the current album name is prominently shown as a visual aid. The Picture Library includes new slideshow capabilities, and individual pictures can be rated. Also, while browsing a media library, a new column appears at the top named "Shared." This allows users to access shared media libraries on other Media Center PCs from directly within Media Center. For television support, the Windows Media Center "TV Pack" released by Microsoft in 2008 is incorporated into Windows Media Center. This includes support for CableCARD and North American (ATSC) clear QAM tuners, as well as creating lists of favorite stations. A gadget for Windows Media Center is also included. Format support Windows 7 includes AVI, WAV, AAC/ADTS file media sinks to read the respective formats, an MPEG-4 file source to read MP4, M4A, M4V, MP4V, MOV and 3GP container formats and an MPEG-4 file sink to output to MP4 format. Windows 7 also includes a media source to read MPEG transport stream/BDAV MPEG-2 transport stream (M2TS, MTS, M2T and AVCHD) files. Transcoding (encoding) support is not exposed through any built-in Windows application but codecs are included as Media Foundation Transforms (MFTs).
In addition to Windows Media Audio and Windows Media Video encoders and decoders, and ASF file sink and file source introduced in Windows Vista, Windows 7 includes an H.264 encoder with Baseline profile level 3 and Main profile support and an AAC Low Complexity (AAC-LC) profile encoder. For playback of various media formats, Windows 7 also introduces an H.264 decoder with Baseline, Main, and High profiles support, up to level 5.1, AAC-LC and HE-AAC v1 (SBR) multichannel, HE-AAC v2 (PS) stereo decoders, MPEG-4 Part 2 Simple Profile and Advanced Simple Profile decoders which includes decoding popular codec implementations such as DivX, Xvid and Nero Digital as well as MJPEG and DV MFT decoders for AVI. Windows Media Player 12 uses the built-in Media Foundation codecs to play these formats by default. Windows 7 also updates the DirectShow filters introduced in Windows Vista for playback of MPEG-2 and Dolby Digital to decode H.264, AAC, HE-AAC v1 and v2 and Dolby Digital Plus (including downmixing to Dolby Digital). Security Action Center, formerly Windows Security Center, now encompasses both security and maintenance. It was called Windows Health Center and Windows Solution Center in earlier builds. A new user interface for User Account Control has been introduced, which provides the ability to select four different levels of notifications, one of these notification settings, Default, is new to Windows 7. Geo-tracking capabilities are also available in Windows 7. The feature will be disabled by default. When enabled the user will only have limited control as to which applications can track their location. The Encrypting File System supports Elliptic-curve cryptographic algorithms (ECC) in Windows 7. For backward compatibility with previous releases of Windows, Windows 7 supports a mixed-mode operation of ECC and RSA algorithms. EFS self-signed certificates, when using ECC, will use 256-bit key by default. EFS can be configured to use 1K/2k/4k/8k/16k-bit keys when using self-signed RSA certificates, or 256/384/512-bit keys when using ECC certificates. In Windows Vista, the Protected User-Mode Audio (PUMA) content protection facilities are only available to applications that are running in a Protected Media Path environment. Because only the Media Foundation application programming interface could interact with this environment, a media player application had to be designed to use Media Foundation. In Windows 7, this restriction is lifted. PUMA also incorporates stricter enforcement of "Copy Never" bits when using Serial Copy Management System (SCMS) copy protection over an S/PDIF connection, as well as with High-bandwidth Digital Content Protection (HDCP) over HDMI connections. Biometrics Windows 7 includes the new Windows Biometric Framework. This framework consists of a set of components that standardizes the use of fingerprint biometric devices. In prior releases of Microsoft Windows, biometric hardware device manufacturers were required to provide a complete stack of software to support their device, including device drivers, software development kits, and support applications. Microsoft noted in a white paper on the Windows Biometric Framework that the proliferation of these proprietary stacks resulted in compatibility issues, compromised the quality and reliability of the system, and made servicing and maintenance more difficult. 
By incorporating the core biometric functionality into the operating system, Microsoft aims to bring biometric device support on par with other classes of devices. A new Control Panel called Biometric Device Control Panel is included which provides an interface for deleting stored biometrics information, troubleshooting, and enabling or disabling the types of logins that are allowed using biometrics. Biometrics configuration can also be configured using Group Policy settings. Networking DirectAccess, a VPN tunnel technology based on IPv6 and IPsec. DirectAccess requires domain-joined machines, Windows Server 2008 R2 on the DirectAccess server, at least Windows Server 2008 domain controllers and a PKI to issue authentication certificates. BranchCache, a WAN optimization technology. The Bluetooth stack includes improvements introduced in the Windows Vista Feature Pack for Wireless, namely, Bluetooth 2.1+EDR support and remote wake from S3 or S4 support for self-powered Bluetooth modules. NDIS 6.20 (Network Driver Interface Specification) WWAN (Mobile broadband) support (driver model based on NDIS miniport driver for CDMA and GSM device interfaces, Connection Manager support and Mobile Broadband COM and COM Interop API). Wireless Hosted Network capabilities: The Windows 7 wireless LAN service supports two new functions – Virtual Wi-Fi, that allows a single wireless network adapter to act like two client devices, or a software-based wireless access point (SoftAP) to act as both a wireless hotspot in infrastructure mode and a wireless client at the same time. This feature is not exposed through the GUI; however the Virtual WiFi Miniport adapter can be installed and enabled for wireless adapters with drivers that support a hosted network by using the command netsh wlan set hostednetwork mode=allow "ssid=<network SSID>" "key=<wlan security key>" keyusage=persistent|temporary at an elevated command prompt. The wireless SoftAP can afterwards be started using the command netsh wlan start hostednetwork. Windows 7 also supports WPA2-PSK/AES security for the hosted network, but DNS resolution for clients requires it to be used with Internet Connection Sharing or a similar feature. SMB 2.1, which includes minor performance enhancements over SMB2, such as a new opportunistic locking mechanism. RDP 7.0 Background Intelligent Transfer Service 4.0 HomeGroup Alongside the workgroup system used by previous versions, Windows 7 adds a new ad hoc home networking system known as HomeGroup. The system uses a password to join computers into the group, and allows users' libraries, along with individual files and folders, to be shared between multiple computers. Only computers running Windows 7 to Windows 10 version 1709 can create or join a HomeGroup; however, users can make files and printers shared in a HomeGroup accessible to Windows XP and Windows Vista through a separate account, dedicated to sharing HomeGroup content, that uses traditional Windows sharing. HomeGroup support was deprecated in Windows 10 and has been removed from Windows 10 version 1803 and later. HomeGroup as a concept is very similar to a feature slated for Windows Vista, known as Castle, which would have made it possible to have an identification service for all members on the network, without a centralized server. HomeGroup is created in response to the need for a simple sharing model for inexperienced users who need to share files without wrestling with user accounts, Security descriptors and share permissions. 
To that end, Microsoft previously created Simple File Sharing mode in Windows XP that, once enabled, caused all connected computers to be authenticated as Guest. Under this model, either a certain file or folder was shared with anyone who connects to the network (even unauthorized parties who are in range of the wireless network) or was not shared at all. In a HomeGroup, however: Communication between HomeGroup computers is encrypted with a pre-shared password. A certain file or folder can be shared with the entire HomeGroup (anyone who joins) or a certain person only. HomeGroup computers can also be a member of a Windows domain or Windows workgroup at the same time and take advantage of those file sharing mechanisms. Only computers that support HomeGroup (Windows 7 to Windows 10 version 1709) can join the network. Windows Firewall Windows 7 adds support for multiple firewall profiles. The Windows Firewall in Windows Vista dynamically changes which network traffic is allowed or blocked based on the location of the computer (based on which network it is connected to). This approach falls short if the computer is connected to more than one network at the same time (as for a computer with both an Ethernet and a wireless interface). In this case, Vista applies the profile that is more secure to all network connections. This is often not desirable; Windows 7 resolves this by being able to apply a separate firewall profile to each network connection. DNSSEC Windows 7 and Windows Server 2008 R2 introduce support for Domain Name System Security Extensions (DNSSEC), a set of specifications for securing certain kinds of information provided by the Domain Name System (DNS) as used on Internet Protocol (IP) networks. DNSSEC employs digital signatures to ensure the authenticity of DNS data received from a DNS server, which protect against DNS cache poisoning attacks. Management features Windows 7 contains Windows PowerShell 2.0 out-of-the-box, which is also available as a download to install on older platforms: Windows Troubleshooting Platform Windows PowerShell Integrated Scripting Environment PowerShell Remoting Other new management features include: AppLocker (a set of Group Policy settings that evolved from Software Restriction Policies, to restrict which applications can run on a corporate network, including the ability to restrict based on the application's version number or publisher) Group Policy Preferences (also available as a download for Windows XP and Windows Vista). The Windows Automation API (also available as a download for Windows XP and Windows Vista). Upgraded components Windows 7 includes Internet Explorer 8, .NET Framework 3.5 SP1, Internet Information Services (IIS) 7.5, Windows Installer 5.0 and a standalone XPS Viewer. Paint, Calculator, Resource Monitor, on-screen keyboard, and WordPad have also been updated. Paint and WordPad feature a Ribbon interface similar to the one introduced in Office 2007, with both sporting several new features. WordPad supports Office Open XML and ODF file formats. Calculator has been rewritten, with multiline capabilities including Programmer and Statistics modes, unit conversion, and date calculations. Calculator was also given a graphical facelift, the first since Windows 95 in 1995 and Windows NT 4.0 in 1996. Resource Monitor includes an improved RAM usage display and supports display of TCP/IP ports being listened to, filtering processes using networking, filtering processes with disk activity and listing and searching process handles (e.g. 
files used by a process) and loaded modules (files required by an executable file, e.g. DLL files). Microsoft Magnifier, an accessibility utility for low vision users, has been dramatically improved. Magnifier now supports the full screen zoom feature, whereas previous Windows versions had the Magnifier attached to the top of the screen in a dock layout. The new full screen feature is enabled by default; however, it requires Windows Aero to take advantage of the full screen zoom feature. If Windows is set to the Windows 7 Basic, Windows Classic, or High Contrast themes, Magnifier will still function as it did in Windows Vista and earlier. Windows Installer 5.0 supports installing and configuring Windows Services, and provides developers with more control over setting permissions during software installation. Neither of these features is available for prior versions of Windows; custom actions will continue to be required for Windows Installer packages that need to implement these features. Other features Windows 7 improves the Tablet PC Input Panel to make faster corrections using new gestures, supports text prediction in the soft keyboard and introduces a new Math Input Panel for inputting math into programs that support MathML. It recognizes handwritten math expressions and formulas. Additional language support for handwriting recognition can be gained by installing the respective MUI pack for that language (also called language pack). Windows 7 introduces a new Problem Steps Recorder tool that enables users to record their interaction with software for analysis and support. The feature can be used to replicate a problem to show support staff when and where it occurred. As opposed to the blank start-up screen in Windows Vista, Windows 7's start-up screen consists of an animation featuring four colored light balls (one red, one yellow, one green, and one blue). They twirl around for a few seconds and then join together to form a glowing Windows logo. This only occurs on displays with a vertical resolution of 768 pixels or higher, as the animation is 1024x768. Any screen with a resolution below this displays the same startup screen that Vista used. The Starter Edition of Windows 7 can run an unlimited number of applications, compared to only 3 in Windows Vista Starter. Microsoft had initially intended to ship Windows 7 Starter Edition with this limitation, but announced after the release of the Release Candidate that this restriction would not be imposed in the final release. For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), new features to shorten application install times, reduced UAC prompts, simplified development of installation packages, and improved globalization support through a new Extended Linguistic Services API. If an application crashes twice in a row, Windows 7 will automatically attempt to apply a shim; if an application fails to install, a similar self-correcting fix is attempted by launching a tool that asks the user some questions about the application. Windows 7 includes an optional TIFF IFilter that enables indexing of TIFF documents by reading them with optical character recognition (OCR), thus making their text content searchable. 
TIFF iFilter supports Adobe TIFF Revision 6.0 specifications and four compression schemes: LZW, JPEG, CCITT v4, CCITT v6 The Windows Console now adheres to the current Windows theme, instead of showing controls from the Windows Classic theme. Games Internet Spades, Internet Backgammon and Internet Checkers, which were removed from Windows Vista, were restored in Windows 7. Users can disable many more Windows components than was possible in Windows Vista. The new components which can now be disabled include: Handwriting Recognition, Internet Explorer, Windows DVD Maker, Windows Fax and Scan, Windows Gadget Platform Windows Media Center, Windows Media Player, Windows Search, and the XPS Viewer (with its services). Windows XP Mode is a fully functioning copy of 32-bit Windows XP Professional SP3 running in a virtual machine in Windows Virtual PC (as opposed to Hyper-V) running on top of Windows 7. Through the use of the RDP protocol, it allows applications incompatible with Windows 7 to be run on the underlying Windows XP virtual machine, but still to appear to be part of the Windows 7 desktop, thereby sharing the native Start Menu of Windows 7 as well as participating in file type associations. It is not distributed with Windows 7 media, but is offered as a free download to users of the Professional, Enterprise and Ultimate editions from Microsoft's web site. Users of Home Premium who want Windows XP functionality on their systems can download Windows Virtual PC free of charge, but must provide their own licensed copy of Windows XP. XP Mode is intended for consumers rather than enterprises, as it offers no central management capabilities. Microsoft Enterprise Desktop Virtualization (Med-V) is available for the enterprise market. Native support for Hyper-V virtual machines through the inclusion of VMBus integration drivers. AVCHD camera support and Universal Video Class 1.1 Protected Broadcast Driver Architecture (PBDA) for TV tuner cards, first implemented in Windows Media Center TV Pack 2008 for Windows Vista. Multi-function devices and Device Containers: Prior to Windows 7, every device attached to the system was treated as a single functional end-point, known as a devnode, that has a set of capabilities and a "status". While this is appropriate for single-function devices (such as a keyboard or scanner), it does not accurately represent multi-function devices such as a combined printer, fax machine, and scanner, or web-cams with a built-in microphone. In Windows 7, the drivers and status information for multi-function device can be grouped together as a single "Device Container", which is presented to the user in the new "Devices and Printers" Control Panel as a single unit. This capability is provided by a new Plug and Play property, ContainerID, which is a Globally Unique Identifier that is different for every instance of a physical device. The Container ID can be embedded within the device by the manufacturer, or created by Windows and associated with each devnode when it is first connected to the computer. In order to ensure the uniqueness of the generated Container ID, Windows will attempt to use information unique to the device, such as a MAC address or USB serial number. Devices connected to the computer via USB, IEEE 1394 (FireWire), eSATA, PCI Express, Bluetooth, and Windows Rally's PnP-X support can make use of Device Containers. Windows 7 will also contain a new FireWire (IEEE 1394) stack that fully supports IEEE 1394b with S800, S1600 and S3200 data rates. 
The ability to join a domain offline. Service Control Manager in conjunction with the Windows Task Scheduler supports trigger-start services. See also References External links What's New in Windows 7 for IT Pros (RC) Windows 7 Support Windows 7 Windows 7
44732528
https://en.wikipedia.org/wiki/Sony%20Pictures%20hack
Sony Pictures hack
On November 24, 2014, a hacker group identifying itself as "Guardians of Peace" leaked confidential data from the film studio Sony Pictures. The data included personal information about Sony Pictures employees and their families, emails between employees, information about executive salaries at the company, copies of then-unreleased Sony films, plans for future Sony films, scripts for certain films, and other information. The perpetrators then employed a variant of the Shamoon wiper malware to erase Sony's computer infrastructure. During the hack, the group demanded that Sony withdraw its then-upcoming film The Interview, a comedy about a plot to assassinate North Korean leader Kim Jong-un, and threatened terrorist attacks at cinemas screening the film. After many major U.S. theater chains opted not to screen The Interview in response to these threats, Sony chose to cancel the film's formal premiere and mainstream release, opting to skip directly to a downloadable digital release followed by a limited theatrical release the next day. United States intelligence officials, after evaluating the software, techniques, and network sources used in the hack, alleged that the attack was sponsored by the government of North Korea, which has since denied all responsibility. Hack and perpetrators The exact duration of the hack is still unknown. U.S. investigators say the culprits spent at least two months copying critical files. A purported member of the Guardians of Peace (GOP) who claimed to have performed the hack stated that they had access for at least a year prior to its discovery in November 2014, according to Wired. The hackers involved claim to have taken more than 100 terabytes of data from Sony, but that claim has never been confirmed. The attack was conducted using malware. Although Sony was not specifically mentioned in its advisory, US-CERT said that attackers used a Server Message Block (SMB) Worm Tool to conduct attacks against a major entertainment company. Components of the attack included a listening implant, backdoor, proxy tool, destructive hard drive tool, and destructive target cleaning tool. The components clearly suggest an intent to gain repeated entry, extract information, and be destructive, as well as remove evidence of the attack. Sony was made aware of the hack on Monday, November 24, 2014, when the previously installed malware rendered many Sony employees' computers inoperable and displayed a warning from a group calling itself the Guardians of Peace, along with a portion of the confidential data taken during the hack. Several Sony-related Twitter accounts were also taken over. This followed a message that several Sony Pictures executives had received via email on the previous Friday, November 21; the message, coming from a group called "God'sApstls", demanded "monetary compensation" or otherwise, "Sony Pictures will be bombarded as a whole". This email message had been mostly ignored by executives, lost in the volume they had received or treated as spam email. In addition to the activation of the malware on November 24, the message included a warning for Sony to decide on their course of action by 11:00 p.m. that evening, although no apparent threat was made when that deadline passed. In the days following this hack, the Guardians of Peace began leaking yet-unreleased films and started to release portions of the confidential data to attract the attention of social media sites, although they did not specify what they wanted in return. 
Sony quickly organized internal teams to try to manage the loss of data to the Internet, and contacted the FBI and the private security firm FireEye to help protect Sony employees whose personal data was exposed by the hack, repair the damaged computer infrastructure and trace the source of the leak. The first public report concerning a North Korean link to the attack was published by Re/code on November 28 and later confirmed by NBC News. On December 8, 2014, alongside the eighth large data dump of confidential information, the Guardians of Peace threatened Sony with language relating to the September 11 attacks that drew the attention of U.S. security agencies. North Korean state-sponsored hackers are suspected by the United States of being involved in part due to specific threats made toward Sony and movie theaters showing The Interview, a comedy film about an assassination attempt against Kim Jong-un. North Korean officials had previously expressed concerns about the film to the United Nations, stating that "to allow the production and distribution of such a film on the assassination of an incumbent head of a sovereign state should be regarded as the most undisguised sponsoring of terrorism as well as an act of war." In its first quarter financials for 2015, Sony Pictures set aside $15 million to deal with ongoing damages from the hack. Sony has bolstered its cyber-security infrastructure as a result, using solutions to prevent similar hacks or data loss in the future. Sony co-chairperson Amy Pascal announced in the wake of the hack that she would step down as of May 2015, and instead will become more involved with film production under Sony. Information obtained According to a notice letter dated December 8, 2014, from SPE to its employees, SPE learned on December 1, 2014, that personally identifiable information about employees and their dependents may have been obtained by unauthorized individuals as a result of a "brazen cyber-attack", including names, addresses, Social Security numbers and financial information. On December 7, 2014, C-SPAN reported that the hackers stole 47,000 unique Social Security numbers from the SPE computer network. Although personal data may have been stolen, early news reports focused mainly on celebrity gossip and embarrassing details about Hollywood and film industry business affairs gleaned by the media from electronic files, including private email messages. Among the information revealed in the emails was that Sony CEO Kazuo Hirai pressured Sony Pictures co-chairwoman Amy Pascal to "soften" the assassination scene in the upcoming Sony film The Interview. Many details relating to the actions of the Sony Pictures executives, including Pascal and Michael Lynton, were also released, in a manner that appeared to be intended to spur distrust between these executives and other employees of Sony. Other emails released in the hack showed Pascal and Scott Rudin, a film and theatrical producer, discussing Angelina Jolie. In the emails, Rudin referred to Jolie as "a minimally talented spoiled brat" because Jolie wanted David Fincher to direct her film Cleopatra, which Rudin felt would interfere with Fincher directing a planned film about Steve Jobs. Amy Pascal and Rudin were also noted to have had an email exchange about Pascal's upcoming encounter with Barack Obama that included characterizations described as racist, which led to Pascal's resignation from Sony. 
The two had suggested they should mention films about African-Americans upon meeting the president, such as Django Unchained, 12 Years a Slave and The Butler, all of which depict slavery in the United States or the pre-civil rights era. Pascal and Rudin later apologized. Details of lobbying efforts by politician Mike Moore on behalf of the Digital Citizens Alliance and FairSearch against Google were also revealed. The leak revealed multiple details of behind-the-scenes politics on Columbia Pictures' current Spider-Man film series, including emails between Pascal and others to various heads of Marvel Studios. Due to the outcry from fans, the Spider-Man license was eventually negotiated to be shared between both studios. In addition to the emails, a copy of the screenplay for the James Bond film Spectre, released in 2015, was obtained. Several future Sony Pictures films, including Annie, Mr. Turner, Still Alice and To Write Love on Her Arms, were also leaked. The hackers intended to release additional information on December 25, 2014, which coincided with the release date of The Interview in the United States. According to The Daily Dot, based on the email leaks, while he was at Sony, executive Charles Sipkins was responsible for following senior executives' orders to edit Wikipedia articles about them. In December 2014, former Sony Pictures Entertainment employees filed four lawsuits against the company for not protecting their data that was released in the hack, which included Social Security numbers and medical information. As part of the emails, it was revealed that Sony was in talks with Nintendo to make an animated film based on the Super Mario Bros. series (which came to fruition 4 years later, albeit under Universal and Illumination instead of Sony). In January 2015, details were revealed of the MPAA's lobbying of the United States International Trade Commission to mandate U.S. ISPs either at the internet transit level or consumer level internet service provider, to implement IP address blocking pirate websites as well as linking websites. WikiLeaks published over 30,000 documents that were obtained via the hack in April 2015, with founder Julian Assange stating that the document archive "shows the inner workings of an influential multinational corporation" that should be made public. In November 2015, after Charlie Sheen revealed he was HIV positive in a television interview to Matt Lauer, it was revealed that information about his diagnosis was leaked in an email between senior Sony bosses dated March 10, 2014. In December, Snap Inc., due to the hack, was revealed to have acquired Vergence Labs for $15 million in cash and stock, the developers of Epiphany Eyewear, and mobile app Scan for $150 million. Threats surrounding The Interview On December 16, for the first time since the hack, the "Guardians of Peace" mentioned the then-upcoming film The Interview by name, and threatened to take terrorist actions against the film's New York City premiere at Sunshine Cinema on December 18, as well as on its American wide release date, set for December 25. Sony pulled the theatrical release the following day. Seth Rogen and James Franco, the stars of The Interview, responded by saying they did not know if it was definitely caused by the film, but later canceled all media appearances tied to the film outside of the planned New York City premiere on December 16, 2014. 
Following initial threats made towards theaters that would show The Interview, several theatrical chains, including Carmike Cinemas, Bow Tie Cinemas, Regal Entertainment Group, Showcase Cinemas, AMC Theatres, Cinemark Theatres, as well as several independent movie theater owners announced that they would not screen The Interview. The same day, Sony stated that they would allow theaters to opt out of showing The Interview, but later decided to fully pull the national December 25 release of the film, as well as announce that there were "no further release plans" to release the film on any platform, including home video, in the foreseeable future. On December 18, two messages (both allegedly from the Guardians of Peace) were released. One, sent in a private message to Sony executives, stated that they would not release any further information if Sony never releases the film and removed its presence from the internet. The other, posted to Pastebin, a web application used for text storage that the Guardians of Peace have used for previous messages, stated that the studio had "suffered enough" and could release The Interview, but only if Kim Jong-un's death scene was not "too happy". The post also stated that the company cannot "test [them] again", and that "if [Sony Pictures] makes anything else, [they] will be here ready to fight". President Barack Obama, in an end-of-year press speech on December 19, commented on the Sony hacking and stated that he felt Sony made a mistake in pulling the film, and that producers should "not get into a pattern where you are intimidated by these acts". He also said, "We will respond proportionally and we will respond in a place and time and manner that we choose." In response to President Obama's statement, Sony Entertainment's CEO Michael Lynton said on the CNN program Anderson Cooper 360 that the public, the press and the President misunderstood the events. Lynton said the decision to cancel the wide release was in response to a majority of theaters pulling their showings and not to the hackers' threats. Lynton stated that they would seek other options to distribute the film in the future, and noted "We have not given in. And we have not backed down. We have always had every desire to have the American public see this movie." On December 23, Sony opted to authorize approximately 300 mostly-independent theaters to show The Interview on Christmas Day, as the four major theater chains had yet to change their earlier decision not to show the film. The FBI worked with these theaters to detail the specifics of the prior threats and how to manage security for the showings, but noted that there was no actionable intelligence on the prior threats. Sony's Lynton stated on the announcement that "we are proud to make it available to the public and to have stood up to those who attempted to suppress free speech". The Interview was also released to Google Play, Xbox Video, and YouTube on December 24. No incidents predicated by the threats occurred with the release, and instead, the unorthodox release of the film led to it being considered a success due to increased interest in the film following the attention it had received. On December 27, the North Korean National Defence Commission released a statement accusing Obama of being "the chief culprit who forced the Sony Pictures Entertainment to indiscriminately distribute the movie." U.S. accusations and formal charges against North Korea U.S. 
government officials stated on December 17, 2014 their belief that the North Korean government was "centrally involved" in the hacking, although there was initially some debate within the White House whether or not to make this finding public. White House officials treated the situation as a "serious national security matter", and the Federal Bureau of Investigation (FBI) formally stated on December 19 that they connected the North Korean government to the cyber-attacks. Including undisclosed evidence, these claims were made based on the use of similar malicious hacking tools and techniques previously employed by North Korean hackers—including North Korea's cyberwarfare agency Bureau 121 on South Korean targets. According to the FBI: "[A] technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korea previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks. "The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack. The FBI later clarified that the source IP addresses were associated with a group of North Korean businesses located in Shenyang in northeastern China. "Separately, the tools used in the SPE attack have similarities to a cyber-attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea." The FBI later clarified more details of the attacks, attributing them to North Korea by noting that the hackers were "sloppy" with the use of proxy IP addresses that originated from within North Korea. At one point the hackers logged into the Guardians of Peace Facebook account and Sony's servers without effective concealment. FBI Director James Comey stated that Internet access is tightly controlled within North Korea, and as such, it was unlikely that a third party had hijacked these addresses without allowance from the North Korean government. The National Security Agency assisted the FBI in analyzing the attack, specifically in reviewing the malware and tracing its origins; NSA director Admiral Michael Rogers agreed with the FBI that the attack originated from North Korea. A disclosed NSA report published by Der Spiegel stated that the agency had become aware of the origins of the hack due to their own cyber-intrusion on North Korea's network that they had set up in 2010, following concerns of the technology maturation of the country. The North Korean news agency KCNA denied the "wild rumours" of North Korean involvement, but said that "The hacking into the SONY Pictures might be a righteous deed of the supporters and sympathizers with the DPRK in response to its appeal." North Korea offered to be part of a joint probe with the United States to determine the hackers' identities, threatening consequences if the United States refused to collaborate and continued the allegation. The U.S. refused and asked China for investigative assistance instead. 
Some days after the FBI's announcement, North Korea temporarily suffered a nationwide Internet outage, which the country claimed to be the United States' response to the hacking attempts. On the day following the FBI's accusation of North Korea's involvement, the FBI received an email purportedly from the hacking group, linking to a YouTube video entitled "you are an idiot!", apparently mocking the organization. On December 19, 2014, U.S. Secretary of Homeland Security Jeh Johnson released a statement saying, "The cyber attack against Sony Pictures Entertainment was not just an attack against a company and its employees. It was also an attack on our freedom of expression and way of life." He encouraged businesses and other organizations to use the Cybersecurity Framework developed by the National Institute of Standards and Technology (NIST) to assess and limit cyber risks and protect against cyber threats. On the same day, U.S. Secretary of State John Kerry published his remarks condemning North Korea for the cyber-attack and threats against movie theatres and moviegoers. "This provocative and unprecedented attack and subsequent threats only strengthen our resolve to continue to work with partners around the world to strengthen cybersecurity, promote norms of acceptable state behavior, uphold freedom of expression, and ensure that the Internet remains open, interoperable, secure and reliable," he said. On January 2, 2015, the U.S., under an Executive Order issued by President Obama, installed additional economic sanctions on already-sanctioned North Korea for the hack, which North Korean officials called out as "groundlessly stirring up bad blood towards" the country. Doubts about accusations against North Korea Cyber security expert Kurt Stammberger from cyber security firm Norse, DEFCON organizer and Cloudflare researcher Marc Rogers, Hector Monsegur and Kim Zetter, a security journalist at Wired magazine, have expressed doubt and tended to agree that North Korea might not be behind the attack. Michael Hiltzik, a journalist for the Los Angeles Times, said that all evidence against North Korea was "circumstantial" and that some cybersecurity experts were "skeptical" about attributing the attack to the North Koreans. Cybersecurity expert Lucas Zaichkowsky said, "State-sponsored attackers don't create cool names for themselves like 'Guardians of Peace' and promote their activity to the public." Kim Zetter of Wired magazine called released evidence against the government "flimsy". Former hacker Hector Monsegur, who once hacked into Sony, explained to CBS News that exfiltrating one or one hundred terabytes of data "without anyone noticing" would have taken months or years, not weeks. Monsegur doubted the accusations due to North Korea's insufficient internet infrastructure to handle the transfer of that much data. He believed that it could have been either Chinese, Russian, or North Korean-sponsored hackers working outside of the country, but most likely to be the deed of a Sony employee. Stammberger provided to the FBI Norse's findings that suggest the hack was an inside job, stating, "Sony was not just hacked; this is a company that was essentially nuked from the inside. We are very confident that this was not an attack master-minded by North Korea and that insiders were key to the implementation of one of the most devastating attacks in history." 
Stammberger believes that the security failure may have originated from six disgruntled former Sony employees, based on their past skill sets and discussions these people made in chat rooms. Norse employees identified these people from a list of workers that were eliminated from Sony during a restructuring in May 2014, and noted that some had made very public and angry responses to their firing, and would be in appropriate positions to identify the means to access secure parts of Sony's servers. After a private briefing lasting three hours, the FBI formally rejected Norse's alternative assessment. Seth Rogen also expressed doubts about the claims that North Korea was behind the hack. Based on the timeline of events and the amount of information hacked, he believes the hack may have been conducted by a Sony employee. "I've also heard people say that they think someone was hired to do the hack as a way of getting Amy Pascal fired. I don't know if I subscribe to those theories, but I kind of don't think it was North Korea." Other investigations In response to allegations that the intrusion was the result of an inside job, or something other than a state-sponsored cyber attack, computer forensic specialist Kevin Mandia, president of the security firm FireEye, commented that there was not a "shred of evidence" that an insider was responsible for the attack and that the evidence uncovered by his security firm supports the position of the United States government. In February 2016, analytics firm Novetta issued a joint investigative report into the attack. The report, published in collaboration with Kaspersky Lab, Symantec, AlienVault, Invincea, Trend Micro, Carbon Black, PunchCyber, RiskIQ, ThreatConnect and Volexity, concluded that a well-resourced organization had committed the intrusion, and that "we strongly believe that the SPE attack was not the work of insiders or hacktivists". The analysis said that the same group is engaged in military espionage campaigns. Formal charges The U.S. Department of Justice issued formal charges related to the Sony hack on North Korean citizen Park Jin-hyok on September 6, 2018. The Department of Justice contends that Park was a North Korean hacker that worked for the country's Reconnaissance General Bureau, the equivalent of the Central Intelligence Agency. The Department of Justice also asserted that Park was partially responsible for arranging the WannaCry ransomware attack of 2017, having developed part of the ransomware software. The Department of Justice had previously identified Park and had been monitoring him for some time, but could not indict him immediately as much of the information around him was classified. The Criminal Complaint was unsealed by the US Department of Justice via a press release in September of 2018. Legal responses Obama also issued a legislative proposal to Congress to update current laws such as the Racketeer Influenced and Corrupt Organizations Act and introduce new ones to allow federal and national law enforcement officials to better respond to cybercrimes like the Sony hack, and to be able to prosecute such crimes compatibly to similar off-line crimes, while protecting the privacy of Americans. Governmental responses Less than a month following the attack, North Korea reportedly lost its connection to the internet. Although the United States' government did not take credit, President Obama announced that the United States would carry out a “proportional response” in light of the Sony hack. 
Public discussion About reporting on the hack In December 2014, Sony requested that the media stop covering the hack. Sony also threatened legal action if the media did not comply, but according to law professor Eugene Volokh, Sony's legal threats are "unlikely to prevail". Sony then threatened legal action against Twitter if it did not suspend accounts of people who posted the hacked material. American screenwriter Aaron Sorkin wrote an op-ed for The New York Times opining that the media was helping the hackers by publishing and reporting on the leaked information. On December 18, Reddit took the unusual step of banning the subreddit r/SonyGOP that was being used to distribute the hacked files. About pulling The Interview The threats made directly at Sony over The Interview were seen by many as a threat to free speech. The decision to pull the film was criticized by several Hollywood filmmakers, actors, and television hosts, including Ben Stiller, Steve Carell, Rob Lowe, Jimmy Kimmel and Judd Apatow. Some commentators contrasted the situation to the non-controversial release of the 2004 Team America: World Police, a film that mocked the leadership of North Korea's prior leader, Kim Jong-il. The Alamo Drafthouse was poised to replace showings of The Interview with Team America until the film's distributor Paramount Pictures ordered the theaters to stop. In light of the threats made to Sony over The Interview, New Regency cancelled its March 2015 production plans for a film adaptation of the graphic novel Pyongyang: A Journey in North Korea, which was set to star Steve Carell. Hustler announced its intentions to make a pornographic parody film of The Interview. Hustler founder Larry Flynt said, "If Kim Jong-un and his henchmen were upset before, wait till they see the movie we're going to make". Outside the United States In China, the media coverage of the hackings has been limited and outside sources have been censored. A search for "North Korea hack" on Baidu, China's leading search engine returned just one article, which named North Korea as "one of several suspects." However, Google, which was and is inaccessible in China, returned more than 36 million results for the same query. Hua Chunying, a spokeswoman of foreign affairs, "shied away from directly addressing" the Sony hacking situation. See also 2013 South Korea cyberattack 2015–16 SWIFT banking hack North Korea's illicit activities References 2014 controversies in the United States 2014 controversies 2014 crimes in the United States 2014 in computing 2014 in North Korea Attacks in the United States in 2014 Cyberattacks Cyberwarfare in the United States Data breaches in the United States Email hacking Hacking in the 2010s North Korea–United States relations November 2014 crimes November 2014 events in the United States Sony Pictures Entertainment
363734
https://en.wikipedia.org/wiki/Adobe%20Creative%20Suite
Adobe Creative Suite
Adobe Creative Suite (CS) is a discontinued software suite of graphic design, video editing, and web development applications developed by Adobe Systems. Each edition consisted of several Adobe applications, such as Photoshop, Acrobat, Premiere Pro or After Effects, InDesign, and Illustrator, which became industry standard applications for many graphic design positions. The last of the Creative Suite versions, Adobe Creative Suite 6 (CS6), was launched at a release event on April 23, 2012, and released on May 7, 2012. CS6 was the last of the Adobe design tools to be physically shipped as boxed software as future releases and updates would be delivered via download only. On May 6, 2013, Adobe announced that CS6 would be the last version of the Creative Suite, and that future versions of their creative software would only be available via their Adobe Creative Cloud subscription model. Adobe also announced that it would continue to support CS6 and would provide bug fixes and security updates through the next major upgrades of both Mac and Windows operating systems (as of 2013). The Creative Suite packages were pulled from Adobe's online store in 2013, but were still available on their website until January 2017. Applications The following table shows the different details of the core applications in the various Adobe Creative Suite editions. Each edition may come with all these apps included or only a subset. Editions Adobe sold Creative Suite applications in several different combinations called "editions", these included: Adobe Creative Suite 6 Design Standard is an edition of the Adobe Creative Suite 6 family of products intended for professional print, web, interactive and mobile designers. Adobe Creative Suite 6 Design & Web Premium is an edition of the Adobe Creative Suite 6 family of products intended for professional web designers and developers. Adobe Creative Suite 6 Production Premium is an edition of the Adobe Creative Suite 6 family of products intended for professional rich media and video post-production experts who create projects for film, video, broadcast, web, DVD, Blu-ray Disc, and mobile devices. Adobe Creative Suite 6 Master Collection contains applications from all of the above editions. Adobe Flash Catalyst, Adobe Contribute, Adobe OnLocation, and Adobe Device Central, previously available in CS5.5, have been dropped from the CS6 line-up. Adobe Prelude and Adobe Encore are not released as standalone products. Adobe Encore is available as part of Adobe Premiere Pro. Adobe InCopy, a word processing application that integrates with Adobe InDesign, is also part of the Creative Suite family, but is not included in any CS6 edition. In March 2013, it was reported that Adobe would no longer sell boxed copies of the Creative Suite software, instead offering digital downloads and monthly subscriptions. History Creative Suite 1 and 2 The first version of Adobe Creative Suite was released in September 2003 and Creative Suite 2 in April 2005. The first two versions (CS and CS2) were available in two editions. 
The Standard Edition included: Adobe Bridge (since CS2) Adobe Illustrator Adobe InCopy Adobe InDesign Adobe Photoshop Adobe Premiere Pro (since CS2) Adobe ImageReady Adobe Version Cue Design guide and training resources Adobe Stock Photos The Premium Edition also included: Adobe Acrobat Professional (Version 8 in CS2.3) Adobe Dreamweaver (since CS2.3) Adobe GoLive Creative Suite helped InDesign become the dominant publishing software, replacing QuarkXPress, because customers who purchased the suite for Photoshop and Illustrator received InDesign at no additional cost. Adobe shut down the "activation" servers for CS2 in December 2012, making it impossible for licensed users to reinstall the software if needed. In response to complaints, Adobe then made available for download a version of CS2 that did not require online activation, and published a serial number to activate it offline. Because there was no mechanism to prevent people who had never purchased a CS2 license from downloading and activating it, it was widely thought that the aging software had become either freeware or abandonware, despite Adobe's later explanation that it was intended only for people who had "legitimately purchased CS2". The later shutdown of the CS3 and CS4 activation servers was handled differently, with registered users given the opportunity to get individual serial numbers for offline activation, rather than a published one. Creative Suite Production Studio Adobe Creative Suite Production Studio (previously Adobe Video Collection) was a suite of programs for acquiring, editing, and distributing digital video and audio that was released during the same timeframe as Adobe Creative Suite 2. The suite was available in standard and premium editions. The Adobe Production Studio Premium edition consisted of: Adobe After Effects Professional Adobe Audition Adobe Bridge Adobe Encore DVD Adobe Premiere Pro Adobe Photoshop Adobe Illustrator Adobe Dynamic Link (Not sold separately) The Standard edition consisted of: Adobe After Effects Standard Adobe Bridge Adobe Premiere Pro Adobe Photoshop Since CS3, Adobe Production Studio has been part of the Creative Suite family. The equivalent version for Production Studio Premium is the Adobe Creative Suite Production Premium. Macromedia Studio Macromedia Studio was a suite of programs for web content creation designed and distributed by Macromedia. After Adobe's 2005 acquisition of Macromedia, Macromedia Studio 8 was replaced, modified, and integrated into two editions of the Adobe Creative Suite family of software from version 2.3 onwards. The closest relatives of Macromedia Studio 8 are now called Adobe Creative Suite Web Premium. Core applications from Macromedia Studio have been merged with Adobe Creative Suite since CS3, including Flash, Dreamweaver, and Fireworks. Some Macromedia applications were absorbed into existing Adobe products, e.g. FreeHand has been replaced with Adobe Illustrator. Director and ColdFusion are not part of Adobe Creative Suite and will only be available as standalone products. The versions of Macromedia Studio released include: Macromedia Studio MX Released May 29, 2002, internally it was version 6 and the first incarnation of the studio to use the "MX" suffix, which for marketing purposes was a shorthand abbreviation that meant "Maximize". Studio MX included Dreamweaver, Flash, FreeHand, Fireworks and a developer edition of ColdFusion. Macromedia Studio MX Plus Released February 10, 2003, sometimes referred to as MX 1.1. 
MX Plus was a special edition release of MX that included Freehand MX (replacing Freehand 10), Contribute and DevNet Resource Kit Special Edition in addition to the existing MX suite of products. Macromedia Studio MX 2004 Released September 10, 2003, despite its name, it is internally version 7. Studio MX 2004 included FreeHand along with updated versions of Dreamweaver, Flash and Fireworks. An alternate version of Studio MX 2004 included Flash Professional and a new interface for Dreamweaver. Macromedia Studio 8 Released September 13, 2005, Studio 8 was the last version of Macromedia Studio. It comprised Dreamweaver 8, Flash 8, Flash 8 Video Converter, Fireworks 8, Contribute 3 and FlashPaper. Creative Suite 3 Adobe Creative Suite 3 (CS3) was announced on March 27, 2007; it introduced universal binaries for all major programs for the Apple Macintosh, as well as including all of the core applications from Macromedia Studio and Production Studio. Some Creative Suite programs also began using the Presto layout engine used in the Opera web browser. Adobe began selling CS3 applications in six different combinations called "editions." Design Standard & Premium and Web Standard & Premium began shipping on April 16, 2007, and Production Premium and Master Collection editions began shipping on July 2, 2007. The latest released CS3 version was version 3.3, released on June 2, 2008. In this version Fireworks CS3 was included in Design Premium and all editions that had included Acrobat 8 Pro had it replaced with Acrobat 9 Pro. Below is a matrix of the applications included in each edition of CS3 version 3.3: CS3 included several programs, including Dreamweaver, Flash Professional, and Fireworks that were developed by Macromedia, a former rival acquired by Adobe in 2005. It also included Adobe OnLocation and Adobe Ultra that were developed by Serious Magic, also a firm acquired by Adobe in 2006. Adobe dropped the following programs (that were previously included in CS2) from the CS3 software bundles: Adobe GoLive (replaced by Adobe Dreamweaver) Adobe ImageReady (merged into Adobe Photoshop and replaced by Adobe Fireworks) Adobe Audition (replaced by Adobe Soundbooth) Adobe had announced that it would continue to develop Audition as a standalone product, while GoLive had been discontinued. Adobe GoLive 9 was released as a standalone product on June 10, 2007. Adobe Audition 3 was announced as a standalone product on September 6, 2007. Adobe had discontinued ImageReady and had replaced it with Fireworks, with some of ImageReady's features integrated into Photoshop. Audition became part of the Creative Suite again in CS5.5 when Soundbooth was discontinued. Creative Suite 4 Adobe Creative Suite 4 (CS4) was announced on September 23, 2008, and officially released on October 15, 2008. All applications in CS4 featured the same user interface, with a new tabbed interface for working with concurrently running Adobe CS4 programs where multiple documents can be opened inside multiple tabs contained in a single window. Adobe CS4 was also developed to perform better under 64-bit and multi-core processors. On MS Windows, Adobe Photoshop CS4 ran natively as a 64-bit application. Although they were not natively 64-bit applications, Adobe After Effects CS4 and Adobe Premiere Pro CS4 had been optimized for 64-bit computers. However, there were no 64-bit versions of CS4 available for Mac OS X. 
Additionally, CS4 was the last version of Adobe Creative Suite installable on the PowerPC architecture on Mac OS X, although not all applications in the suite are available for PowerPC. The unavailable products on PowerPC include the featured applications within the Production Premium collection (Soundbooth, Encore, After Effects, Premiere, and OnLocation). In early testing of 64-bit support in Adobe Photoshop CS4, overall performance gains ranged from 8% to 12%, due to the fact that 64-bit applications could address larger amounts of memory and thus resulted in less file swapping — one of the biggest factors that can affect data processing speed. Two programs were dropped from the CS4 line-up: Adobe Ultra, a vector keying application which utilizes image analysis technology to produce high quality chroma key effects in less than ideal lighting environments and provides keying of a subject into a virtual 3D environment through virtual set technology, and Adobe Stock Photos. Below is a matrix of the applications that were bundled in each of the software suites for CS4: Creative Suite 5 Adobe Creative Suite 5 (CS5) was released on April 30, 2010. From CS5 onwards, Windows versions of Adobe Premiere Pro CS5 and Adobe After Effects CS5 were 64-bit only and required at least Windows Vista 64-bit or a later 64-bit Windows version. Windows XP Professional x64 Edition was no longer supported. The Mac versions of the CS5 programs were rewritten using macOS's Cocoa APIs in an effort to modernize the codebase. These new Mac versions dropped support for PowerPC-based Macs and were 64-bit Intel-only. Adobe Version Cue, an application that enabled users to track and manipulate file metadata and automate the process of collaboratively reviewing documents among groups of people, and the Adobe Creative Suite Web Standard edition, previously available in CS4, were dropped from the CS5 line-up. Below is a matrix of the applications that were bundled in each of the software suites for CS5: Creative Suite 5.5 Following the release of CS5 in April 2010, Adobe changed its release strategy to an every other year release of major number installments. CS5.5 was presented on April 12, 2011, as an in-between program until CS6. The update helped developers optimize websites for a variety of tablets, smart phones, and other devices. At the same time, Adobe announced a subscription-based pay service as an alternative to full purchase. On July 1, 2011, Adobe Systems announced its Switcher Program, which will allow people who had purchased any version of Apple's Final Cut Pro (or Avid Media Composer) to receive a 50 percent discount on Creative Suite CS5.5 Production Premium or Premiere Pro CS5.5. Not all products were upgraded to CS5.5 in this release; applications that were upgraded to CS5.5 included Adobe InDesign, Adobe Flash Catalyst, Adobe Flash Professional, Adobe Dreamweaver, Adobe Premiere Pro, Adobe After Effects, and Adobe Device Central. Adobe Audition also replaced Adobe Soundbooth in CS5.5, Adobe Story was first offered as an AIR-powered screenwriting and preproduction application, and Adobe Acrobat X Pro replaced Acrobat 9.3 Pro. Below is a matrix of the applications that were bundled in each of the software suites for CS5.5: Creative Suite 6 During an Adobe conference call on June 21, 2011, CEO Shantanu Narayen said that the April 2011 launch of CS5.5 was "the first release in our transition to an annual release cycle", adding, "We intend to ship the next milestone release of Creative Suite in 2012." 
On March 21, 2012, Adobe released a freely available beta version of Adobe Photoshop CS6. The final version of Adobe CS6 was launched at a release event on April 23, 2012, and first shipped on May 7, 2012. Adobe also launched a subscription-based offering named Adobe Creative Cloud, through which users could gain access to individual applications or to the full Adobe Creative Suite 6 on a per-month basis, plus additional cloud storage and services. The native 64-bit Windows applications available in Creative Suite 6 were Photoshop, Illustrator, After Effects (64-bit only), Premiere Pro (64-bit only), Encore (64-bit only), SpeedGrade (64-bit only) and Bridge.

Discontinuation
On May 5, 2013, during the opening keynote of its Adobe MAX conference, Adobe announced that it was retiring the "Creative Suite" branding in favor of "Creative Cloud", and making all future feature updates to its software (now appended with "CC" instead of "CS", e.g. Photoshop CC) available via the Creative Cloud subscription service rather than through the purchase of perpetual licenses. Customers must pay a subscription fee, and if they stop paying they lose access to the proprietary file formats, which are not backward-compatible with the Creative Suite (Adobe acknowledged that this is a valid concern). Individual subscribers must have an Internet connection to download the software and to use the 2 GB of provided storage space (or the additionally purchased 20 GB), and must validate the license monthly.

Adobe's decision to make the subscription service the only sales route for its creative software was met with strong criticism (see Creative Cloud controversy). Several online articles began suggesting replacements for Photoshop, Illustrator and other programs, pointing to free software such as GIMP and Inkscape and to competing products such as Affinity Designer, CorelDRAW, PaintShop Pro and Pixelmator as direct alternatives.

In addition to many of the products formerly part of the Creative Suite (one product, Fireworks, was announced as having reached the end of its development cycle), Creative Cloud also offers subscription-exclusive products such as Adobe Muse and the Adobe Edge family, Web-based file and website hosting, Typekit fonts, and access to the Behance social media platform. The new CC versions of the applications, and the full launch of the updated Creative Cloud service, were announced for June 17, 2013. New versions with major feature updates have been released regularly, with a refresh of the file formats occurring in October 2014. Adobe also said it would continue to offer bug fixes for the CS6 products so that they would continue to run on the next versions of Microsoft Windows and Apple OS X; however, it said no updates were planned to enable CS6 to run on macOS Catalina.
Computer Othello
Computer Othello refers to computer hardware and software capable of playing the game of Othello.

Availability
There are many Othello programs, such as NTest, Saio, Edax, Cassio, Pointy Stone, Herakles, WZebra and Logistello, that can be downloaded from the Internet for free. Running on any up-to-date computer, these programs can play games in which the best human players are easily defeated. This is because although the consequences of moves are predictable for both computers and humans, computers are better at envisaging them.

Search techniques
Computer Othello programs search among the possible legal moves using a game tree. In theory, they examine all positions (nodes), where each move by one player is called a "ply". The search continues until a maximum search depth is reached or the program determines that a final "leaf" position has been reached. A naive implementation of this approach, known as minimax or negamax, can only search to a small depth in a practical amount of time, so various methods have been devised to greatly increase the speed of the search for good moves. These are based on alpha-beta pruning, NegaScout, MTD(f) and NegaC*. The alpha-beta algorithm speeds up the minimax search by pruning branches that cannot affect the result, taking advantage of the fact that the levels of the tree alternate between maximizing and minimizing. Several heuristics are also used to reduce the size of the searched tree: good move ordering, transposition tables and selective search.

To speed up the search on machines with multiple processors or cores, a parallel search may be implemented. Several parallel-search schemes, such as ABDADA and APHID, have been tried with Othello; in recent programs, the Young Brothers Wait Concept (YBWC) seems to be the preferred approach.

Multi-ProbCut
Multi-ProbCut is a heuristic used in alpha-beta pruning of the search tree. The ProbCut heuristic estimates evaluation scores at deeper levels of the search tree using a linear regression between deeper and shallower scores; Multi-ProbCut extends this approach to multiple levels of the search tree. The linear regression itself is learned from previous tree searches, making the heuristic a kind of dynamic search control. It is particularly useful in games such as Othello, where there is a strong correlation between evaluation scores at deeper and shallower levels.

Evaluation techniques
There are three main paradigms for creating evaluation functions.

Disk-square tables
Different squares have different values: corners are good and the squares next to corners are bad. Disregarding symmetries, there are 10 different positions on the board, and each of these is given a value for each of the three possibilities: black disk, white disk and empty. A more sophisticated approach is to have different values for each position during the different stages of the game; for example, corners are more important in the opening and early midgame than in the endgame.

Mobility-based
Most human players strive to maximize mobility (the number of moves available) and minimize frontier disks (disks adjacent to empty squares). Player and opponent mobility are calculated, as are player and opponent potential mobility. These measures can be found very quickly, and they significantly increase playing strength. Most programs also have knowledge of edge and corner configurations and try to minimize the number of their own disks during the early midgame, another strategy used by human players.
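The disk-square and mobility ideas above can be combined into a single static evaluation. The following is a minimal sketch in Python; the Board interface (disc, legal_moves) and all weights are assumptions made for illustration, not values taken from any particular engine.

```python
# Minimal static evaluation sketch: disk-square table plus mobility.
# `Board.disc(row, col)` and `Board.legal_moves(player)` are assumed,
# hypothetical interfaces; the weights are illustrative, not tuned values.

# One 4x4 quadrant of an 8x8 weight table (corners high, squares next
# to corners low), mirrored for symmetry.
QUADRANT = [
    [100, -20, 10,  5],
    [-20, -50, -2, -2],
    [ 10,  -2,  1,  1],
    [  5,  -2,  1,  0],
]

def square_weight(row, col):
    """Mirror the quadrant so the table covers the full 8x8 board."""
    r = row if row < 4 else 7 - row
    c = col if col < 4 else 7 - col
    return QUADRANT[r][c]

def evaluate(board, player, opponent):
    """Positive scores favour `player`; combines positional weights and mobility."""
    positional = 0
    for row in range(8):
        for col in range(8):
            owner = board.disc(row, col)   # assumed to return player, opponent, or None
            if owner == player:
                positional += square_weight(row, col)
            elif owner == opponent:
                positional -= square_weight(row, col)

    my_moves = len(board.legal_moves(player))
    opp_moves = len(board.legal_moves(opponent))
    mobility = 0
    if my_moves + opp_moves > 0:
        mobility = 100 * (my_moves - opp_moves) // (my_moves + opp_moves)

    # The relative weighting of the two terms is itself a tunable assumption.
    return positional + 2 * mobility
```

Strong programs replace hand-tuned tables like this with pattern coefficients fitted from large game databases, as described in the pattern-based approach below.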
Pattern-based / pattern coefficients
Mobility maximization and frontier minimization can be broken down into local configurations that can be added together; the usual implementation evaluates each row, column, diagonal and corner configuration separately and adds the values together, which means many different patterns have to be evaluated. The values for all configurations are determined by taking a large database of games played between strong players and calculating statistics for each configuration in each game stage across all the games. The most common choice is to predict the final disc difference using a weighted disk-difference measure in which the winning side gets a bonus corresponding to the number of disks.

Opening book
Opening books aid computer programs by giving common openings that are considered good ways to counter poor openings. All strong programs use opening books and update their books automatically after each game. To go through all positions from all games in the game database and determine the best move not played in any database game, transposition tables are used to record positions that have already been searched, so those positions do not need to be searched again. This is time-consuming, as a deep search must be performed for each position, but once it is done, updating the book is easy: after each game played, all new positions are searched for the best deviation.

Other optimizations
Faster hardware and additional processors can improve the abilities of Othello-playing programs, for example by allowing deeper ply searches.

Solving Othello
During gameplay, players alternate moves. The human player uses black counters while the computer uses white, and the human player starts the game. Othello is strongly solved on 4×4 and 6×6 boards, with the second player (white) winning under perfect play. It remains unsolved on a standard 8×8 board, but computer analysis gives thousands of draw lines, or paths to a draw, although no such line has been fully proven.

Othello 4 × 4
Othello 4×4 has a very small game tree and has been solved in less than one second by many simple Othello programs using the minimax method, which generates all possible positions (nearly 10 million). The result is that white wins with a +8 margin (3-11).

Othello 6 × 6
Othello 6×6 has been solved in less than 100 hours by many simple Othello programs using the minimax method, which generates all possible positions (nearly 3.6 trillion). The result is that white wins with a +4 margin (16-20).

Othello 8 × 8
The Othello 8×8 game tree size is estimated at 10^54 nodes, and the number of legal positions is estimated at less than 10^28. The game remains unsolved. A solution could possibly be found using intensive computation with top programs on fast parallel hardware or through distributed computation. Some top programs have expanded their opening books for many years, and as a result many lines are, in practice, draws or wins for either side. Of the three main openings (diagonal, perpendicular and parallel), both the diagonal and perpendicular openings appear to lead to drawing lines, while the parallel opening appears to be a win for black. The drawing tree also seems bigger after the diagonal opening than after the perpendicular opening. The parallel opening gives the black player strong advantages, enabling black to always win with perfect play.
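As a concrete illustration of the minimax/negamax search described earlier, and of why the 4×4 and 6×6 boards are tractable, here is a minimal solver sketch in Python. The Board interface (legal_moves, play, disc_difference) is a hypothetical assumption; the function returns the exact disc difference for the side to move under perfect play, handles the pass rule, and is only practical for tiny boards without the pruning, transposition tables and evaluation functions discussed above.

```python
# Exhaustive negamax sketch with alpha-beta pruning for solving tiny Othello
# boards (e.g. 4x4). `Board.legal_moves`, `Board.play` and
# `Board.disc_difference` are assumed, hypothetical interfaces.

def negamax(board, player, opponent, alpha=-10_000, beta=10_000):
    """Return the exact disc difference for `player` under perfect play."""
    moves = board.legal_moves(player)
    if not moves:
        if not board.legal_moves(opponent):
            # Neither side can move: the game is over, score the final position.
            return board.disc_difference(player)
        # Pass: the opponent moves again; negate the score back to `player`.
        return -negamax(board, opponent, player, -beta, -alpha)

    best = -10_000
    for move in moves:
        child = board.play(move, player)  # assumed to return the new position
        score = -negamax(child, opponent, player, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                 # alpha-beta cutoff
            break
    return best
```

Called from the initial 4×4 position with black to move, a full solver built along these lines should reproduce a value of -8 for black, matching the 3-11 white win reported above.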
Milestones in computer Othello
1977: Scientific American made the earliest known published reference to an Othello/Reversi program, written by N. J. D. Jacobs in BCPL. BYTE published "Othello, a New Ancient Game" as a BASIC type-in program in October.
1977: Creative Computing published a version of Othello written by Ed Wright in FORTRAN.
1978: Nintendo released the video game Computer Othello in arcades.
1980: The Othello program The Moor (written by Mike Reeve and David Levy) won one game in a six-game match against world champion Hiroshi Inoue. Peter W. Frey of Northwestern University discussed computer and human Othello strategies in BYTE, and discussed his TRS-80 Othello game which, Frey claimed, easily defeated Wright's version running on a CDC 6600. Paul Rosenbloom of Carnegie Mellon University developed IAGO, which finished in third place at a Northwestern University computer tournament. When IAGO played The Moor, IAGO was better at capturing pieces permanently and at limiting its opponent's mobility.
1981: IAGO, running on a DEC KA10, finished ahead of 19 other contestants at the Santa Cruz Open Othello Tournament at the University of California, Santa Cruz, with the only undefeated record. Charles Heath's TRS-80-based game finished in second place. Microcomputer CPU-based engines won the second through seventh places, ahead of several mainframes and minicomputers; Frey speculated that this was because computer Othello does not benefit from several of the advantages of larger computers, such as faster floating-point arithmetic.
Late 1980s: Kai-Fu Lee and Sanjoy Mahajan created the Othello program BILL, which was similar to IAGO but incorporated Bayesian learning. BILL reliably beat IAGO.
1992: Michael Buro began work on the Othello program Logistello. Logistello's search techniques, evaluation function and knowledge base of patterns were better than those in earlier programs, and it perfected its game by playing over 100,000 games against itself.
1997: Logistello won every game in a six-game match against world champion Takeshi Murakami. Though there had not been much doubt that Othello programs were stronger than humans, it had been 17 years since the last match between a computer and a reigning world champion. After the 1997 match there was no longer any doubt: Logistello was significantly better than any human player.
1998: Michael Buro retired Logistello. Research interest in Othello waned somewhat, but some programs, including Ntest, Saio, Edax, Cassio, Zebra and Herakles, continued to be developed.
2004: Ntest became the strongest program, significantly stronger than Logistello.
2005: Ntest, Saio, Edax, Cyrano and Zebra became significantly stronger than Logistello. Ntest and Zebra were retired.
2011: Saio, Edax and Cyrano became much faster than Logistello and other programs.

List of top Othello/Reversi programs
NTest, by Chris Welty
Edax, by Richard Delorme
Logistello, by Michael Buro

See also
Computer Go
Computer shogi
Computer chess
Computer Olympiad
Reversi