https://en.wikipedia.org/wiki/Mike%20Salmon%20%28American%20football%29
Mike Salmon (American football)
Michael William Salmon (born December 27, 1970 in Long Beach, California) is a former American football defensive back in the National Football League.

Football career

Salmon was named both the 5A Arizona high school football Player of the Year and the Arizona Athlete of the Year in 1989 by the Arizona Republic. He was recruited out of Greenway High School in three sports: football, baseball, and basketball. In 1988 Salmon was selected by USA Today as the National High School Player of the Week after helping Greenway upset the #1-ranked Brophy Broncos on television, a game in which Salmon caught a touchdown pass, threw for two touchdowns, intercepted two passes, kicked two field goals, and made four extra points, all in the first half. Salmon later signed with the University of Southern California. At USC, Salmon was a team captain and started all four seasons, playing free safety, strong safety, cornerback, and outside linebacker, and also serving as kicker, punt returner, and holder. Salmon played both football and baseball for the USC Trojans, batting .280 as a junior on the baseball team. Like his older brother Tim, Salmon had the chance to play professional baseball, but he turned down an $80,000 offer from the Philadelphia Phillies to accept USC's scholarship offer. Against Washington State in 1993, Salmon beat out future NFL kicker Cole Ford: he entered the game, made four field goals from greater than 38 yards, and was named Pac-10 Player of the Week. "Here's a guy who hadn't kicked since high school, and he couldn't wait to get in there and kick," coach John Robinson said. "If it'd been me, I'd have been scared to death." Salmon also made the game-winning kick that year at Oregon as time expired in front of a sold-out stadium. Salmon made the Dean's List twice at USC and had the top GPA on the football team in 1992. "He may be more important to our defense than (quarterback) Rob Johnson is to our offense," Salmon's defensive coordinator at USC, Don Lindsey, said. "We have an outstanding backup quarterback in Kyle Wachholz, so if we lost Rob it would hurt us, but it wouldn't be a killer. But if we lost Mike, the defense would lose a whole lot." On Senior Day in 1993, against the UCLA Bruins, as Salmon entered the Coliseum for the last time, USC coach John Robinson introduced him as "one of the most competitive guys I have ever had". Salmon played in the NFL for the San Francisco 49ers (1994-1997) and the Buffalo Bills (1996). In San Francisco he backed up the two NFC Pro Bowl safeties Merton Hanks and Tim McDonald; in Buffalo he backed up AFC Pro Bowl safeties Henry Jones and Kurt Schulz. In 1997, he suffered a career-ending cartilage tear in his knee. Salmon played two seasons (1995-1996) with the Rhein Fire in Düsseldorf, Germany, in the World League of American Football (later NFL Europe). He had four interceptions in two years with the Rhein Fire, was named to the All-World League first team at free safety, and served as team captain.

Personal

Mike Salmon is the younger brother of former Major League Baseball right fielder Tim Salmon, who was named the American League's Rookie of the Year in 1993 and won a World Series championship in 2002 with the Los Angeles Angels. His cousin is Academy Award-winning actress Holly Hunter. In 2012, Mike Salmon had his football jersey retired at Greenway High School in Phoenix, Arizona; he is the first and only football player in school history to have his number retired. His brother Tim is the only baseball player in the school's history to have his baseball number retired.
He lives in Newport Beach, California with his wife, Nina, and three children. Salmon serves as vice chairman of the board of directors of the Lott Trophy, an annual college football award given to a defensive player who exhibits excellence on and off the field. Salmon received two bachelor's degrees from the University of Southern California, in Urban and Regional Planning and in Communications; he was a member of the Phi Delta Theta fraternity and was named to the All-Phi Delta Theta football team in all four years he played.

References

1970 births Living people American football defensive backs Buffalo Bills players Players of American football from Long Beach, California Rhein Fire players San Francisco 49ers players USC Trojans football players
https://en.wikipedia.org/wiki/Jira%20%28software%29
Jira (software)
Jira is a proprietary issue tracking product developed by Atlassian that allows bug tracking and agile project management. It originated as a platform for managing software development projects but has subsequently evolved into a tool for a broader range of applications.

Naming

The product name is a truncation of Gojira, the Japanese word for Godzilla. The name originated from a nickname Atlassian developers used for Bugzilla, which the company had previously used internally for bug tracking.

Description

According to Atlassian, Jira is used for issue tracking and project management by over 180,000 customers in 190 countries. Organizations that have used Jira at some point for bug tracking and project management include Fedora Commons, Hibernate, and the Apache Software Foundation, which uses both Jira and Bugzilla. Jira includes tools allowing migration from competitor Bugzilla.

Jira is offered in four packages:
Jira Work Management is intended for generic project management.
Jira Software includes the base software, including agile project management features (previously a separate product: Jira Agile).
Jira Service Management is intended for use by IT operations or business service desks.
Jira Align is intended for strategic product and portfolio management.

Jira is written in Java and uses the Pico inversion-of-control container, the Apache OFBiz entity engine, and the WebWork 1 technology stack. For remote procedure calls (RPC), Jira has REST, SOAP, and XML-RPC interfaces. Jira integrates with source control programs such as ClearCase, Concurrent Versions System (CVS), Git, Mercurial, Perforce, Subversion, and Team Foundation Server. It ships with various translations, including English, French, German, Japanese, and Spanish. Jira implements the Networked Help Desk API for sharing customer support tickets with other issue tracking systems.

License

Jira is a commercial software product that can be licensed for running on-premises or used as a hosted application. Atlassian provides Jira for free to open source projects meeting certain criteria, and to organizations that are non-academic, non-commercial, non-governmental, non-political, non-profit, and secular. For academic and commercial customers, the full source code is available under a developer source license.

Security

In April 2010, a cross-site scripting vulnerability in Jira led to the compromise of two Apache Software Foundation servers. The Jira password database was also compromised. The database contained unsalted password hashes, which are vulnerable to rainbow tables, dictionary lookups, and cracking tools; Apache advised users to change their passwords. Atlassian itself was also targeted as part of the same attack and admitted that a legacy database with passwords stored in plain text had been compromised.

Evolution

When launched in 2002, Jira was purely issue tracking software, targeted at software developers. The app was later adopted by non-IT organizations as a project management tool. This process accelerated after the launch of Atlassian Marketplace in 2012, which allowed third-party developers to offer project management plugins for Jira. BigPicture, Portfolio for Jira, Structure, Tempo Planner, and ActivityTimeline are major project management plugins for Jira.
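The REST interface mentioned above is the usual way to drive Jira programmatically. The following Python sketch fetches a single issue as JSON over REST; the instance URL, credentials, and issue key are hypothetical placeholders, not values from this article.

import requests

JIRA_BASE = "https://jira.example.com"  # hypothetical Jira instance
AUTH = ("alice", "api-token")           # hypothetical credentials

def get_issue(key):
    # GET /rest/api/2/issue/{key} returns the issue, including its
    # fields, as a JSON document.
    resp = requests.get(f"{JIRA_BASE}/rest/api/2/issue/{key}", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

issue = get_issue("PROJ-123")           # hypothetical issue key
print(issue["fields"]["summary"], issue["fields"]["status"]["name"])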
See also

Comparison of issue-tracking systems

References

External links

2002 software Atlassian products Bug and issue tracking software Java (programming language) software Project management software Task management software
https://en.wikipedia.org/wiki/OpenSolaris
OpenSolaris
OpenSolaris is a discontinued open-source computer operating system based on Solaris and created by Sun Microsystems. It was also (confusingly) the name of a project initiated by Sun to build a developer and user community around the eponymous operating system software. OpenSolaris is a descendant of the UNIX System V Release 4 (SVR4) code base developed by Sun and AT&T in the late 1980s and is the only version of the System V variant of UNIX available as open source. OpenSolaris was developed as a combination of several software consolidations that were open sourced starting with Solaris 10. It includes a variety of free software, including popular desktop and server software. After Oracle's acquisition of Sun Microsystems in 2010, Oracle discontinued in-house development of OpenSolaris, pivoting to focus exclusively on the development of the proprietary Solaris Express (now Oracle Solaris). Before Oracle closed the Solaris source, a group of former OpenSolaris developers began efforts to fork the core software under the name OpenIndiana. The illumos Foundation, founded in the wake of the discontinuation of OpenSolaris, continues to develop and maintain the kernel and userland (together renamed "illumos"), while the OpenIndiana Project (now under the auspices of the illumos Foundation) continues to maintain and develop the illumos-based OpenIndiana distribution (including its installer and build system) as the direct descendant of OpenSolaris. Since then, additional illumos distributions, both commercial and non-commercial, have appeared and are under active development, combining the illumos kernel and userland with custom installers, packaging and build systems, and other distribution-specific utilities and tooling.

History

OpenSolaris was based on Solaris, which was originally released by Sun in 1991. Solaris is a version of UNIX System V Release 4 (SVR4), jointly developed by Sun and AT&T to merge features from several existing Unix systems. It was licensed by Sun from Novell to replace SunOS. Planning for OpenSolaris started in early 2004. A pilot program was formed in September 2004 with 18 non-Sun community members and ran for nine months, growing to 145 external participants. Sun submitted the CDDL (Common Development and Distribution License) to the OSI, which approved it on January 14, 2005. The first part of the Solaris code base to be open sourced was the Solaris Dynamic Tracing facility (commonly known as DTrace), a tool that aids in the analysis, debugging, and tuning of applications and systems. DTrace was released under the CDDL on January 25, 2005, on the newly launched opensolaris.org website. The bulk of the Solaris system code was released on June 14, 2005. Some system code remains closed and is available only as pre-compiled binary files. To direct the fledgling project, a Community Advisory Board was announced on April 4, 2005: two members elected by the pilot community, two Sun employees appointed by Sun, and one appointed by Sun from the broader free software community. The members were Roy Fielding, Al Hopper, Rich Teer, Casper Dik, and Simon Phipps. On February 10, 2006 Sun approved the OpenSolaris Charter, which reestablished this body as the independent OpenSolaris Governing Board.
The task of creating a governance document or "constitution" for this organization was given to the OGB and three invited members: Stephen Hahn and Keith Wesolowski (developers in Sun's Solaris organization) and Ben Rockwood (a prominent OpenSolaris community member). The next-generation Solaris OS under development by Sun to eventually succeed Solaris 10 was codenamed 'Nevada' and was derived from the OpenSolaris codebase; new Nevada code was in turn pulled into OpenSolaris snapshot builds. As one contemporary description put it: "While under Sun Microsystems' control, there were bi-weekly snapshots of Solaris Nevada (the codename for the next-generation Solaris OS to eventually succeed Solaris 10) and this new code was then pulled into new OpenSolaris preview snapshots available at Genunix.org. The stable releases of OpenSolaris are based on these Nevada builds." Initially, Sun's Solaris Express program provided a distribution based on the OpenSolaris code in combination with software found only in Solaris releases. The first independent distribution was released on June 17, 2005, and many others have emerged since.

On March 19, 2007, Sun announced that it had hired Ian Murdock, founder of Debian, to head Project Indiana, an effort to produce a complete OpenSolaris distribution with GNOME and userland tools from GNU, plus a network-based package management system. The new distribution was planned to refresh the user experience and would become the successor to Solaris Express as the basis for future releases of Solaris. On May 5, 2008, OpenSolaris 2008.05 was released in a format that could be booted as a Live CD or installed directly. It uses the GNOME desktop environment as the primary user interface. The later OpenSolaris 2008.11 release included a GUI for ZFS's snapshotting capabilities, known as Time Slider, that provides functionality similar to macOS's Time Machine. In December 2008, Sun Microsystems and Toshiba America Information Systems announced plans to distribute Toshiba laptops pre-installed with OpenSolaris. On April 1, 2009, the Tecra M10 and Portégé R600 came preinstalled with the OpenSolaris 2008.11 release and several supplemental software packages. On June 1, 2009, OpenSolaris 2009.06 was released, with support for the SPARC platform.

On January 6, 2010, it was announced that the Solaris Express program would be closed, while an OpenSolaris binary release was scheduled for March 26, 2010. The OpenSolaris 2010.03 release never appeared. On August 13, 2010, Oracle was rumored to have discontinued the OpenSolaris binary distribution to focus on the Solaris Express binary distribution program: source code would continue to be accepted from the community, and Oracle source code would continue to be released as open source, but only after binary releases. An internal email to this effect was released by an OpenSolaris kernel developer, though unconfirmed by Oracle, and a post confirming the leak appeared on the OpenSolaris Forums the same day. According to the email, upstream contributions would continue through a new Oracle web site, downstream source code publishing would continue, and binary distribution would continue under the old Solaris Express model, but source code would be released only after binary cuts, and binary cuts would become less frequent. On September 14, 2010, OpenIndiana was formally launched at the JISC Centre in London.
While OpenIndiana is a fork in the technical sense, it is a continuation of OpenSolaris in spirit: the project intends to deliver a System V family operating system which is binary-compatible with the Oracle products Solaris 11 and Solaris 11 Express. However, rather than being based around the OS/Net consolidation as OpenSolaris was, OpenIndiana became a distribution based on illumos (although the first release was still based around OS/Net). The project uses the same IPS package management system as OpenSolaris. On November 12, 2010, a final build of OpenSolaris (134b) was published by Oracle to the /release repository to serve as an upgrade path to Solaris 11 Express. Oracle Solaris 11 Express 2010.11, a preview of Solaris 11 and the first release of the post-OpenSolaris distribution from Oracle, was released on November 15, 2010.

Version history

Release model

OpenSolaris was offered as both development (unstable) and production (stable) releases. Development releases were built from the latest OpenSolaris codebase (consolidations) and included newer technologies, security updates and bug fixes, and more applications, but may not have undergone extensive testing. Production releases were branched from a snapshot of the development codebase (following a code freeze) and underwent a QA process that included backporting security updates and bug fixes. OpenSolaris could be installed from CD-ROM, USB drives, or over a network with the Automated Installer. CD, USB, and network install images were made available for both types of releases.

Repositories

OpenSolaris used a network-aware package management system called the Image Packaging System (also known as pkg(5)) to add, remove, and manage installed software and to update to newer releases. Packages for development releases of OpenSolaris were published by Oracle typically every two weeks to the /dev repository. Production releases used the /release repository, which did not receive updates until the next production release; paid support contracts from Sun gave access to security updates and bug fixes through the /support repository on pkg.sun.com.

Documentation

A hardware compatibility list (HCL) for OpenSolaris can be consulted when choosing hardware for OpenSolaris deployment. Extensive OpenSolaris administration, usage, and development documentation is available online, including community-contributed information.

License

Sun released most of the Solaris source code under the Common Development and Distribution License (CDDL), which is based on the Mozilla Public License (MPL) version 1.1. The CDDL was approved as an open source license by the Open Source Initiative (OSI) in January 2005. Files licensed under the CDDL can be combined with files licensed under other licenses, whether open source or proprietary. During Sun's announcement of Java's release under the GNU General Public License (GPL), Jonathan Schwartz and Rich Green both hinted at the possibility of releasing Solaris under the GPL, with Green saying he was "certainly not" averse to relicensing under the GPL. When Schwartz pressed him (jokingly), Green said Sun would "take a very close look at it." In January 2007, eWeek reported that anonymous sources at Sun had told them OpenSolaris would be dual-licensed under CDDL and GPLv3.
Green responded in his blog the next day that the article was incorrect, saying that although Sun was giving "very serious consideration" to such a dual-licensing arrangement, it would be subject to agreement by the rest of the OpenSolaris community.

Conferences

The first annual OpenSolaris Developer Conference (abbreviated as OSDevCon) was organized by the German Unix User Group (GUUG) and took place from February 27 to March 2, 2007 at the Freie Universität Berlin in Germany. The 2008 OSDevCon was a joint effort of the GUUG and the Czech OpenSolaris User Group (CZOSUG) and took place June 25-27, 2008 in Prague, Czech Republic. The 2009 OSDevCon took place October 27-30, 2009, in Dresden, Germany. In 2007, Sun Microsystems organized the first OpenSolaris Developer Summit, which was held on the weekend of October 13, 2007, at the University of California, Santa Cruz in the United States. The 2008 OpenSolaris Developer Summit returned to UCSC on May 2-3, 2008, and took place immediately prior to the launch of Sun's new OpenSolaris distribution on May 5, 2008, at the CommunityOne conference in San Francisco, California. The first OpenSolaris Storage Summit was organized by Sun and held September 21, 2008, preceding the SNIA Storage Developer Conference (SDC), in Santa Clara, California. The second OpenSolaris Storage Summit preceded the USENIX Conference on File and Storage Technologies (FAST) on February 23, 2009, in San Francisco, United States. On November 3, 2009, a Solaris/OpenSolaris Security Summit was held by Sun in the Inner Harbor area of Baltimore, Maryland, preceding the Large Installation System Administration Conference (LISA).

Ports

PowerPC: Project Polaris, an experimental PowerPC port, based on the previous porting effort, Project Pulsar from Sun Labs.
System z (IBM mainframes): Project Sirius, developed by Sine Nomine Associates, named as an analogy to Polaris.
ARM: OpenSolaris on ARM port.
MIPS: OpenSolaris on MIPS port.

Derivatives

Notable derivatives include:
illumos, a fully open source fork of the project, started in 2010 by a community of Sun OpenSolaris engineers with support from Nexenta. Note that OpenSolaris was not 100% open source: some drivers and some libraries were the property of other companies, which Sun (now Oracle) licensed and was not able to release.
OpenIndiana, a project under the illumos umbrella aiming "... to become the de facto OpenSolaris distribution installed on production servers where security and bug fixes are required free of charge."
NexentaStor, optimized for storage workloads, based on Nexenta OS.
OSDyson, the illumos kernel with a GNU userland and packages from Debian, striving to become an official Debian port.
SmartOS, a virtualization-centered derivative from Joyent.

Discontinued:
Nexenta OS (discontinued October 31, 2012), the first distribution based on an Ubuntu userland with a Solaris-derived kernel.

See also

Comparison of OpenSolaris distributions
Comparison of open source operating systems
Image Packaging System
OpenSolaris Network Virtualization and Resource Control
Darwin (operating system)

References

Further reading

External links

OpenSolaris archive and downloads

OpenSolaris 2008 software Discontinued operating systems Formerly proprietary software Operating system distributions bootable from read-only media Software using the CDDL license Sun Microsystems software
https://en.wikipedia.org/wiki/LEAP%20Legal%20Software
LEAP Legal Software
LEAP Legal Software, commonly referred to as LEAP, is a technology company that develops practice management software for the legal profession, including legal accounting, document assembly and management, and legal publishing assets. LEAP provides cloud-based legal practice management software to clients in Australia, Canada, the United States, the United Kingdom, the Republic of Ireland and New Zealand. LEAP is used by more than 61,000 users worldwide and is developed by LEAP Dev. In 2016, LEAP Legal Software generated $50 million in annual revenue, and it commits more than $20 million a year to research and development. LEAP is a privately held company headquartered in Sydney, Australia, with additional offices in Melbourne, Brisbane, Perth and Adelaide. Presently, LEAP also has offices in Toronto, New Jersey, Twickenham, Manchester, Edinburgh, Cardiff, Brighton, Dublin and Kraków.

History

LEAP Legal Software was created in 1992 by Christian Beck, who felt that lawyers' workflow could be improved using technology. In 2008, LEAP acquired LegalPax, a provider of automated forms and precedents to law firms in Queensland. In early 2010, LEAP launched a cloud product called LEAP Expedite. It also purchased BING! Software, a family law precedents business. LEAP Office 10 was launched at the NSW State Legal Conference in August 2010. In 2012, LEAP Expedite was replaced with a rewritten cloud product. LEAP began rolling out the cloud version of its software to law firms in Australia in January 2013. LEAP expanded to the UK and US markets in 2014 and 2015. LEAP 365 was the first software-as-a-service legal application for the Australian market; it was released for the UK and US markets at an event at Yankee Stadium on September 12, 2016. In 2019, LEAP entered into a joint venture with LexisNexis Legal Professional to operate PCLaw | Time Matters. LEAP expanded to Canada in 2020 and to New Zealand in 2022.

Software

LEAP Legal Software's flagship product is LEAP, a legal practice productivity solution. The key features of LEAP include legal practice management, document assembly and management, automated matter types and forms, a client and contact database, file sharing, time recording, billing and trust accounting tools, and legal publishing assets. All LEAP data is stored in AWS (Amazon Web Services) in dedicated facilities around the world. LEAP introduced mobile app integration in mid-2013. In November 2021, LEAP created the LEAP App for iPad.

Integration with Microsoft

LEAP was integrated with Microsoft Office in 2002. In January 2016, LEAP completed an integration with the new Microsoft Office 365 software, allowing complex document assembly to occur in the cloud rather than on the desktop.
Awards

Winner, "Best Software as a Service - USA (SMB)," 2021-2022 International Cloud Computing Awards Program, The Cloud Awards
Winner, "Overall Case Management Platform of the Year," LegalTech Breakthrough Awards 2021
Winner, Established Business of the Year Award and #1 in Law Category, Australian Digital Technology Awards 2021
Australian Financial Review Top Ten Most Innovative Technology Companies, 2020
Australian Deloitte's Technology Fast 50 in 2004, 2008 and 2009

Leadership

Christian Beck, Founder
Richard Hugo-Hamman, Executive Chairman
Mark Burgess, Chief Technology Officer of LEAP Legal Tech

Corporate development

Acquisitions

LEAP Legal Software has acquired several companies and brands:
Law Perfect in 2007
BING! in 2008
LegalPax in 2008
LawWare in 2012
Peapod Legal Software in 2013
Edgebyte in 2014
Perfect Software (Perfect Books) in 2015
Turbolaw in 2020
DivorceMate in 2021

Internal start-up: InfoTrack

Initially a division of LEAP known as LEAP Searching, InfoTrack was launched as a separate company in 2012 and has since been brought under the umbrella company of Australian Technology Investors. It had a reported annual revenue of A$144 million in 2015.

References

Companies based in Sydney Software companies of Australia
https://en.wikipedia.org/wiki/OneFS%20distributed%20file%20system
OneFS distributed file system
The OneFS File System is a parallel distributed networked file system designed by Isilon Systems and is the basis for the Isilon Scale-out Storage Platform. The OneFS file system is controlled and managed by the OneFS Operating System, a FreeBSD variant.

On-disk structure

All data structures in the OneFS file system maintain their own protection information. This means that in the same file system, one file may be protected at +1 (basic parity protection), another at +4 (resilient to four failures), and yet another at 2x (mirroring); this feature is referred to as FlexProtect. FlexProtect is also responsible for automatically rebuilding data in the event of a failure. The protection levels available are based on the number of nodes in the cluster and follow the Reed-Solomon algorithm. Blocks for an individual file are spread across the nodes, which allows entire nodes to fail without losing access to any data. File metadata, directories, snapshot structures, quota structures, and a logical inode mapping structure are all based on mirrored B+ trees. Block addresses are generalized 64-bit pointers that reference (node, drive, blknum) tuples. The native block size is 8192 bytes; inodes are 512 bytes on disk (for disks with 512-byte sectors) or 8 KB (for disks with 4 KB sectors). One distinctive characteristic of OneFS is that metadata is spread throughout the nodes in a homogeneous fashion: there are no dedicated metadata servers. The only piece of metadata replicated on every node is the address list of the root btree blocks of the inode mapping structure. Everything else can be found from that starting point by following the generalized 64-bit pointers.

Clustering

The collection of computer hosts that comprise a OneFS system is referred to as a "cluster", and a computer host that is a member of a OneFS cluster is referred to as a "node". The nodes that comprise a OneFS system must be connected by a high-performance, low-latency back-end network for optimal performance. OneFS 1.0-3.0 used Gigabit Ethernet as that back-end network. Starting with OneFS 3.5, Isilon offered InfiniBand models, and from about 2007 until mid-2018, all nodes sold utilized an InfiniBand back-end. Starting with OneFS 8.1.0 and the Gen6 models, Isilon again offers an Ethernet back-end network (10, 25, 40, or 100 Gigabit). Data, metadata, locking, transaction, group management, allocation, and event traffic are communicated using an RPC mechanism traveling over the back-end network of the OneFS cluster. All data and metadata transfers are zero-copy. All modification operations to on-disk structures are transactional and journaled.

Protocols

OneFS supports accessing stored files using common computer network protocols including NFS, CIFS/SMB, FTP, HTTP, and HDFS. It can utilize non-local authentication such as Active Directory, LDAP, and NIS. It is capable of interfacing with external backup devices and applications that use the NDMP protocol.

OneFS Operating System

The OneFS File System is a proprietary file system that can only be managed and controlled by the FreeBSD-derived OneFS Operating System. zsh is the default login shell of the OneFS Operating System. OneFS presents a specialized command set to administer the OneFS File System. Most of the specialized shell programs start with the letters "isi". Notable exceptions are the Isilon extensions to the FreeBSD ls and chmod programs.
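To make the generalized block addresses described under "On-disk structure" concrete, the Python sketch below packs a (node, drive, blknum) tuple into a single 64-bit value and unpacks it again. The 12/12/40-bit field split is invented for illustration and is not OneFS's actual on-disk layout.

# Hypothetical packing of a (node, drive, blknum) tuple into a 64-bit
# block address; the field widths are assumptions, not the real format.
NODE_BITS, DRIVE_BITS, BLK_BITS = 12, 12, 40

def pack(node, drive, blknum):
    # Place node in the high bits, drive in the middle, blknum in the low bits.
    assert node < (1 << NODE_BITS) and drive < (1 << DRIVE_BITS) and blknum < (1 << BLK_BITS)
    return (node << (DRIVE_BITS + BLK_BITS)) | (drive << BLK_BITS) | blknum

def unpack(addr):
    # Reverse the packing by masking and shifting each field back out.
    blknum = addr & ((1 << BLK_BITS) - 1)
    drive = (addr >> BLK_BITS) & ((1 << DRIVE_BITS) - 1)
    node = addr >> (DRIVE_BITS + BLK_BITS)
    return node, drive, blknum

assert unpack(pack(3, 7, 123456)) == (3, 7, 123456)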
Versions

1.0 "Bell," 2.0 "Jalapeno," 3.0 "Serrano," 3.5 "Tabasco"
4.0 "Poblano," 4.1 "Anaheim," 4.5 "Thai," 4.6 "Ancho"
4.7 "Chiltepin" (4.7.1 to .11)
5.0 "Jamaican" (5.0.0 to .8)
5.5 "Scotch Bonnet" (based on FreeBSD 6.1; 5.5.1 to .2)
5.5.3 - OS updates with rolling reboots of individual nodes
5.5.4 - adds iSCSI
5.5.5 to .7
6.0 "Habanero" - up to 10.4 PB in a single file system (6.0.1 to .4)
6.5 "Chopu" (based on FreeBSD 7.3; 6.5.1 to .5)
7.0 "Mavericks" - released November 2012 (based on FreeBSD 7.4-STABLE; 7.0.1 to .2)
7.1 "Waikiki" - released October 2013
7.1.1 "Jaws" - released July 2014
7.2 "Moby" - released November 2014 (7.2.0, 7.2.1)
8.0 "Riptide" (based on FreeBSD 10) - released February 2016; iSCSI deprecated
8.0.1 "Halfpipe" - released October 2016
8.1 "Freight Trains" - released June 2017
8.1.1 "Niijima" - released January 2018
8.1.2 "Kanagawa" - released August 2018
8.1.3 "Seismic" - released January 2019
8.2.0 "Pipeline" (based on FreeBSD 11) - released May 2019
8.2.1 "Acela" - released September 2019
8.2.2 "Beachcomber" - released January 2020
9.0.0.0 "Cascades" - released June 2020
9.1.0.0 - released October 2020

See also

List of file systems
Clustered file system

References

External links

Distributed file systems
https://en.wikipedia.org/wiki/Internet%20in%20China
Internet in China
China has been connected to the internet intermittently since May 1989 and on a permanent basis since 20 April 1994, although with limited access. In 2008, China became the country with the largest internet-using population and has remained so since. As of July 2016, 730,723,960 people (53.2% of the country's total population) were internet users. China's first foray into global cyberspace was an email (not TCP/IP based and thus technically not internet) sent on 20 September 1987 to Karlsruhe Institute of Technology. It said "Across the Great Wall, we can reach every corner in the world". This later became a well-known phrase in China and was for a time displayed on the desktop login screen for QQ mail.

History

By the end of 2009, the number of Chinese domestic websites grew to 3.23 million, with an annual increase rate of 12.3%, according to the Ministry of Industry and Information Technology. As of the first half of 2010, the majority of web content was user-generated. As of June 2011, Chinese internet users spent an average of 18.7 hours online per week, which would amount to a total of about 472 billion hours in 2011. China had 618 million internet users by the end of December 2013, a 9.5 percent increase over the year before and a penetration rate of 45.8%. By June 2014, there were 632 million internet users in the country and a penetration rate of 46.9%. The number of users accessing the internet on mobile devices overtook those using PCs (83.4% versus 80.9%). China replaced the U.S. in global leadership in terms of installed telecommunication bandwidth in 2011. By 2014, China hosted more than twice as much national bandwidth potential as the U.S., the historical leader in installed telecommunication bandwidth (China: 29% versus US: 13% of the global total). As of March 2017, there were about 700 million Chinese internet users, many of them with a high-speed internet connection. Most users live in urban areas, but at least 178 million reside in rural towns. A majority of broadband subscribers used DSL, mostly from China Telecom and China Netcom. The price varies by province, usually around US$5-50/month for 50M-1000M ADSL/fiber service. By 2013, broadband made up the majority of internet connections in China, with 363.8 million users at this service tier. The price of a broadband connection places it well within the reach of the mainland Chinese middle class. Wireless access, especially through mobile phones, has developed rapidly: 500 million people were accessing the internet via cell phones in 2013. The number of dial-up users peaked in 2004 and has since decreased sharply. Statistics on the number of mobile internet users in China generally show a significant slump in the growth rate between 2008 and 2010, with a small peak in the following two years. In April 2020, the National Development and Reform Commission (NDRC) proposed that "satellite internet" should be a part of new national infrastructure. By the next month, Shanghai, Beijing, Fuzhou, Chongqing, Chengdu, and Shenzhen had each proposed regional action plans to support the new satellite internet constellation project, with a goal of providing domestic satellite internet service to rural areas.
Beginning in 2019, private companies in the US (SpaceX's Starlink) and UK (OneWeb, from 2020) began fielding large internet satellite constellations with global coverage; however, China does not intend to license non-Chinese technical solutions for satellite broadband within the jurisdiction of Chinese law.

Structure

An important characteristic of the Chinese internet is that online access routes are owned by the PRC government, and private enterprises and individuals can only rent bandwidth from the state. The first four major national networks, namely CSTNET, ChinaNet, CERNET and CHINAGBN, are the "backbone" of the mainland Chinese internet. Later, dominant telecom providers also started to provide internet services. In January 2015, China added seven new access points to the world's internet backbone, adding to the three points that connect through Beijing, Shanghai, and Guangzhou. Public internet services are usually provided by provincial telecom companies, and traffic is sometimes exchanged between networks. Internet service providers without a nationwide network could not compete with their bandwidth providers, the telecom companies, and often went out of business. The interconnection between these networks is a major concern for internet users, since traffic crossing them can be quite slow, and major internet service providers are reluctant to aid rivals.

Userbase

The January 2013 China Internet Network Information Center (CNNIC) report states that 56% of internet users were male and 44% were female, and presents other data based on sixty thousand surveys. The majority of Chinese internet users restrict their use of the internet to Chinese websites, as most of the population lacks foreign language skills and access to sites such as Google or Wikipedia. English-language media in China often use the word "netizen" to refer to Chinese internet users.

Content

According to Kaiser Kuo, the internet in China is largely used for entertainment purposes, earning it the nickname "entertainment superhighway". However, it also serves as the first public forum for Chinese citizens to freely exchange their ideas. Most users go online to read news, search for information, and check their email. They also visit BBSes or web forums, find music or videos, or download files.

Content providers

Chinese-language infotainment web portals such as Tencent, Sina.com, Sohu, and 163.com are popular. For example, Sina claimed it had about 94.8 million registered users and more than 10 million active users engaged in its fee-based services. Other internet service providers, such as the human resource service provider 51job and electronic commerce web sites such as Alibaba.com, are less popular but more successful in their specialties. Their success led some of them to hold IPOs. All websites that operate in China with their own domain name must have an ICP license from the Ministry of Industry and Information Technology. Because the PRC government blocks many foreign websites, many homegrown copycats of foreign websites have appeared.

Search engines

Baidu is the leading search engine in China, while most web portals also provide search services, such as Sogou. Bing China has also entered the Chinese market, and Bing.cn also operates Yahoo's China search functions. As of 2015, Google had little to no presence in China. Before 2014, Google users in China were redirected to Google Hong Kong from the google.cn page because of an issue with hackers reportedly based in mainland China.
On 4 June 2014, Google became officially blocked without the use of a virtual private network (VPN), a block still in place in 2021.

Online communities

Although the Chinese write fewer emails, they enjoy other online communication tools. Users form communities based on different interests. Bulletin boards on portals and elsewhere, chat rooms, instant messaging groups, blogs, and microblogs are very active, while photo-sharing and social networking sites are growing rapidly. Some wikis, such as Sogou Baike and Baidu Baike, are "flourishing". Until 2008 the Chinese Wikipedia could not be accessed from mainland China. Since 2008, the government blocks only certain Wikipedia pages which it deems to contain controversial content.

Social media

China is one of the most restrictive countries in the world in terms of internet access, but these constraints have directly contributed to the staggering success of local Chinese social media sites. The Chinese government makes it impossible for foreign companies to enter the Chinese social media network. Without access to the majority of social media platforms used elsewhere in the world, the Chinese have created their own networks, comparable to Facebook, Myspace, YouTube, and Foursquare but with more users, which is why every global company pays attention to these sites. Some famous Chinese social media sites are Sina Weibo, Tencent Weibo, Renren, Pengyou, QQ, and Douban. In recent years, WeChat has become increasingly popular among people in China.

Online shopping

The rapidly increasing number of internet users in China has also generated a large online shopping base in the country. A large number of Chinese internet users have even been described as having an "online shopping addiction" as a result of the growth of the industry. According to Sina.com, Chinese consumers with internet access spend an average of RMB 10,000 online annually.

Online mapping services

China has endeavored to offer a number of online mapping services and allows the dissemination of geographic information within the country. Tencent Maps (腾讯地图), Baidu Maps (百度地圖) and Tianditu (天地圖) are typical examples. Online mapping services can be understood as online cartography backed by a geographic information system (GIS). GIS was originally a tool for cartographers, geographers and other specialists to store, manage, present and analyze spatial data. In bringing GIS online, the Web has made these tools available to a much wider audience, and with the advent of broadband, utilizing GIS has become much faster and easier. Increasingly, non-specialist members of the public can access, look up and make use of geographic information for their own purposes. Tianditu, literally "World Map", was China's first online mapping service, launched in late October 2010. The Chinese government has repeatedly said that this service is meant to offer comprehensive geographical data for Chinese users to learn more about the world.

Online payment

Direct payment by online banking requires the user to first activate online banking with their bank; payments, including UnionPay and credit card payments, are then drawn directly from the bank card. Third-party payment services instead integrate multiple payment methods, and the process is as follows:
1. The user recharges money from online banking into a third-party account.
2. The user pays from the third-party balance at the time of purchase.
3. A fee is charged when money is withdrawn back to the bank.
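A toy model of this escrow-style, three-step flow (the class, method names, and the 0.1% withdrawal fee are all invented for illustration):

# Illustrative model of the three-step third-party payment flow above.
class ThirdPartyAccount:
    WITHDRAWAL_FEE = 0.001  # hypothetical 0.1% fee charged on withdrawal

    def __init__(self):
        self.balance = 0.0

    def recharge(self, amount):
        # Step 1: move money from online banking into the third party.
        self.balance += amount

    def pay(self, amount):
        # Step 2: pay a merchant out of the third-party balance.
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount

    def withdraw(self, amount):
        # Step 3: withdrawing back to the bank incurs the fee.
        self.pay(amount + amount * self.WITHDRAWAL_FEE)
        return amount

acct = ThirdPartyAccount()
acct.recharge(100.0)   # step 1
acct.pay(30.0)         # step 2
acct.withdraw(50.0)    # step 3; balance is now 19.95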
Third-party payment methods are diverse, including mobile payments and fixed-line payments. The most commonly used third-party payment services include Alipay, Tenpay, Huanxun, Epro, and Kuaiqian (literally "fast money"); for an independent online merchant or a website offering payment services, the most common choices are these four: Alipay, Huanxun, Epro, and Kuaiqian. As of January 2015, Alipay, owned by Alibaba Group, had 600 million users, the largest user base among all online-payment providers.

Online gaming

China is the largest market for online games: as of 2013 the country had 368 million internet users playing online games, and the industry was worth US$13.5 billion. 73% of gamers are male, 27% female. In 2007, the Ministry of Culture (MoC) and the General Administration of Press and Publication (GAPP), along with several other agencies, implemented the Online Game Anti-Addiction System, which aimed to stop video game addiction in youth. This system restricted minors to no more than 3 hours of play a day and required identification (ID) checks to verify that players were of age. In November 2019, the Chinese government announced that gamers under the age of 18 would be banned from playing video games between the hours of 10 p.m. and 8 a.m.; in addition, under the new guidelines, gamers under 18 would be restricted to 90 minutes of play on weekdays and 3 hours on weekends and holidays. In 2021, the National Press and Publication Administration (NPPA) further tightened the rules, limiting playtime for under-18s to one hour per day, from 8 p.m. to 9 p.m., and only on Fridays, Saturdays, and Sundays.

Adult content

Although restrictions on political information remain strong, several sexually oriented blogs began appearing in early 2004. Women using the web aliases Muzi Mei (木子美) and Zhuying Qingtong (竹影青瞳) wrote online diaries of their sex lives and became minor celebrities. This was widely reported and criticized in mainland Chinese news media, and several of these bloggers' sites have since been blocked and remain so to this day. This coincided with an artistic nude photography fad (including a self-published book by dancer Tang Jiali) and the appearance of pictures of minimally clad women, or even topless photos, in a few Chinese newspapers, magazines and websites. Many dating and "adult chat" sites, both Chinese and foreign, have been blocked. Some, however, continue to be accessible, although this appears to be due more to the authorities being unaware of their existence than to any particular policy of leniency.

Censorship

The Golden Shield Project was proposed to the State Council by Zhu Rongji in 1993. As a massive surveillance and content control system, it was launched in November 2000 and became known as the Great Firewall of China. The apparatus of China's internet control is considered more extensive and more advanced than in any other country in the world. The authorities not only block website content but also monitor the internet access of individuals; such measures have attracted the derisive nickname "The Great Firewall of China". However, there are methods of circumventing the censorship by using proxy servers outside the firewall. Users may circumvent all of the censorship and monitoring of the Great Firewall if they have a secure VPN or SSH connection to a computer outside mainland China.
Disruptions of VPN services have been reported, and many of the free or popular services are now blocked. On 29 July 2017, Apple complied with an order from the Chinese government to remove all VPN apps from its App Store that were not pre-approved by the government. Different methods are used to block certain websites or pages, including DNS poisoning, blocking access to IP addresses, analyzing and filtering URLs, inspecting and filtering packets, and resetting connections.

Memes

The Baidu 10 Mythical Creatures, initially a humorous hoax, became a popular and widespread internet meme in China. These ten hoaxes reportedly originated in response to increasing online censorship and have become an icon of Chinese internet users' resistance to it. The State Administration of Press, Publication, Radio, Film and Television issued a directive on 30 March 2009 highlighting 31 categories of content prohibited online, including violence, pornography and content which may "incite ethnic discrimination or undermine social stability". Many Chinese internet users believe the instruction followed the official embarrassment over the "Grass Mud Horse" and "River Crab" memes. Industry observers believe the move was designed to stop the spread of parodies and other comments on politically sensitive issues in the run-up to the anniversary of the 4 June Tiananmen Square protests.

Cyber attacks

In the second quarter of 2014, China was by far the main country of origin of cyber attacks, with 43% of the worldwide total.

Internet advertising market

The size of China's online advertising market was RMB 3.3 billion in the third quarter of 2008, up 19.1% compared with the previous quarter. Tencent, Baidu.com Inc, Sina Corp and Google Inc. remained the top four in terms of market share. The keyword advertising market reached RMB 1.46 billion, accounting for 43.8% of the total internet advertising market, with a quarter-on-quarter growth rate of 19.3%, while online advertising on websites amounted to RMB 1.70 billion, accounting for 50.7% of the total, up 18.9% compared with the second quarter. Baidu has launched a CPA platform, and Sina Corp has launched an advertising scheme for intelligent investment; these moves indicate a market trend toward effective advertising at low cost. Online advertisements for automobiles, real estate and finance are expected to keep growing rapidly.

Online encyclopedias

Sogou Baike, 15+ million articles
Baike.com, claiming more than 18 million articles as of 2020
Baidu Baike, 3.5 million articles
Chinese Wikipedia, 594,376 articles as of October 2012

See also

Telecommunications in China
Telecommunications industry in China
Internet censorship in China
Golden Shield Project
China Internet Project
Human flesh search engine (HFSE)
List of Internet phenomena in China
List of Internet slang in China
Media of China
All-China Youth Network Civilization Convention

References

Computer-related introductions in 1994
https://en.wikipedia.org/wiki/Inslaw
Inslaw
Inslaw, Inc. is a Washington, D.C.-based information technology company that markets case management software for corporate and government users. Inslaw is known for developing Promis, an early case management software system. It is also known for a lawsuit that it brought against the United States Department of Justice in 1986 over Promis. Inslaw won damages in bankruptcy court, but these were overturned on appeal. The suit resulted in several Justice Department internal reviews, two Congressional investigations, the appointment of a special counsel by Attorney General William P. Barr, and a lengthy review of the special counsel's report under Attorney General Janet Reno. Inslaw's claims were finally referred by Congress to the Court of Federal Claims in 1995, and the dispute ended with the Court's ruling against Inslaw in 1998. During the 12-year-long legal proceedings, Inslaw accused the Department of Justice of conspiring to steal its software, attempting to drive it into Chapter 7 liquidation, using the stolen software for covert intelligence operations against foreign governments, and involvement in a murder. These accusations were eventually rejected by the special counsel and the Court of Federal Claims.

History of Inslaw

Inslaw began as a non-profit organization called the Institute for Law and Social Research. The Institute was founded in 1973 by William A. Hamilton to develop case management software for law enforcement office automation. Funded by grants and contracts from the Law Enforcement Assistance Administration (LEAA), the Institute developed a program it called "Promis", an acronym for Prosecutors' Management Information System, for use in law enforcement record keeping and case-monitoring activities. When Congress voted to abolish the LEAA in 1980, Hamilton decided to continue operating as a for-profit corporation and market the software to current and new users. In January 1981 Hamilton established the for-profit Inslaw, transferring the Institute's assets to the new corporation.

Development of Promis

Promis software was originally written in COBOL for use on mainframe computers; later a version was developed to run on 16-bit minicomputers such as the Digital Equipment Corporation PDP-11. The primary users of this early version of the software were the United States Attorney's Office for the District of Columbia and state and local law enforcement. Both the mainframe and 16-bit minicomputer versions of Promis were developed under LEAA contracts, and in later litigation both Inslaw and the DOJ agreed that this early version of Promis was in the public domain, meaning that neither the Institute nor its successor had exclusive rights to it.

The Promis implementation contract

In 1979, the DOJ contracted with the Institute for a pilot project that installed versions of Promis in four US Attorneys' offices: two using the minicomputer version, and two using a "word-processor" version which the Institute was developing. Encouraged by the results, the Department decided in 1981 to go ahead with a full implementation of locally based Promis systems and issued a request for proposals (RFP) to install the minicomputer version of Promis in the 20 largest United States Attorneys' offices. This contract, usually called the "implementation contract" in later litigation, also included developing and installing "word-processor" versions of Promis at 74 smaller offices.
The now for-profit Inslaw responded to the RFP, and in March 1982 was awarded the three-year, $10 million contract by the contracting division, the Executive Office for United States Attorneys (EOUSA).

Contract disputes and Inslaw bankruptcy

The contract did not go smoothly, and disputes between EOUSA and Inslaw began soon after its execution. A key dispute over proprietary rights had to be resolved by a bilateral change to the original contract. (This change, "Modification 12," is discussed below.) EOUSA also determined that Inslaw was in violation of the terms of an "advance payment" clause in the contract; this clause was important to Inslaw's financing and became the subject of months of negotiations. There were also disputes over service fees. During the first year of the contract, the DOJ did not have the hardware to run Promis in any of the offices covered by the contract. As a stopgap measure, Inslaw provided Promis on a time-share basis through a VAX computer in Virginia, allowing the offices to access Promis on the Inslaw VAX through remote terminals until the needed equipment was installed on-site. EOUSA claimed that Inslaw had overcharged for this service and withheld payments. The DOJ ultimately acquired Prime computers, and Inslaw began installing Promis on these in the second year of the contract, in August 1983. The "word-processor" Promis installation, however, continued to have problems, and in February 1984 the DOJ cancelled this portion of the contract. Following this cancellation, the financial condition of Inslaw worsened, and the company filed for Chapter 11 bankruptcy in February 1985.

Proprietary rights dispute

The implementation contract called for the installation of the minicomputer version of Promis, plus some later modifications that had also been funded by LEAA contracts and, like the minicomputer version, were in the public domain. In addition, the contract's data rights clause "gave the government unlimited rights in any technical data and computer software delivered under the contract." This presented a potential conflict with Inslaw's plans to market a commercial version of Promis which it called "Promis 82" or "Enhanced Promis." The issue came up early in the implementation contract, but was resolved by an exchange of letters in which the DOJ signed off after Inslaw assured it that Promis 82 contained "enhancements undertaken by Inslaw at private expense after the cessation of LEAA funding." The issue arose again in December 1982 when the DOJ invoked its contract rights to request all the Promis programs and documentation being provided under the contract. The reason the DOJ gave for this request in later litigation was concern about Inslaw's financial condition: at that point, the DOJ had access to Promis only through the VAX time-sharing arrangement with Inslaw, and if Inslaw failed, the DOJ would be left without a copy of the software and data it was entitled to under the contract. Inslaw responded in February 1983 that it was willing to provide the computer tapes and documents for Promis, but that the tapes it had were for the VAX version of Promis and included proprietary enhancements. Before providing the tapes, Inslaw wrote, "Inslaw and the Department of Justice will have to reach an agreement on the inclusion or exclusion" of the features.
The DOJ's response was to emphasize that the implementation contract called for a version of Promis in which the government had unlimited rights, and to ask for information about the enhancements Inslaw claimed as proprietary. Inslaw agreed to provide this information, but noted that it would be difficult to remove the enhancements from the time-sharing version of Promis and offered to provide the VAX version if the DOJ would agree to limit its distribution. In March 1983, the DOJ again informed Inslaw that the implementation contract required Inslaw to produce software in which the government had unlimited rights, and that delivery of software with restrictions would not satisfy the contract.

Contract revisions

After some back and forth, DOJ contracting officer Peter Videnieks sent a letter proposing a contract modification. Under the modification, in return for the software and data requested, the DOJ agreed not to disclose or disseminate the material "beyond the Executive Office for United States Attorneys and the 94 United States Attorneys' Offices covered by the subject contract, until the data rights of the parties to the contract are resolved." To resolve the data rights issue, the letter proposed that Inslaw identify its claimed proprietary enhancements and demonstrate that the enhancements were developed "at private expense and outside the scope of any government contract." After these were identified, the government would then "either direct Inslaw to delete those enhancements from the versions of PROMIS to be delivered under the contract or negotiate with Inslaw regarding the inclusion of those enhancements in that software." Inslaw eventually agreed to this suggestion, and the change, referred to as "Modification 12," was executed in April 1983. Inslaw then provided the DOJ with tapes and documentation for the VAX version of Promis. Under this arrangement, however, Inslaw had substantial difficulty demonstrating the extent of the enhancements and the use of private funding in their development. It proposed several methods for doing this, but these were rejected by the DOJ as inadequate. Inslaw's attempts to identify the proprietary enhancements and their funding ended when it began installing Promis on the USAO Prime computers in August 1983. By the end of the contract in March 1985, it had completed installing Promis in all 20 of the offices specified in the implementation contract. Since none of the available versions of Promis was compatible with the Department's new Prime computers, Inslaw ported the VAX version, which contained its claimed enhancements, to the Prime computers.

Inslaw's bankruptcy case

After Inslaw filed for Chapter 11 bankruptcy in February 1985, there were still disputes over implementation contract payments, and as a result the DOJ was listed as one of Inslaw's creditors. At the same time, the DOJ continued its office automation program and, in place of the originally planned "word-processor" version of Promis, installed the version ported to Prime minicomputers in at least 23 more offices. When Inslaw learned of the installations, it notified EOUSA that this was a violation of Modification 12 and filed a claim for $2.9 million, which Inslaw said represented license fees for the software the DOJ had installed itself. Inslaw also filed claims for services performed during the contract, for a total of $4.1 million. The DOJ contracting officer, Peter Videnieks, denied all of these claims.
Inslaw appealed the denial of the service fees to the Department of Transportation Board of Contract Appeals (DOTBCA). For the data rights claim, however, Inslaw took a different approach. In June 1986 it filed an adversary proceeding in the Bankruptcy Court, claiming that DOJ's actions violated the automatic stay provision of the bankruptcy code by interfering with the company's rights to its software. Inslaw's initial filing claimed that the contract disputes arose because the DOJ officials who administered the contract were biased against Inslaw. The filing specifically mentioned the Promis project manager, C. Madison Brewer, and Associate Attorney General D. Lowell Jensen. Brewer had previously been Inslaw's general counsel, but according to Inslaw had been terminated for cause. Inslaw claimed that Brewer's dismissal caused him to be unreasonably biased against Inslaw and owner William Hamilton. Jensen was a member of the project oversight committee at the time of the contract. He had helped to develop a competing case-management software system several years earlier, and Inslaw claimed that this prejudiced him against Promis and led him to ignore Brewer's unreasonable bias. "Independent handling" proceeding In February 1987, Inslaw requested an "independent handling hearing", to force the DOJ to conduct the adversary proceeding "independent of any Department of Justice officials involved in the allegations made" in the proceeding. The bankruptcy court judge assigned to handle Inslaw's Chapter 11 proceedings, Judge George F. Bason, granted the request, and scheduled the hearing for June. Prior to the hearing, Inslaw owners William and Nancy Hamilton spoke to Anthony Pasciuto, then the Deputy Director of the Executive Office of the United States Trustees (EOUST), a DOJ component responsible for overseeing the administration of bankruptcy cases. Pasciuto told the Hamiltons that the Director of the EOUST, Thomas Stanton, had pressured the U.S. Trustee assigned to the Inslaw case, Edward White, to convert Inslaw's bankruptcy from Chapter 11 (reorganization of the company) to Chapter 7 (liquidation). The Hamiltons had Inslaw's attorneys depose the people whom Pasciuto had named. One of them corroborated part of Pasciuto's claims: Cornelius Blackshear, then a U.S. Trustee in New York, swore in his deposition testimony that he was aware of pressure to convert the bankruptcy. Two days later, however, Blackshear submitted an affidavit recanting his testimony, saying that he had mistakenly recalled an instance of pressure from another case. Blackshear repeated his retraction at the June hearing on Inslaw's request. Pasciuto also retracted part of his claims at this hearing, and said instead that he did not use the word "conversion." Judge Bason, however, chose to believe the original depositions of Pasciuto and Blackshear, and found that the DOJ "unlawfully, intentionally and willfully" tried to convert INSLAW's Chapter 11 reorganization case to a Chapter 7 liquidation "without justification and by improper means." In the ruling, Bason was harshly critical of the testimony of several DOJ officials, describing it as "evasive and unbelievable," or "simply on its face unbelievable." He enjoined the DOJ and the EOUST from contacting anyone in the U.S. Trustee's office handling the Inslaw case except for information requests. Adversary proceeding Inslaw's adversary proceeding followed a month after the "independent handling" hearing.
The proceeding lasted for two and a half weeks, from late July to August. In a bench ruling on September 28, Judge Bason found that DOJ project manager Brewer, "believing he had been wrongfully discharged by Mr. Hamilton and INSLAW, developed an intense and abiding hatred for Mr. Hamilton and INSLAW," and had used his position at DOJ to "vent his spleen." He also found that the DOJ "took, converted, stole, INSLAW's enhanced PROMIS by trickery, fraud, and deceit." Specifically, he found that DOJ had used the threat of terminating "advance payments" to get a copy of the enhanced Promis that it was not entitled to, and that it had negotiated Modification 12 of the contract in bad faith, never intending to meet its commitment under the modification. In his ruling, Judge Bason again called the testimony of DOJ witnesses "biased", "unbelievable", and "unreliable." Judge Bason not reappointed Bankruptcy Court Judge Bason was appointed to the District of Columbia Bankruptcy Court in February 1984 for a four-year term. He sought reappointment early in 1987, but was informed in December that the Court of Appeals had chosen another candidate. Judge Bason then suggested in a letter to the Court of Appeals that DOJ might have improperly influenced the selection process because of his bench ruling for Inslaw. After learning of this letter, DOJ lawyers moved to recuse Judge Bason from the Inslaw case, but their motion was rejected, and Judge Bason remained on the case until the expiration of his term on February 8, 1988. In early February, Judge Bason filed a lawsuit seeking to prevent the judge the Court of Appeals had selected for the District of Columbia Bankruptcy Court from taking office, but the suit was rejected. Bason's last actions in the case were to file a written ruling on Inslaw's adversary proceeding, and to award damages and attorneys' fees to Inslaw. Appeals of the bankruptcy suit After Judge Bason left the bench, the Inslaw bankruptcy was assigned to Baltimore bankruptcy judge James Schneider. Schneider accepted Inslaw's reorganization plan at the end of 1988 after a cash infusion from IBM. In the meantime, DOJ filed an appeal of Judge Bason's adversary suit ruling in the District of Columbia District Court. In November 1989, District Court Judge William Bryant upheld Bason's ruling. Reviewing the case under the "clear error" standard for reversal, Bryant wrote: "[T]here is convincing, perhaps compelling support for the findings set forth by the bankruptcy court." DOJ appealed the District Court decision, and in May 1991 the Court of Appeals found the DOJ had not violated the automatic stay provisions of the bankruptcy code and that the Bankruptcy Court therefore lacked jurisdiction over Inslaw's claims against DOJ. It vacated the Bankruptcy Court's rulings and dismissed Inslaw's complaint. Inslaw appealed the decision to the Supreme Court, which declined to hear the case. Federal investigations Inslaw's allegations against the Justice Department led to a number of investigations, including internal Department probes and Congressional investigations by the Senate's Permanent Subcommittee on Investigations (PSI) and the House Judiciary Committee. The DOJ eventually appointed a special counsel to investigate. After the special counsel issued his report, Inslaw responded with a lengthy rebuttal. The DOJ then re-examined the special counsel's findings, resulting in the release of a final Department review.
During these federal investigations, Inslaw began making allegations of a broad, complex conspiracy to steal Promis, involving many more people and many more claims than the bankruptcy proceedings had covered. These later allegations are described below under the investigations which examined them. Justice Department investigations After Judge Bason's June 1987 bench ruling found several DOJ officials' testimony "unbelievable", DOJ's Office of Professional Responsibility (OPR) opened an investigation of DOJ staff who testified at the hearing, including C. Madison Brewer, Peter Videnieks, and EOUST director Thomas Stanton. It also opened a separate investigation of EOUST deputy director Anthony Pasciuto. OPR recommended Pasciuto be terminated, based on his hearing testimony that he had made false statements to the Hamiltons, but in its final report it found no evidence that the other officials investigated had applied pressure to convert Inslaw's bankruptcy or lied during the independent handling hearing. After Judge Bason issued his written ruling in January 1988, Inslaw's attorneys also complained to the DOJ's Public Integrity Section that Judge Blackshear and U.S. Trustee Edward White had committed perjury. Public Integrity opened an investigation that ultimately found perjury cases could not be proven, and recommended declining prosecution. The Senate report The first Congressional investigation into the Inslaw case came from the Senate's Permanent Subcommittee on Investigations (PSI). PSI's report was issued in September 1989, after a year and a half of investigation. During the investigation, Inslaw made a number of new allegations, which take up most of the PSI report. New allegations Inslaw's new allegations described the Justice Department dispute with Inslaw as part of a broad conspiracy to drive Inslaw into bankruptcy so that Earl Brian, the founder of a venture capital firm called Biotech (later Infotechnology), could acquire Inslaw's assets, including its software Promis. Inslaw owner William Hamilton told PSI investigators that Brian had first attempted to acquire Inslaw through a computer services corporation he controlled, called Hadron. Hamilton said that he rejected an offer from Hadron to acquire Inslaw, and that Brian then attempted to drive Inslaw into bankruptcy through his influence with Attorney General Edwin Meese. Meese was willing to do this, Hamilton claimed, because both Meese and Brian had served in the cabinet of Ronald Reagan when he was governor of California, and Meese's wife had later bought stock in Brian's company. The contract dispute with DOJ, Hamilton alleged, was contrived by Brian and Meese with the help of Associate Attorney General Jensen and Promis project manager Brewer. Hamilton also complained that a DOJ automation program, 'Project Eagle', was part of a scheme to benefit Brian after he acquired Promis, and that an AT&T subsidiary, AT&T Information Systems, had engaged with the DOJ in a conspiracy to interfere with Inslaw's efforts to reorganize. He also told PSI investigators that the DOJ had undermined Bankruptcy Court Judge Bason's reappointment, and had attempted to undermine Inslaw's lead counsel in the bankruptcy suit. Report findings Senate investigators found no proof for any of these claims.
Their report noted that the bankruptcy court ruling had not concluded that Jensen had engaged in a conspiracy against Inslaw, and that their own investigation had found no proof that Jensen and Meese had conspired to ruin Inslaw or steal its product, or that Brian or Hadron were involved in a conspiracy to undermine Inslaw and acquire its assets. The report did re-examine the bankruptcy finding that the DOJ had pressured the United States Trustee to recommend converting Inslaw's bankruptcy from Chapter 11 to Chapter 7, and found that EOUST director Thomas Stanton had improperly tried to get special handling for Inslaw's bankruptcy. He did this, the report stated, in order to gain support for the EOUST from the DOJ. The report concluded that the Subcommittee found no proof of a broad conspiracy against Inslaw within the DOJ, or of a conspiracy between DOJ officials and outside parties to force Inslaw into bankruptcy for personal benefit. However, it criticized DOJ for hiring a former Inslaw employee (Brewer) to oversee Inslaw's contract with EOUSA, and for failing to follow standard procedures in handling Inslaw's complaints. It also criticized the DOJ for a lack of cooperation with the Subcommittee, which delayed the investigation and undercut the Subcommittee's ability to interview Department employees. The House report Following the PSI report, the House Judiciary Committee began another investigation into the dispute. By the time the report was released in September 1992, Inslaw's bankruptcy suit had been first upheld in the D.C. District Court, then vacated by the D.C. Court of Appeals. The House report thus took a different approach to several of the legal issues that the Senate report had discussed. As with the Senate report, much of the House report dealt with new evidence and new allegations from Inslaw. New allegations Inslaw's new evidence consisted of statements and affidavits from witnesses supporting Inslaw's previous claims. The most important of these witnesses was Michael Riconosciuto, who swore in an affidavit for Inslaw that businessman Earl Brian had provided him with a copy of Inslaw's enhanced Promis, supporting Inslaw's earlier claims that Brian had been interested in acquiring and marketing the software. A new allegation was also introduced in Riconosciuto's affidavit: Riconosciuto swore that he added modifications to enhanced Promis "to support a plan for the implementation of PROMIS in law enforcement and intelligence agencies worldwide." According to Riconosciuto, "Earl W. Brian was spearheading the plan for this worldwide use of the PROMIS computer software." Another important witness was Ari Ben-Menashe, who also provided affidavits for Inslaw that Brian had brought both public domain and enhanced versions of Promis to Israel, and eventually sold the enhanced version to the Israeli government. Committee investigators interviewed Ben-Menashe in May 1991, and he told them that Brian sold enhanced Promis to both Israeli intelligence and Singapore's armed forces, receiving several million dollars in payment. He also testified that Brian sold public domain versions to Iraq and Jordan. Report findings On the issue of Inslaw's rights in "enhanced Promis", the House report found that "There appears to be strong evidence" supporting Judge Bason's finding that DOJ "acted willfully and fraudulently" when it "took, converted and stole" INSLAW's Enhanced PROMIS by "trickery, fraud and deceit."
Like Judge Bason, the report found that DOJ did not negotiate with Inslaw in good faith, citing a statement by Deputy Attorney General Arnold Burns as "one of the most damaging statements received by the committee." According to the report, Burns told OPR investigators that Department attorneys had informed him in 1986 that INSLAW's claim of proprietary rights was legitimate and that DOJ would probably lose in court on this issue. House investigators found it "incredible" that DOJ would pursue litigation after such a determination, and concluded, "This clearly raises the specter that the Department actions taken against INSLAW in this matter represent an abuse of power of shameful proportions." On the new allegations brought by Inslaw, the report did not make any factual findings despite the Committee's extensive investigations, but it did conclude that further investigation was warranted into the statements and claims of Inslaw's witnesses. The report also discussed the case of Danny Casolaro, a freelance writer who became interested in the Inslaw case in 1990 and began his own investigation. According to statements from Casolaro's friends and family, the scope of his investigation eventually expanded to include a number of scandals of the time, including the Iran-Contra affair, the October Surprise conspiracy claims, and the BCCI banking scandal. In August 1991, Casolaro was found dead in a hotel room where he was staying. The initial coroner's report ruled his death a suicide, but Casolaro's family and friends were suspicious, and a lengthy second autopsy was conducted. This too ruled Casolaro's death a suicide, but House investigators noted that "The suspicious circumstances surrounding his death have led some law enforcement professionals and others to believe that his death may not have been a suicide," and strongly urged further investigation. The Democratic majority called upon Attorney General Dick Thornburgh to compensate Inslaw immediately for the harm that the government had "egregiously" inflicted on the company. The Republican minority dissented, and the committee divided along party lines 21–13. Bua report In October 1991 William P. Barr succeeded Dick Thornburgh as Attorney General. In November, Barr appointed retired federal judge Nicholas J. Bua as a Special Counsel to investigate the allegations in the Inslaw case. Bua was granted authority to appoint his own staff and investigators, to impanel a grand jury, and to issue subpoenas. In March 1993 he issued a 267-page report. The report concluded that there was no credible evidence to support Inslaw's allegations that DOJ officials conspired to help Earl Brian acquire Promis software, and that the evidence was overwhelming that there was no connection between Brian and Promis. It found the evidence "woefully insufficient" to support the claim that DOJ obtained enhanced PROMIS through "fraud, trickery, and deceit," or that DOJ illegally distributed PROMIS inside or outside of DOJ. It found no credible evidence that DOJ had influenced the selection process that replaced Judge Bason. It found "insufficient evidence" to confirm the allegation that DOJ employees sought to influence the conversion of Inslaw's bankruptcy or commit perjury to conceal the attempt to do so. Finally, it concluded that the DOJ had not sought to influence the investigation of Danny Casolaro's death, and that the physical evidence strongly supported the autopsy finding of suicide.
Bua's report came to a number of conclusions that contradicted earlier proceedings and investigations. Judge Bason had found that DOJ's claim that it was concerned about Inslaw's financial condition when it requested a copy of Promis was a false pretext. Bua rejected this finding as "just plain wrong." The House report had cited Deputy Attorney General Burns' statements as evidence that DOJ knew it did not have a valid defense to Inslaw's claims. Bua found this interpretation "entirely unwarranted." Bua was particularly critical of several of Inslaw's witnesses. He found that Michael Riconosciuto had given inconsistent accounts in statements to the Hamiltons, in his affidavit, and in testimony at his 1992 trial for manufacturing methamphetamine. Bua compared Riconosciuto's story about Promis to "a historical novel; a tale of total fiction woven against the background of accurate historical facts." Bua found Ari Ben-Menashe's affidavits for Inslaw inconsistent with his later statements to Bua, in which Ben-Menashe said that he had "no knowledge of the transfer of Inslaw's proprietary software by Earl Brian or DOJ" and denied that he had ever said this elsewhere. Ben-Menashe said that others simply assumed that he was referring in his previous statements to Inslaw's Promis, but acknowledged that one reason he failed to clarify this was because he was going to publish a book and "he wanted to make sure that his affidavit was filed in court and came to the attention of the public." Bua also noted that the House October Surprise Task Force had examined Ben-Menashe's October Surprise allegations and found them "totally lacking in credibility," "demonstrably false from beginning to end," "riddled with inconsistencies and factual misstatements," and "a total fabrication." He specifically observed that the Task Force found no evidence to substantiate Ben-Menashe's October Surprise allegations about Earl Brian. DOJ review Inslaw responded to the Bua report with a 130-page Rebuttal, and another set of new allegations in an Addendum. These allegations included the claim that the DOJ's Office of Special Investigations was "a front for the Justice Department's own covert intelligence service" and that "another undeclared mission of the Justice Department's covert agents was to insure that investigative journalist Danny Casolaro remained silent about the role of the Justice Department in the INSLAW scandal by murdering him in West Virginia in August 1991." By this time, Janet Reno had succeeded Barr as Attorney General after Bill Clinton's election as president. Reno then asked for a review of Bua's report with recommendations on appropriate actions. In September 1994, the Department released a 187-page review (written by Assistant Associate Attorney General John C. Dwyer) which concluded that "there is no credible evidence that Department officials conspired to steal computer software developed by Inslaw, Inc. or that the company is entitled to additional government payments." The review also reaffirmed the earlier police findings that Casolaro's death was a suicide and rejected Inslaw's claim that OSI agents had murdered Casolaro as "fantasy," with "no corroborative evidence that is even marginally credible." Court of Federal Claims trial and ruling In May 1995, the United States Senate asked the U.S. Court of Federal Claims to determine if the United States owed Inslaw compensation for the government's use of Promis. On July 31, 1997, Judge Christine Miller, the hearing officer for the U.S.
Court of Federal Claims ruled that all of the versions of Promis were in the public domain and that the government had therefore always been free to do whatever it wished with Promis. The following year, the appellate authority, a three-judge Review Panel of the same court, upheld Miller's ruling and in August 1998 informed the Senate of its findings. Later developments A 1999 book by the British journalist Gordon Thomas, titled Gideon's Spies: The Secret History of the Mossad, repeated the claims of Ari Ben-Menashe that Israeli intelligence created and marketed a Trojan horse version of Promis in order to spy on intelligence agencies in other countries. In 2001, the Washington Times and Fox News each quoted federal law enforcement officials familiar with the debriefing of former FBI agent Robert Hanssen as claiming that the convicted spy had stolen copies of a Promis derivative for his Soviet KGB handlers. Later reports and studies of Hanssen's activities have not repeated these claims. References Further reading The Last Circle: Danny Casolaro's Investigation into the Octopus and the PROMIS Software Scandal by Cheri Seymour (Trine Day, September 26, 2010) The Attorney General's refusal to provide congressional access to "privileged" INSLAW documents: hearing before the Subcommittee on Economic and Commercial Law of the Committee on the Judiciary, House of Representatives, One Hundred First Congress, second session, December 5, 1990. Washington: U.S. G.P.O.: For sale by the Supt. of Docs., Congressional Sales Office, U.S. G.P.O., 1990. Superintendent of Documents Number Y 4.J 89/1:101/114 PROMIS: briefing series. Washington, D.C.: Institute for Law and Social Research, 1974–1977. "[A] series of 21 Briefing Papers for PROMIS (Prosecutor's Management Information System), this publication was prepared by the Institute for Law and Social Research (INSLAW), Washington, D.C., under a grant from the Law Enforcement Assistance Administration (LEAA), which has designated PROMIS as an Exemplary Project." OCLC Number 5882076 External links Inslaw related documents on the Internet Archive United States contract case law Copyright infringement of software Political scandals in the United States Companies that have filed for Chapter 7 bankruptcy Companies that filed for Chapter 11 bankruptcy in 1985 Reagan administration controversies
4902158
https://en.wikipedia.org/wiki/QGIS
QGIS
QGIS (until 2013 known as Quantum GIS) is a free and open-source cross-platform desktop geographic information system (GIS) application that supports viewing, editing, and analysis of geospatial data. Functionality QGIS functions as geographic information system (GIS) software, allowing users to analyze and edit spatial information, in addition to composing and exporting graphical maps. QGIS supports both raster and vector layers; vector data is stored as point, line, or polygon features. Multiple formats of raster images are supported, and the software can georeference images. QGIS supports shapefiles, coverages, personal geodatabases, dxf, MapInfo, PostGIS, and other formats. Web services, including Web Map Service and Web Feature Service, are also supported to allow use of data from external sources. QGIS integrates with other open-source GIS packages, including PostGIS, GRASS GIS, and MapServer. Plugins written in Python or C++ extend QGIS's capabilities. Plugins can geocode using the Google Geocoding API, perform geoprocessing functions similar to those of the standard tools found in ArcGIS, and interface with PostgreSQL/PostGIS, SpatiaLite and MySQL databases. QGIS can also be used with SAGA GIS and Kosmo. Development Gary Sherman began development of Quantum GIS in early 2002, and it became an incubator project of the Open Source Geospatial Foundation in 2007. Version 1.0 was released in January 2009. In 2013, along with the release of version 2.0, the name was officially changed from Quantum GIS to QGIS to avoid confusion, as both names had been used in parallel. Written in C++, QGIS makes extensive use of the Qt library. In addition to Qt, required dependencies of QGIS include GEOS and SQLite. GDAL, GRASS GIS, PostGIS, and PostgreSQL are also recommended, as they provide access to additional data formats. QGIS is available for multiple operating systems, including Mac OS X, Linux, Unix, and Microsoft Windows. A mobile version of QGIS was under development for Android. QGIS can also be used as a graphical user interface to GRASS. QGIS has a small install footprint on the host file system compared to commercial GIS packages and generally requires less RAM and processing power; hence it can be used on older hardware or run simultaneously with other applications where CPU power may be limited. QGIS is maintained by volunteer developers who regularly release updates and bug fixes. Developers have translated QGIS into 48 languages, and the application is used internationally in academic and professional environments. Several companies offer support and feature development services. Function Layers QGIS can display multiple layers containing data from different sources or different depictions of the same source. Preparing maps To prepare a printed map with QGIS, the Print Layout tool is used. It can be used for adding multiple map views, labels, legends, etc. Licensing As a free software application under the GNU GPLv2, QGIS can be freely modified to perform different or more specialized tasks. Two examples are the QGIS Browser and QGIS Server applications, which use the same code for data access and rendering, but present different front-end interfaces.
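The Python scripting interface mentioned above (PyQGIS) can be illustrated with a short sketch. The example below is not taken from the QGIS documentation; it assumes a Python environment with the qgis bindings installed, and the install prefix "/usr" and the file name "cities.shp" are placeholder assumptions that vary by system. It loads a shapefile as a vector layer outside the desktop GUI and prints each feature:

from qgis.core import QgsApplication, QgsVectorLayer

# Bootstrap QGIS in headless mode; the prefix path is an assumption
# and depends on where QGIS is installed.
QgsApplication.setPrefixPath("/usr", True)
qgs = QgsApplication([], False)  # False = run without a GUI
qgs.initQgis()

# Load a shapefile through the OGR provider ("cities.shp" is hypothetical).
layer = QgsVectorLayer("cities.shp", "cities", "ogr")
if layer.isValid():
    print("Loaded", layer.featureCount(), "features")
    for feature in layer.getFeatures():
        # Print each feature's id and its geometry as well-known text.
        print(feature.id(), feature.geometry().asWkt())
else:
    print("Failed to load layer")

qgs.exitQgis()

Plugins use the same qgis.core classes; the QgsApplication bootstrapping above is only needed when scripting outside the running QGIS desktop application.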
Adoption Many public and private organizations have adopted QGIS, including the US National Security Agency, the National Geospatial-Intelligence Agency, the Austrian state of Vorarlberg, the Swiss cantons of Glarus and Solothurn, and Land Information New Zealand, a New Zealand public service department. References External links QGIS Blog QGIS Podcast QGIS Changelogs QGIS Blogposts QGIS - Map Showcase (Flickr) QGIS - Screenshots (Flickr) QGIS Spanish Blogpost Free GIS software Free software programmed in C++ Software that uses Qt
37539119
https://en.wikipedia.org/wiki/2012%20Kraft%20Fight%20Hunger%20Bowl
2012 Kraft Fight Hunger Bowl
The 2012 Kraft Fight Hunger Bowl was a postseason American college football bowl game held on December 29, 2012 at AT&T Park in San Francisco, California, United States. The 11th edition of the Kraft Fight Hunger Bowl began at 1:00 p.m. PST, and was televised on ESPN2. It featured the Arizona State Sun Devils of the Pac-12 Conference (Pac-12) and the Navy Midshipmen, who competed as a conference independent. It was the final game of the 2012 NCAA Division I FBS football season for both teams. The game, won by the Sun Devils 62–28, drew 34,172 spectators. In accordance with a 2009 deal with bowl organizers, the Midshipmen accepted their invitation to play in the game on November 3 after winning six of their first nine games of the season. After the Sun Devils achieved bowl eligibility by defeating their in-state rival, the Arizona Wildcats, for a 7–5 regular-season record, the team accepted its bowl invitation on December 2. The pregame buildup focused on the contest between Navy's triple option offense and the Sun Devils' defense. After a change in quarterbacks, the Midshipmen's rushing offense had become one of the best in the nation; however, the team's passing offense ranked near the bottom of the Football Bowl Subdivision (FBS). Arizona State's balanced offense hinged on its quarterback's efficiency, with the potential to set a number of school records for the season. Defensively the team ranked as one of the best in the nation in sacks and tackles for loss, but its rushing defense ranked 74th in yards allowed per game. Most analysts predicted a victory for the Sun Devils. Navy sold over 10,000 tickets to the game, and Arizona State sold over 5,000. The Sun Devils scored the first 21 points of the game in the first quarter, while keeping the Midshipmen scoreless. After Navy scored its first points in the second quarter, Arizona State scored two more touchdowns to bring the score to 34–7 at halftime. The Sun Devils added four more touchdowns in the third quarter, but the only additional points for the Midshipmen came from a 95-yard kickoff return for a touchdown. Navy scored the only two touchdowns of the fourth quarter, ending the game with a score of 62–28. The bowl brought both teams' won–lost records to 8–5. Team selection In September 2009 organizers of the Emerald Bowl (renamed the Kraft Fight Hunger Bowl to reflect its new sponsor) announced that they had renewed their contract with the Pac-12 Conference for four years, with the sixth bowl-eligible team in the conference playing against a team from the Western Athletic Conference in the 2010 and 2013 editions of the game and the Navy Midshipmen in the 2012 game (assuming that Navy was bowl-eligible). Before the 2012 season, college-football analyst Phil Steele projected that Navy would play the California Golden Bears, but in late October he changed his projected Pac-12 team to the Sun Devils. On November 3, CBSSports.com writer Jerry Palm projected that Navy would play the Arizona Wildcats in the game, but after the USC Trojans lost to the UCLA Bruins on November 17, both Palm and ESPN.com analyst Kevin Gemmell projected that the Trojans would play in the Fight Hunger Bowl. Navy received $750,000 for its participation, and Arizona State received $950,000. The game's executive director, Gary Cavalli, received $375,176 in compensation. The game was the 18th bowl appearance for the Midshipmen and the 26th for the Sun Devils; it was the first-ever meeting between the teams.
Navy The Navy Midshipmen, representing the United States Naval Academy, began their 2012 season with a 50–10 loss to the Notre Dame Fighting Irish at the Emerald Isle Classic in Dublin, Ireland. They accumulated a 1–3 record over their first four games, but began a five-game winning streak with an overtime victory over the Air Force Falcons on October 6. On November 3, after defeating the Florida Atlantic Owls to raise their season record to 6–3, the Midshipmen accepted the first invitation of the 2012–13 NCAA bowl season to play in the Kraft Fight Hunger Bowl. Navy athletic director Chet Gladchuk cited the team's positive experiences at the 2004 Emerald Bowl as a primary reason for returning to play in the game. The Midshipmen won that game, against the New Mexico Lobos, by a score of 34–19; it was highlighted by a Midshipmen drive spanning the third and fourth quarters which set an NCAA record for the longest drive in a college-football game. During the rest of the season the Midshipmen lost to the Troy Trojans and defeated the Texas State Bobcats before playing the final game of the regular season, the 113th Army–Navy Game. After the Sun Devils accepted their invitation, Navy head coach Ken Niumatalolo stressed that his team still considered the game against the Army Black Knights for the Commander-in-Chief's Trophy more pressing than preparations for the bowl. The Midshipmen won 17–13, their 11th consecutive victory in that series. The 2012 Kraft Fight Hunger Bowl was the first postseason game for the Midshipmen since their loss in the 2010 Poinsettia Bowl to the San Diego State Aztecs; Navy had lost four of its last five postseason games. Arizona State The Sun Devils, representing Arizona State University (ASU), began their first season under head coach Todd Graham by winning four of their first five games; their lone defeat was a 24–20 loss to the Missouri Tigers. They then lost to their next four Pac-12 Conference opponents, giving them a 5–5 record through the first 10 games of the season. With two games left, the team needed at least one more win for bowl eligibility, and defeated the Washington State Cougars on November 17. On November 23 it defeated the 24th-ranked Arizona Wildcats in its final regular-season game to win the Territorial Cup, finishing the season at 7–5 (its best record since the 2007 season). After the game, bowl organizers announced that one of the two Arizona teams would receive the second invitation to play in the Kraft Fight Hunger Bowl, with the other team probably going to the 2012 New Mexico Bowl. Both schools lobbied heavily for an invitation to the Kraft bowl, with ASU officials saying that the date of the New Mexico Bowl conflicted with the school's final-examination schedule. The Sun Devils accepted their invitation on December 2. It was the second straight bowl game for ASU; the Sun Devils had lost the 2011 Maaco Bowl Las Vegas to the Boise State Broncos the previous year. Pre-game buildup Both teams were allotted 15 extra practices to prepare for the bowl game, and both expected to use the game to increase their recruiting presence in the San Francisco Bay Area. After their final regular-season game against Arizona, the Sun Devils had 37 days to prepare for the bowl. Arizona State coach Todd Graham, who attracted criticism in early December when he ranked his team 20th in the nation in the coaches' poll, used part of his extended practice schedule to prepare younger players for the upcoming season.
After it defeated Army, the Midshipmen had 21 days to prepare for the bowl game; however, due to the demanding final-examination schedule at the United States Naval Academy and the unusual length of Navy's football season, head coach Ken Niumatalolo gave his team extra recovery time after the Army–Navy game. The team held one-hour practices at 6 a.m. to avoid conflicting with the school's examination schedule. Before the team's game against Army, Vice Admiral Michael H. Miller reinstated linebacker Brye French and slotback Bo Snelson (who had been removed as a team captain before the Emerald Isle Classic). During pregame walk-throughs the team prepared its new play-calling system, which consisted of a number of oversized cards divided into four sections with a variety of colors and symbols. Members of both teams participated in community-service activities in the days before the game; the Midshipmen and Sun Devils served meals at St. Anthony's Dining Room and Glide Memorial Church, respectively, on Christmas Day. Offensive matchups Navy offense Although the Midshipmen scored 10 or fewer points in three of their first four games, they averaged almost 32 points per game in their seven games before facing Army. The team's offense was built around its flexbone-style triple option scheme, which ranked sixth in the nation in rushing yards per game. Senior slotback Gee Gee Greene led the team in rushing yards with 765. Greene (who had played in every Midshipmen game since 2009) had earned 1,996 career rushing yards before the Army game, second among Navy slotbacks behind Shun White. Sophomore fullback Noah Copeland ran for 694 yards during the season. Typical of triple-option teams, the Midshipmen rarely threw the ball, attempting 143 passes in their first eleven games. Keenan Reynolds, the third freshman to start at quarterback in the team's history, started every game after the team's win against the Air Force Falcons, rushing for 628 yards on 140 carries during the season. In the win over Army, Reynolds completed 10 of 17 passes for 130 yards and received the game's Most Valuable Player award. With 758 career receiving yards, Gee Gee Greene needed 73 more in the bowl to pass Reggie Campbell's record for most receiving yards by a Navy slotback. Arizona State offense Using a balanced spread offense system, the Sun Devils came into the game with the third-best scoring offense in the Pac-12 Conference. Sophomore quarterback Taylor Kelly led the team with 2,772 passing yards and 25 touchdowns. He also threw nine interceptions, seven of them during the Sun Devils' four-game losing streak. The team's receiving corps included six players with at least 300 yards for the season. Senior running back Cameron Marshall led the team in rushing yards with 524, but his role in the team's offense had decreased since the previous season. Junior running back Marion Grice ran for 520 yards during the season and added 406 receiving yards. Grice's status for the game was in doubt when he took leave from the team after his brother's death, but it was later announced that he expected to play in the game. Several Sun Devils had opportunities to set school records. Kelly entered the game with a 65.9-percent pass-completion rate for the season, and was likely to break the school record set by Brock Osweiler during the 2011 season. He was also likely to break Rudy Carpenter's single-season passing-efficiency record, set during the 2007 season.
Tight end Chris Coyle was likely to tie (or break) the school records for receptions and touchdowns by a tight end in a season, both held by Zach Miller. Punter Josh Hubner also had an opportunity to break the single-season record for average yards per punt. Defensive matchups Navy defense The Midshipmen came into the game ranking 55th in yards allowed per game and 31st in scoring defense, allowing opponents an average of 22 points per game. The team ranked 94th and 102nd in the nation in sacks and tackles for loss, respectively. Immediately after returning from the Army–Navy Game, 11th-year Midshipmen defensive coordinator Buddy Green began preparing his team's 3–4 defensive scheme for the Sun Devils' offense. Although the Midshipmen had faced similar spread offenses earlier in the season, Green noted the Sun Devils' balance between running and passing plays: "They do some things that are similar to a lot of the teams we play, but I don't think there has been one opponent that incorporates all the elements that [Arizona State] has in its package." Linebacker Keegan Wetzel led the team in tackles. Although the team's defensive secondary was plagued with injuries early in the season, the unit found some stability by its final regular-season game. After the team's win against the Texas State Bobcats, cornerback Kwazel Bertrand was named the FBS Independent Defensive Player of the Week. After an early-season series of personal issues and injuries depleted the Midshipmen defensive line, Navy had established a rotating system of 11 linemen to keep players fresh during games. The Midshipmen also used unconventional position moves to adapt to opponents: before the bowl game, Green moved defensive end Danny Ring to nose tackle to better handle opponents' running backs. Arizona State defense The Sun Devils' defense ranked 26th in the nation in total defense, second in tackles for loss and second in sacks; however, it ranked 74th in rushing defense, allowing opponents an average of 172 rushing yards per game. The team's defensive squad was led by players such as Keelan Johnson (11th in the nation in interceptions), Carl Bradford (16th in the nation in tackles for loss and 17th in sacks), Brandon Magee (third in the nation in solo tackles) and defensive tackle Will Sutton (fifth in the nation in tackles for loss, and 13th in sacks). Sutton, who forced three fumbles and broke up five passes, was voted the Pat Tillman Defensive Player of the Year in the Pac-12 Conference and was named a consensus All-American. Before the game, Sutton told reporters he was uncertain whether he would forgo his senior season and enter the 2013 NFL Draft after the game. Pregame media attention focused on the ability of Arizona State's defense to stop Navy's triple-option offense. ESPN.com analyst Kevin Gemmell wrote, "Watching Sutton... and the rest of the Sun Devils defense square off with the Navy offense is going to be one of the more fascinating chess matches of the postseason". Terese Karmel wrote for SportsPageMagazine.com, "The key to the game could be the match-up between the Sun Devils' defense (second in the nation in sacks with 48) against the Naval Academy's young quarterback, Keenan Reynolds". After the Sun Devils accepted the invitation, head coach Todd Graham began to prepare the team to face Navy's run-heavy offense.
Graham spoke about preparing defensively for Navy's triple-option offense. Defensive coordinator Paul Randolph said the team would not focus solely on stopping the run, citing Reynolds' recent passing performance as a reason for emphasizing discipline in the team's defensive secondary. Predictions Most sports analysts predicted that Arizona State would win the game. All six college-football commentators surveyed by CBSSports.com predicted an Arizona State victory; of eight analysts surveyed by River Region Sports, six predicted that the Sun Devils would win the game and two favored the Midshipmen. Basing his prediction on the Sun Devils' balanced offensive scheme, NBC Sports writer John Tamanaha predicted that Arizona State would defeat Navy 42–23. The Sporting News predicted that if Arizona State could avoid turnovers, it would win the game. Analyst Will Harris predicted a 42–24 victory for the Sun Devils on ESPN.com, citing Todd Graham's bowl record and Navy's undersized defense as reasons for his confident projection. Phil Steele and Sports Illustrated writer Stewart Mandel also predicted that Arizona State would win, rating the game among their most-confident predictions. Point-spread bettors likewise favored the Sun Devils. Ticket sales Each team was required to purchase a minimum of 11,500 tickets for the game. Navy's athletics department began selling tickets priced from $25 to $75 in November. The Arizona State athletics department began selling tickets in December for $50 to $85. As of December 4, tickets were selling on the secondary market for an average of $99. Both schools campaigned to increase fan turnout at the game. Arizona State sold $75 travel packages to students, which included a game ticket, bus transportation and hotel accommodations. Navy allowed donors to purchase tickets to the game for active military personnel and veterans. On December 17, bowl director Gary Cavalli said that Navy had sold over 10,000 tickets. Although Arizona State aimed to sell another 10,000, the school had sold about 5,000 by December 19. Cavalli estimated that the game would sell a total of about 35,000 tickets. For every ticket sold, bowl sponsors donated a meal to the San Francisco Food Bank, the St. Anthony Foundation, and the Glide Foundation. Game summary The 2012 Kraft Fight Hunger Bowl began at 1:00 p.m. PST on December 29, 2012 at AT&T Park in San Francisco, California. Since AT&T Park is the home of the San Francisco Giants, the stadium had to be converted for the bowl game, and both teams shared the same sideline. As the designated visiting team, the Navy Midshipmen wore white jerseys, pants, and helmets; the Arizona State Sun Devils wore black jerseys, white pants and white helmets as the home team. The Sun Devils' uniform also included a black-and-white circular helmet sticker with the number 57 to honor Emerson Harvey, who in 1937 became the school's first African American student athlete. In a pregame ceremony, Will Sutton received the 2012 Pat Tillman Defensive Player of the Year award as the best defensive player in the Pac-12 Conference. Before kickoff, the Midshipmen marched onto the field in service dress. The Glide Memorial Church Ensemble performed "The Star-Spangled Banner" with an American flag. At the end of the national anthem, four United States Navy F/A-18 Hornet aircraft flew over the stadium. Arizona Senator John McCain performed the game's coin toss with a chocolate and vanilla Oreo (representing the game's sponsor, Kraft Foods).
Media coverage The game was televised on ESPN2, with Dave Pasch providing the play-by-play, Brian Griese providing color commentary and Jenn Brown reporting from the sidelines. About 1.1 million people watched the game on television, giving it a Nielsen rating of 0.7. Live online streaming was available through WatchESPN. ESPN struck a deal with Twitter to provide game highlights through expanded tweets. The game was also broadcast on many radio stations; Touchdown Radio featured Roxy Bernstein with the play-by-play and Gino Torretta with the color commentary. It was also available on Sirius and XM satellite radio on SiriusXM channel 91. Each school's own radio network covered the game as well. Navy's broadcast team consisted of Bob Socci providing play-by-play, Omar Nelson handling analysis, and Pete Medhurst handling sideline coverage and pre- and post-game coverage; the broadcast was aired on the Navy Sports Network. Arizona State's IMG Sports Network covered the game for the Sun Devils, with Tim Healey, Jeff Van Raaphorst, and Doug Franz providing play-by-play, analysis, and sideline coverage, respectively. Television coverage began during Arizona State's first play from scrimmage, because a basketball game between the University of North Carolina Tar Heels and the UNLV Runnin' Rebels had run beyond its allotted time slot. First quarter The game began when Navy's Colin Amerau hit the opening kickoff into the end zone for a touchback. Beginning at the 25-yard line, Arizona State quarterback Taylor Kelly completed an eight-yard pass to receiver Jamal Miles. Running back Cameron Marshall ran the ball on the following two plays, picking up four and seven yards and the team's opening first down on the latter. Kelly passed the ball to Miles on the following play, picking up five yards. On third-and-one, running back Marion Grice rushed for 18 yards, picking up the first down. Grice, whose brother had been murdered the previous week and who, as Sports Illustrated put it, was "playing with a heavy heart," ran the following two plays, picking up a total of 17 yards and another first down. Arizona State scored on the following play, on a 16-yard pass from Kelly to Rashad Ross. Kicker Alex Garoutte converted the extra point, then kicked off to Navy's Gee Gee Greene, who returned it 18 yards. Navy gained a total of two yards on its first two plays, on runs by quarterback Keenan Reynolds and fullback Noah Copeland. On the following play, Reynolds was sacked by linebacker Carl Bradford, and punter Pablo Beltran came out and punted the ball to Arizona State's 40-yard line. Reynolds landed awkwardly on his elbow during the sack, and afterward conferred with trainers on the sidelines. Arizona State began its second drive of the game with a nine-yard run when Kelly faked a hand-off and kept the ball. On the next play, freshman running back D. J. Foster carried the ball for a one-yard gain, picking up a first down and moving Arizona State to midfield. Kelly ran the ball again on the following play, faking a hand-off to Grice before getting to the outside to pick up 20 yards and another first down. Following a dropped pass by tight end Chris Coyle, Kelly completed back-to-back passes to Foster for gains of three and eight yards, respectively. In Navy's red zone for the second time, Grice picked up nine yards for ASU. After Grice's gain, Navy took its first time-out of the half.
Grice received the ball again on the next play and took it through a gap in the offensive line for a ten-yard touchdown, the Sun Devils' second of the game. Garoutte made the extra point, and Navy's Greene returned the kickoff to the 25-yard line. Navy began its second drive with a two-yard run by Reynolds, who fumbled the ball when tackled but was ruled down by contact by the side judge. After a six-yard completion from Reynolds to Greene, Reynolds kept the ball again on an option run and picked up only a single yard. Faced with a short fourth down, Reynolds pitched the ball to Greene, who ran it to the outside for an eleven-yard gain, picking up an important first down. On the following play, Copeland received the hand-off and rushed for a three-yard gain. Navy brought in backup sophomore running back Geoffrey Whiteside, who picked up three yards and moved the Midshipmen into ASU territory for the first time. After that, Reynolds attempted a hurried pass to Greene, who dropped the ball. The offense caught a break, however, when Arizona State received a fifteen-yard penalty for roughing the passer due to head contact with Reynolds. With a new set of downs, Copeland again rushed for a three-yard gain. Greene followed him with a one-yard pick-up, but on third down, Reynolds threw a screen pass to Copeland, who was tackled for a one-yard loss. Navy again attempted a fourth-down conversion, but turned the ball over on downs after Greene dropped a pass from Reynolds in the end zone. The Sun Devils took over possession at their own thirty-one, beginning with an incomplete pass from Kelly to Grice. ASU followed the play with a hand-off to Grice, who took the ball eighteen yards for a first down. Grice ran the ball on the following play, picking up five yards and moving the Sun Devils into Navy territory. D. J. Foster was injured on the next play in a head-to-head collision while trying to catch a pass from Kelly, and Arizona State accepted a fifteen-yard penalty for the helmet-to-helmet hit. Foster suffered a concussion on the play and had to leave the game. Following the penalty, Kelly kept the ball and ran for an eleven-yard pickup, again getting his team into the red zone. ASU's following two plays both resulted in gains of two yards, one on a run by Grice and the other on a pass from Kelly to Coyle. Facing third and long, Kelly threw to Grice, who picked up 14 yards and the first down. Now two yards from the end zone, Grice was stopped immediately on a run, managing only to fall forward for a yard's gain. Kelly picked up the remaining yard on a keeper on the next play, scoring the Sun Devils' third touchdown of the quarter. Garoutte made the extra point and kicked off to Greene, who managed to return it only to his own sixteen. Navy ended the quarter with a two-yard loss on a run by Reynolds.
Reynolds kept the ball on the following two plays, giving Navy gains of eight yards and one yard, respectively. Facing third-and-one from the Sun Devils' nine-yard line, Navy used its second time-out, stopping the game clock at 10:43. Reynolds ran the ball for a third straight time, getting Navy the first down. After another rush by Reynolds, ASU took its first timeout of the half, pausing the clock at 9:30. The Midshipmen returned to the air following the timeout; after dodging an unblocked Arizona State defender, Reynolds completed a pass to receiver Matt Aiken in the end zone for a three-yard touchdown. Kicker Nick Sloan converted the extra point. Navy brought in kicker Colin Amerau for the kickoff; despite a deep kick, back to the ASU four-yard line, the Sun Devils started their next drive with the best field position of the half after Miles managed a 41-yard return. Taking advantage of the situation, Arizona State scored just five plays later. The Sun Devils began the drive with a three-yard rush by Cameron Marshall, then went to the air on the following two plays, both of which resulted in major yardage gains: the first was a sixteen-yard completion from Kelly to Coyle, while the second, to Marshall, picked up 20 yards. Already in Navy's red zone, the Sun Devils caught a break on the next play, when Navy was called for a substitution infraction. After accepting the five-yard penalty, Kelly completed a third straight pass, this time to receiver Alonzo Agwuenu for an eleven-yard touchdown. Now with a 28–7 lead, Garoutte completed the extra point and kicked off to Greene, who returned the ball to the 21-yard line. Navy began its drive with back-to-back rushes by Geoffrey Whiteside, which went for gains of sixteen and two yards, including a first down on the former. Copeland then received the ball on the following two plays, both of which were also runs. After a short gain on the first, Copeland gave the Midshipmen a first down on the second, an eleven-yard pickup. After that, Reynolds kept the ball and managed eight yards, enough to move Navy into Sun Devils territory. Swain came in for the next play and took the hand-off for a six-yard gain and a first down. The ball went back to Whiteside on the next play for a two-yard pickup, which Reynolds lost on a rush the following play. Stuck facing third-and-ten, Navy gave the ball back to Copeland, who got the first down on a thirteen-yard rush. In ASU's red zone for the second consecutive drive, Navy went back to Copeland, who moved the team forward three yards. Aiken received the hand-off on the next play and took it for a six-yard gain. Now facing third-and-one, Reynolds attempted to run the ball on the team's twelfth consecutive rushing play. He lost three yards, setting up Navy for a short field goal attempt instead of a touchdown opportunity. The Midshipmen then mismanaged the clock and were penalized five yards for delay of game. Nick Sloan missed the thirty-four-yard field goal attempt on the following play, and Navy turned the ball over to the Sun Devils for the second time. Arizona State, starting on its own twenty after Sloan's miss, was again able to capitalize on a Midshipmen mistake: Navy was called for a fifteen-yard pass-interference penalty, which the Sun Devils accepted. Now at his own thirty-five, Kelly quickly completed a thirteen-yard pass to Miles for another first down.
Already near Navy territory, Arizona State produced its largest play of the game, a fifty-two-yard touchdown pass from Kelly to Rashad Ross. Garoutte came out to convert the extra point, but missed the kick, leaving the score at 34–7. Navy began its drive after a short kickoff from Garoutte was returned by Whiteside to the twenty-six. Following ASU's three-play, nineteen-second drive, the Midshipmen had just fifty-five seconds remaining in the half. Reynolds started Navy's drive with two consecutive throws; the first went for a single yard to Brandon Turner, while the second was an incompletion, which at least stopped the game clock. Already stuck with third-and-nine, Reynolds decided to run the ball, picking up thirteen yards and the first down. Now with a little time to plan due to the change of downs, Navy decided to return to passing plays. However, both of Reynolds' attempts fell incomplete. Faced with very little time left in the half and third-and-ten, the Midshipmen handed the ball off to Greene for six yards and simply ran out the clock. Third quarter Navy received the ball to start the second half. Garoutte hit the kickoff deep into the end zone, and Greene simply took a knee, resulting in a touchback. Reynolds opened the Midshipmen's drive with a seven-yard run up the middle of the field. On the following play, Reynolds overthrew receiver Brandon Turner while under pressure. Arizona State cornerback Robert Nelson, Jr. intercepted the pass at his own forty and tried to return it, but while avoiding Turner, he collided with a teammate and lost six yards. Beginning at their own thirty-four, ASU opened the drive with a hand-off to Marshall, who ran it down the middle for six yards. The Sun Devils ran the same play again, this time for a first down after Marshall forced his way through several tacklers. Kelly attempted to run the ball on the next play, but was immediately pressured and forced to dump the ball off to Coyle. Now at midfield, Kelly faked a hand-off to Marshall and threw the ball deep to a wide-open Ross, who outran a tackler and scored another ASU touchdown. Garoutte converted the point after, then hit a deep kick to Greene; Navy's return specialist barely managed to get out to the twenty before being hit by several Arizona State defenders. Reynolds began the Midshipmen's drive with a five-yard rush after keeping the ball on an option run. Linebacker Brandon Magee, who made the tackle on Reynolds, stayed on the ground after the play, and the clock was stopped. After he was assisted to the sideline, it was discovered that he had severely injured his elbow and was unable to return. Play resumed, and on the following play Reynolds quickly pitched to Greene, who avoided several tacklers and ran down the sideline for a twenty-yard gain. Copeland took the following hand-off and forced his way forward for six yards. Reynolds kept it on the next play and picked up a first down for the Midshipmen. However, Reynolds was sacked on each of the two following plays, taken down both times by Will Sutton. Facing third-and-twenty-three, Reynolds simply ran the ball to give the punter better field position. Pablo Beltran punted the ball deep into ASU territory, where it was fielded out of bounds by Jamal Miles at the seven. The Sun Devils' next drive added another touchdown to make the score 48–7.
Navy's Gee Gee Greene returned the ensuing kickoff 95 yards for a touchdown; three plays later, Arizona State's Marion Grice ran 39 yards for another Sun Devils touchdown to make the score 55–14. ASU needed one play to reach the end zone again on its next drive, and the third quarter ended with the Sun Devils ahead 62–14. Fourth quarter Arizona State began the fourth quarter with a turnover on downs, its first drive of the game not ending in a touchdown. During the next drive, Keenan Reynolds was tackled hard after pitching the ball and did not play for the rest of the game. Freshman fullback Chris Swain scored Navy's first offensive points of the second half. Using mostly second-string offensive players, the Sun Devils committed their only turnover of the day when quarterback Michael Eubank fumbled the ball as he was injured on a third-down play near midfield. The Midshipmen scored the final touchdown of the game when quarterback Trey Miller threw a 23-yard pass to Brandon Turner with 5:16 left in the game. Arizona State ran out the clock on its final possession, and the game ended with a final score of 62–28. Scoring summary Final statistics With five tackles and 2.5 sacks, Will Sutton was named the game's most valuable defensive player. Marion Grice, with 159 yards rushing and two touchdowns, was named the game's offensive MVP. Although the Midshipmen led the game in time of possession, Arizona State's offense needed a little under nine minutes of game time to score five touchdowns in the first half, and its first nine touchdowns consumed a total of just 13:38. The Sun Devils set 20 Kraft Fight Hunger Bowl records, including most total yards gained and largest margin of victory. With 36 first downs, the team also tied the NCAA Division I bowl-game record set by the Oklahoma Sooners in the 1991 Gator Bowl and the Marshall Thundering Herd at the 2001 GMAC Bowl. Arizona State quarterback Taylor Kelly completed 17 of 19 passes for 268 yards and four touchdowns, setting a school record for completion percentage in a season; with four receptions, tight end Chris Coyle set a school record for receptions by a tight end in a season with 57. Midshipmen running back Gee Gee Greene set a Kraft Fight Hunger Bowl game record with his 95-yard kickoff return, also a Navy bowl-game record. After the game The win brought the Sun Devils' record to 8–5. Coach Todd Graham was pleased with his team's offensive production, praising Kelly and offensive coordinator Mike Norvell. The team's overall success surprised ESPN.com Pac-12 analyst Ted Miller, who called his preseason prediction that ASU would finish 11th in the conference his "worst projection" of the spring. Arizona State punter Josh Hubner accepted an invitation to play in the 2013 East–West Shrine Game. In January 2013, Will Sutton announced that he would return to play for Arizona State during his senior season before entering the 2014 NFL Draft. The loss brought Navy's record to 8–5. Midshipmen back Gee Gee Greene played in the Raycom All-Star Football Classic on January 19 in Montgomery, Alabama, and receiver Brandon Turner played in the Casino Del Sol All-Star game on January 11 in Tucson, Arizona. References External links Game summary at ESPN Box score via newspapers.com Kraft Fight Hunger Bowl Redbox Bowl Arizona State Sun Devils football bowl games Navy Midshipmen football bowl games December 2012 sports events in the United States 2012 in San Francisco
41925286
https://en.wikipedia.org/wiki/Kiva%20Software
Kiva Software
Kiva Software was the leading provider and pioneer of internet application server software. Kiva Software released the industry's first application server in January 1996, offering companies a robust platform on which to develop and deploy transaction-oriented business applications on the Web. Kiva's customers included Bank of America, E-Trade, Travelocity, Internet Shopping Network, Hong Kong Telecom and Pacific Bell Internet. Headquartered in Mountain View, California, Kiva Software was a privately held company (1994–1997) backed by venture capitalists, including Weiss, Peck & Greer, Greylock, Discovery Ventures, Sippl MacDonald Ventures, Norwest Venture Capital, and Trinity Ventures. History Kiva Software was founded in May 1994 by Keng Lim, its chairman and CEO, who saw the opportunity to leverage the internet as a platform for running business applications. In January 1996, Kiva Enterprise Server was launched. It was the first Java application server to market, and it also supported application development in C++. By mid-1997, the company had shipped two major releases of Kiva Enterprise Server; grown to over 100 employees at five field offices; raised US$13.9 million in capital investment over two rounds of funding; and, according to Lim in a Red Herring interview, was expecting to go public by the middle of 1998, barring an acquisition. In December 1997, Kiva Software was acquired by Netscape Communications as an "important strategic technology for linking people and businesses together through Intranets, Extranets and the Internet." Netscape issued 6.3 million shares of Netscape stock to purchase 100 percent of Kiva stock and options, a deal valued at US$180 million. Kiva Enterprise Server was folded into Netscape's suite of server products and became Netscape Application Server. In 1999, America Online (AOL) acquired Netscape in a stock swap valued at US$10 billion, and formed a partnership with Sun Microsystems. As part of the three-way deal, Sun Microsystems licensed Netscape's server software for $350 million over three years. The Sun-Netscape alliance rolled out a new brand name, iPlanet, which was used for all the server products. The Netscape Application Server was chosen as the code base for the iPlanet Application Server, even though there had been talk that the iPlanet Application Server would be a combination of both Netscape Application Server and NetDynamics Application Server. NetDynamics, a former competitor of Kiva Software, had been acquired by Sun in 1998, prior to the Sun-Netscape alliance. In 2002, when the three-year alliance between AOL/Netscape and Sun ended, per the agreement, Sun took sole control of the iPlanet software. iPlanet was absorbed into Sun, and iPlanet Application Server was rebranded as Sun ONE Application Server and, later, the Sun Java System Application Server. Products Kiva Software's products included Kiva Enterprise Server, the platform on which web applications were deployed and managed; Kiva Application Builder, the graphical development tool for building applications; and the Kiva SDK (available in both Java and C++), which packaged foundation classes and methods. Awards and recognition In 1997, Kiva Software was selected as one of the industry's top privately held technology companies by Red Herring, ComputerWorld, and Data Communications magazine. That same year, Kiva Enterprise Server won PC Week's Best of Comdex award for "Best Internet Software" at Comdex.
As Kiva Enterprise Server evolved into Netscape Application Server and then into iPlanet Application Server, the product continued to lead technologically (although not in market share) in the increasingly competitive and crowded marketplace of application servers. In 2000, Netscape Application Server 4.0 achieved the highest rating in a head-to-head comparison of application servers by independent researcher D.H. Brown Associates. In 2001, Standard & Poor's (S&P) concluded that iPlanet Application Server was the best choice for running its web services business after conducting two rounds of evaluations. References Defunct software companies of the United States Software companies based in the San Francisco Bay Area Companies based in Mountain View, California Software companies established in 1994 Technology companies disestablished in 1997 1994 establishments in California 1997 disestablishments in California Defunct companies based in the San Francisco Bay Area
21162302
https://en.wikipedia.org/wiki/Monosurround
Monosurround
Monosurround is a Berlin-based electronic music and live-act duo, formed in 1999. It is made up of Erik Schaeffer and Ramtin Asadolahzadeh. Since 2002, they have released EPs and albums in France, Germany, and Japan on Vitalic's record label Citizen Records. In 2010 they started their own record label, MS Records, as a platform for their music and the concept behind their "maximalism". History Ramtin Asadolahzadeh and Erik Schaeffer were introduced in Berlin in 1999. They started to work together in the same month. Their first joint project was the composition and co-production of music for the German cinema releases Ants in the Pants (2000) and Liebesluder (2000). Combining their respective musical influences, in the summer of 2001 Monosurround moved towards composing electronic music. Their first release, that same year, was “I Warned You Baby”; it was an immediate success, becoming the official anthem of the SonneMondSterne Festival 2002 in Saalburg, Thuringia. The track samples the American jazz singer Spanky Wilson. Through the success of “I Warned You Baby”, connections arose between Monosurround and their peers. They remixed many artists, including Sono, Phil Fuldner, Da Hool, TISM, Northern Lite and Glamour to Kill. They were in turn remixed by artists including The Gigler, Shir Khan, The Raccoon Brothers, DJ SPUD and Malente. At that time, parallel to regular live performances, Monosurround started working on their first studio album, Hello World. In 2004 the duo stopped performing live in order to concentrate their energies on the development and production of the album. The EP Borschtchick was released in Germany in 2005 through Moonbootique Records, owned by Moonbootica. In January 2006 the Cocked, Locked EP was released on techno DJ Vitalic's record label Citizen Records in France. This EP includes the tracks “Borschtchick” and “Cocked, Locked and Ready to Rock”. Having successfully released these EPs into the French market, in 2006 and 2007 Monosurround played live again alongside acts such as Vitalic and The Hacker. The album Hello World was released across different countries at the end of 2008: France in September, Germany in November, and Japan in December. Style The Monosurround sound has changed greatly since the duo's formation in 1999. Asadolahzadeh and Schaeffer come from different musical backgrounds: 1960s and 1970s soul funk, and techno and classical music, respectively. Their first two years together saw performances including up to nine instrumentalists on stage playing 1960s crossover big beat. In 2001 a notable change in their sound occurred: Asadolahzadeh and Schaeffer moved towards electronic music in their composition, bringing their sound closer to electronic dance music and techno. The major stylistic turning point came in 2003 with the creation of their track “We”. The signature sound from this point onwards combined epic vocal/choral textures with hard industrial noises. These features of the music bear comparison to the sound of the French band Justice, which arose at the same time. From this point onwards Asadolahzadeh and Schaeffer worked under the self-imposed description “maximalism”, a theme that motivates their music, their artwork, and the philosophy behind their work. Festival appearances Through their relationship with Citizen Records, Monosurround played alongside Vitalic and The Hacker at the Les Plages festival in France in 2006 and 2007. This tour with Vitalic continued on to Barcelona to play the Razzmatazz complex in 2007.
Their last big festival appearance was at the Sónar festival in 2008, playing alongside The Hacker. Trivia Early in 2008, the bottled mineral water brand Perrier used the track “Cocked, Locked, and Ready to Rock” in a TV advertising campaign in Germany. Since early 2010 Monosurround has released its music on its own label, MS Records, which is dedicated to releasing and spreading maximalist music. Discography Albums 2007 Early Days (Layb - Berlin Artists) 2008 Hello World (Citizen Records) 2010 Hello World (MS Records) Maxis and EPs 2002 I Warned You Baby (1st Decade Records) 2002 I Warned You Baby - Remixes (Superstar Records) 2002 I Warned You Baby (Superstar Records) 2003 Creepy Guys EP (1st Decade Records) 2003 Bo Bullet mini EP (1st Decade Records) 2003 Bo Bullet EP (1st Decade Records) 2004 We EP (1st Decade Records) 2005 Borschtchick (Moonbootique Records) 2006 Cocked Locked EP (Citizen Records) 2006 Cocked, Locked, Ready to Rock (Hammarskjöld) 2008 Cocked, Locked Ready to Rock – Summarized (Citizen Records) 2010 Hello World [Remixed] (MS Records) 2010 "All Night Long" (MS Records) 2011 "REworks" (MS Records) Remixes 2002 TISM - Defecate On My Face (TISM) 2002 Neonman - Future Is Pussy (1st Decade Records) 2003 Northern Lite - My Pain (1st Decade Records) 2003 Sono - Heading For (Island Zeitgeist Records) 2003 Phil Fuldner - Never Too Late (Kosmo Records) 2004 Glamour To Kill - Shake Your Body, Baby (Pale Records) 2005 Da Hool feat. Jackie Bredie - Bow Down (Kosmo Records) 2006 Electrixx - Second Lesson (Hadshot Haheizar) 2007 ProCon - Delia (Cochon Records) 2008 Mokkasin - Elazerhead (LeGrain Records) 2009 Depeche Mode - Peace 2010 Music To Drive Tanks To - Granite Eyes 2010 Mujik - Arma Mortal 2011 Nixu Zsun - Exploring Bulgaria External links Official Website: http://www.monosurround.com German electronic music groups Musical groups from Berlin 1999 establishments in Germany Musical groups established in 1999
45256228
https://en.wikipedia.org/wiki/Venu%20Govindaraju
Venu Govindaraju
Venu Govindaraju is an Indian-American computer scientist whose research interests are in the fields of document image analysis and biometrics. He presently serves as the Vice President for Research and Economic Development at the University at Buffalo. He is a SUNY Distinguished Professor of Computer Science and Engineering in the School of Engineering and Applied Sciences at the University at Buffalo, The State University of New York, Buffalo, NY, USA. Education Govindaraju received his undergraduate degree with honors (BTech) in computer science from the Indian Institute of Technology, Kharagpur, India in 1986 and his master's and Ph.D. degrees in computer science in 1988 and 1992 from the University at Buffalo, The State University of New York, Buffalo, NY, USA. Awards Govindaraju is a fellow of the Association for Computing Machinery, the IEEE (Institute of Electrical and Electronics Engineers), the AAAS (American Association for the Advancement of Science), the IAPR (International Association for Pattern Recognition), and the SPIE (International Society for Optics and Photonics). He is the recipient of the 2001 International Conference on Document Analysis and Recognition Young Investigator award, the 2004 MIT Global Indus Technovator Award, the 2010 IEEE Technical Achievement Award, the Indian Institutes of Technology (IIT) Distinguished Alumnus Award (2014), and the 2015 IAPR/ICDAR Outstanding Achievements Award. He was named a Fellow of the National Academy of Inventors in 2015. Research career He has spent his entire career at the University at Buffalo. After graduating with a Ph.D. from the University at Buffalo, he worked from 1992 to 2003 as a research scientist at the Center of Excellence for Document Analysis and Recognition (CEDAR) at the University at Buffalo, founded and managed by Sargur Srihari. He became Associate Professor in the Department of Computer Science and Engineering at the University at Buffalo in 2000, a full Professor in 2002, and a SUNY Distinguished Professor, the highest faculty rank in the State University of New York system, in 2010. Govindaraju was the founding director of the Center for Unified Biometrics and Sensors and has remained its director since its inception in 2003. References External links Center for Unified Biometrics and Sensors 1964 births Living people Indian academics IIT Kharagpur alumni University at Buffalo faculty Fellows of SPIE Fellows of the American Association for the Advancement of Science Fellows of the Association for Computing Machinery Fellows of the International Association for Pattern Recognition Fellow Members of the IEEE People from Vijayawada University at Buffalo alumni
20779450
https://en.wikipedia.org/wiki/John%20Canny
John Canny
John F. Canny (born 1958) is an Australian computer scientist, and Paul E. Jacobs and Stacy Jacobs Distinguished Professor of Engineering in the Computer Science Department of the University of California, Berkeley. He has made significant contributions in various areas of computer science and mathematics including artificial intelligence, robotics, computer graphics, human-computer interaction, computer security, computational algebra, and computational geometry. Biography John Canny received his B.Sc. in Computer Science and Theoretical Physics from the University of Adelaide in South Australia, 1979, a B.E. (Hons) in Electrical Engineering, University of Adelaide, 1980, and an M.S. and Ph.D. from the Massachusetts Institute of Technology, 1983 and 1987, respectively. In 1987, he joined the faculty of Electrical Engineering and Computer Sciences at UC Berkeley. In 1987, he received the Machtey Award and the ACM Doctoral Dissertation Award. In 1999, he was the co-chair of the Annual Symposium on Computational Geometry. In 2002, he received the American Association for Artificial Intelligence Classic Paper Award for the most influential paper from the 1983 National Conference on Artificial Intelligence. As the author of "A Variational Approach to Edge Detection" and the creator of the widely used Canny edge detector, he was honored for seminal contributions in the areas of robotics and machine perception. See also Canny edge detector Existential theory of the reals Kinodynamic planning Publications Canny has published several books, papers and articles. A selection: 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, 1986, pp. 679–698. 1988. The Complexity of Robot Motion Planning. The ACM Distinguished Dissertation Series, Cambridge, MA: The MIT Press, 1988. 1993. "An opportunistic global path planner". With M. C. Lin. In: Algorithmica vol. 10, no. 2-4, pp. 102–120, Aug. 1993. 2007. "MultiView: Improving trust in group video conferencing through spatial faithfulness" (Best Paper Prize). With D. T. Nguyen. In: Proc. 2007 SIGCHI Conf. on Human Factors in Computing Systems (CHI '07), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 1465–1474. References External links John F. Canny Homepage at UC Berkeley Researchers in geometric algorithms MIT School of Engineering alumni UC Berkeley College of Engineering faculty 1958 births Living people University of Adelaide alumni Australian computer scientists
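The edge detector that bears Canny's name remains a standard primitive in image-processing libraries. Below is a minimal usage sketch with OpenCV; the input file name example.jpg is a placeholder, and the two thresholds are arbitrary but typical values:

```python
import cv2

# Load an image in grayscale and run the Canny edge detector.
# The two thresholds drive the detector's hysteresis step: gradients above
# threshold2 start edges, and weaker gradients down to threshold1 extend
# them only when they connect to a strong edge.
image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, threshold1=100, threshold2=200)
cv2.imwrite("edges.png", edges)
```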
39214734
https://en.wikipedia.org/wiki/Xait
Xait
Xait (pronounced "excite") is a software development company specializing in web-based database services. The company provides its customers with software for document publishing and collaboration. Its product, XaitPorter, is collaborative-writing software used by clients worldwide to create bids, proposals, financial reports, contracts and other business-critical documentation. In Norway the majority of oil operators use XaitPorter for writing their drilling license applications. At the 22nd licensing round in Norway, 100% of the oil & gas operator licenses on the Norwegian continental shelf were awarded to XaitPorter clients. The company has been ISO/IEC 27001 certified since 2016 and has received re-certification in the following years. Xait is headquartered in Sandnes, Rogaland, Norway with a sales and support office in Austin, Texas, United States. Managed Collaborative Authoring Process The Managed Collaborative Authoring Process is a document creation technique and term invented by Xait in 2001, by which a structured and controlled collection of tasks and events creates repeatable business value through quality, efficiency and security improvements when applied to a group of writers, reviewers and/or approvers. The reuse of documentation, and a means to improve the processes surrounding it, has been an issue since the late 1980s/early 1990s, when word processing was embraced by the masses as the main tool for document production. David M. Levy wrote a paper on this back in 1993, highlighting some of the issues regarding document reuse: "The world, though continually changing, is changing incrementally. Much remains the same (unchanged) at any one time, at least at the granularity of description we typically care about. This means that documents only need to be updated incrementally; and incremental updating is more easily achieved when existing material is reused." Allowing the company to structure and control the rate and quality of the reuse of content, while keeping in line with compliance and security, gives an approach to best practice when reusing content. XaitPorter is currently one of the commonly used document collaboration software packages that is in line with the managed collaborative authoring process. History Early years The company was founded in 2000 in Stavanger. Xait initially developed a content management system called XaitExposure and a customer relationship management product called XaitExonerate. In 2002, however, the company announced collaborative writing software called Publish-As-You-Go, later renamed XaitPorter. Recent years In 2016 Xait received investment from the Nordic venture fund Viking Venture, resulting in an ownership stake of 43%. On December 16, 2019 Xait was awarded 10.8 million NOK by the Norwegian Research Council to improve quality and efficiency in the RFP-to-production lifecycle. In 2020 Xait acquired BlueprintCPQ, a worldwide provider of enterprise-class configure, price and quote software. With the acquisition, Xait added configure, price and quote (CPQ) software to its solution offering. In 2021 Xait made its second acquisition, Privia, a worldwide provider of capture and proposal management-specific solutions to the government contractor market. Awards Xait was awarded "Best Web Collaboration Solution" at the 2014 UP-Start Cloud Awards in San Francisco. Xait was named a Gartner Sample Vendor in the 2014 Content Management Hype Cycle, within Collaborative Authoring tools.
Xait was named a Gartner Cool Vendor in 2013. XaitPorter was awarded "Star of Show" at the 2004 PETEX Conference and Exhibition. Technology The database is built on Oracle Corporation architecture. In May 2013, Xait and Mike Parkinson's GetMyGraphic formed a strategic partnership, allowing users of Xait to integrate GetMyGraphic's database of editable graphics directly into their XaitPorter documents. XaitPorter is offered as a turn-key solution in the form of a cloud-based Software-as-a-Service (SaaS) or as an appliance (Solution-In-A-Box), as well as traditional proprietary software for installation on customer-provided hardware. XaitPorter introduced integration with Salesforce.com, allowing automation of proposals and contracts. XaitPorter implements a Managed Collaborative Authoring Process. Customers The company's customers include: oil & gas/energy companies, maritime ship design companies, private healthcare providers, facilities management companies, life sciences, technology companies, engineering, defense, legal, finance, construction, and standards organisations. See also Project management Document management system Revision control Collaborative editing Mass collaboration References Companies established in 2000 Software companies of Norway Business software companies Norwegian brands
67047269
https://en.wikipedia.org/wiki/Inputting%20Esperanto%20text%20on%20computers
Inputting Esperanto text on computers
There are a number of methods to input Esperanto text when using a word processor or email. All modern email clients and servers accept Unicode as UTF-8 in at least one of the 8bit, quoted-printable or base64 Content-Transfer-Encoding types. Esperanto text will normally be transmitted in UTF-8 with a Content-Transfer-Encoding of either 8bit (if the server supports it) or, failing that, quoted-printable. The Esperanto alphabet is part of the Latin-3 and Unicode character sets, and is included in WGL4. The code points and HTML entities for the Esperanto characters not part of the ISO basic Latin alphabet are: Ĉ U+0108 (&#264;), ĉ U+0109 (&#265;), Ĝ U+011C (&#284;), ĝ U+011D (&#285;), Ĥ U+0124 (&#292;), ĥ U+0125 (&#293;), Ĵ U+0134 (&#308;), ĵ U+0135 (&#309;), Ŝ U+015C (&#348;), ŝ U+015D (&#349;), Ŭ U+016C (&#364;), and ŭ U+016D (&#365;). Time and date format among Esperantists is not standardized, but "internationally unambiguous" formats such as 2020-10-11 or 11-okt-2020 are preferred when the date is not spelled out in full ("la 11-a de oktobro 2020"). Input methods depend on a computer's operating system. Microsoft Windows Adjusting a keyboard to type Unicode is relatively simple (all Windows variants of the Microsoft Windows NT family, such as 2000 and XP, support Unicode; Windows 9x does not natively support Unicode). The Canadian Multilingual Standard layout is preinstalled in MS Windows. The US International layout needs to be modified to enable Esperanto letters. This can be done using Microsoft Keyboard Layout Creator or by using a layout provided for this purpose, e.g. EoKlavaro. EoKlavaro also gives access to many other European language characters. Another, more recent, free download to adapt a Windows keyboard for Esperanto letters is Tajpi - Esperanto Keyboard for Windows 2000 / XP / Vista / 7 / 8 by Thomas James. A drawback is that some configurations can suppress hotkeys; for example, Ctrl+W may type ŭ instead of closing the browser tab. A simple and free utility with all the Esperanto keys already installed is called Esperanto keyboard layout for Microsoft Windows – (QWERTY version); it is available as a free download. A similar tool, Ek, is available without charge; the keyboard can be downloaded by clicking on Instalilo: ek(version#)inst.exe. Ek uses the cx keying convention to produce ĉ. It works with most programs, though there are some it is not compatible with. An inexpensive commercial tool is Šibboleth, a program that can produce every Latin character. It enables composition of ĝ etc. using the ^ dead key (as for French letters), so one does not have to learn new key positions. The ŭ is produced by the combination u followed by #. An "Esperanto-Internacia" keyboard is also available that reassigns the otherwise unused keys Q, W, X and Y, and the sequences DY and TX, to Esperanto letters. Anyone who wants a text editor that is Esperanto-compatible should make sure it supports Unicode, as do Editplus (UTF-8), UniRed and Vim. Linux Since 2009 it has been very easy to add key combinations for accented Esperanto letters to one's usual keyboard layout, at least in GNOME and KDE. No download is required. The keyboard layout options can be modified under System Preferences. The options to choose are "Adding Esperanto circumflexes (supersigno)" and the appropriate keyboard layout (QWERTY or Dvorak). A third-level shift key is also required: under "Key to choose 3rd level", e.g. LeftWin. In older systems it may be necessary to activate Unicode by setting the locale to a UTF-8 locale. There is a special eo_XX.UTF-8 locale available at Bertil Wennergren's home page, along with a thorough explanation of how one implements Unicode and the keyboard in Linux.
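Whatever the operating system, the substitution performed by utilities such as Ek can be illustrated in a few lines of code. The following Python sketch shows the general x-convention technique only; it is not the source of Ek, Tajpi or any other tool named in this article, and the function name is invented for the example. Because x is not a letter of the Esperanto alphabet, the digraphs are unambiguous and can be replaced mechanically:

```python
# Illustrative sketch: convert x-convention digraphs (cx, gx, hx, jx, sx, ux)
# into the corresponding Unicode Esperanto letters.
X_SYSTEM = {
    "cx": "ĉ", "Cx": "Ĉ", "CX": "Ĉ",
    "gx": "ĝ", "Gx": "Ĝ", "GX": "Ĝ",
    "hx": "ĥ", "Hx": "Ĥ", "HX": "Ĥ",
    "jx": "ĵ", "Jx": "Ĵ", "JX": "Ĵ",
    "sx": "ŝ", "Sx": "Ŝ", "SX": "Ŝ",
    "ux": "ŭ", "Ux": "Ŭ", "UX": "Ŭ",
}

def from_x_system(text: str) -> str:
    """Replace x-convention digraphs with Unicode Esperanto letters."""
    out = []
    i = 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in X_SYSTEM:
            out.append(X_SYSTEM[pair])
            i += 2          # consume the whole digraph
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(from_x_system("Ehxosxangxo cxiujxauxde"))  # prints: Eĥoŝanĝo ĉiuĵaŭde
```

Real input utilities differ mainly in where they hook in (a keyboard driver, an editor macro, a clipboard filter) rather than in the substitution step itself.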
If the Linux system is recent, or kept updated, then the system is probably already working with Esperanto keys. For X11 and KDE, it is only necessary to switch to a keyboard layout that has Latin dead keys (for example, the "US International" keyboard) whenever the user wants to write in Esperanto. Some keyboards with dead keys are: In the US International keyboard, the dead circumflex is over the "6" key ("shift-6") and the dead breve is hidden over the "9" key ("altgr-shift-9"). In the Spanish layout, the dead circumflex (^) can be combined with w, s, g, h, j and c to type ŵ, ŝ, ĝ, ĥ, ĵ and ĉ, respectively; it can also be combined with any vowel to type â, ê, î, ô, û and ŷ. The dead key followed by the space bar produces the caret symbol itself (^). In the Brazilian ABNT2 keyboard, the dead circumflex has its own key together with the dead tilde ("shift-~"), near the "Enter" key. The dead breve is hidden over the backslash ("altgr-shift-\") key. In the Portuguese keyboard, the dead tilde key, near the left shift key, has both the dead circumflex and the dead breve. On French and Belgian keyboards, the same dead key (the one to the right of P) used to produce French â ê î ô û ŷ when followed by a vowel will usually also produce ĉ ĝ ĥ ĵ ŝ when followed by the appropriate consonant. The combination that acts as a dead grave (on Belgian keyboards it can be on the top or middle row) is usually a dead breve when Shift is added; use it before hitting u in order to get ŭ. Another option is to use a keyboard layout that supports the Compose key (usually mapped to the right Alt or to one of the Windows keys). Then "compose-u u" will combine the character u with the breve, and "compose-shift-6 s" will combine the character s with the circumflex (assuming "shift-6" is the position of the caret). In GNOME, there exists a separate keyboard layout for Esperanto, replacing characters unused in Esperanto with the non-ASCII Esperanto letters. A separate keyboard layout for Esperanto is available in KDE, too. If necessary, install and use high-quality fonts that have Esperanto glyphs, like Microsoft Web core fonts (free for personal use) or DejaVu (the Bitstream Vera glyphs have the Bitstream Vera license and the DejaVu extensions are in the public domain). There is also an applet available for the gnome-panel called "Character Palette"; one can add the following characters to a new palette for quick placement from the panel menu bar: Ĉ ĉ Ĝ ĝ Ĥ ĥ Ĵ ĵ Ŝ ŝ Ŭ ŭ The Character Palette applet makes for a quick and easy way to add Esperanto characters to a web browser or text document. One need only select the newly created palette and click a letter, and that letter will be on the system clipboard waiting to be pasted into the document.
On macOS systems Esperanto characters can be entered by selecting a keyboard layout from the "Input Sources" pane of "Language & Text" preferences, found in the "System Preferences" application; the pre-installed ABC Extended keyboard layout can be used to type Esperanto's diacritics. When this layout is active, Esperanto characters can be entered with multiple keystrokes using a simple mnemonic device: the 6 key carries the caret character (^), which looks like a circumflex, so Option-6 places a circumflex over the following character. Similarly, B stands for breve, so Option-B adds the breve mark over the next character. One can also download an Esperanto keyboard layout package that will, once installed, function in the same way as other languages' keyboards. When installed, this gives users two different methods of typing. The first, Esperanto, maintains a QWERTY layout, but switches the letters that are not used in Esperanto (q, w, y, and x) for diacritical letters and makes a u into a ŭ if it follows an a or an e. The second method, Esperanto-sc, is more familiar to QWERTY users and allows the user to type in most Latin-scripted languages and Esperanto simultaneously. It treats the keys that take diacritics (a, s, e, c, g, h, u, and j) as dead keys, if a combining character is pressed afterwards—usually the semicolon (;). Both methods are also available using the less common Dvorak keyboard. Swedish Esperantists using Mac OS X can use the Finnish Extended layout, which comes with the OS. Finnish has the same alphabet and type layout as Swedish; the Finnish Extended layout adds functionality just like ABC Extended, only using other key combinations for the breve and the circumflex. Similarly, British users may use the Irish Extended layout, which differs from the ABC Extended keyboard layout in several ways (preserving the simple Option+vowel method of applying acute accents, important for the Irish language, and the £ sign on Shift-3 like the UK layout), but uses the same dead keys as ABC Extended for Esperanto characters. In OS X it is also possible to create one's own keyboard layouts, so it is relatively easy to have more convenient mappings, like for example one based on typing an x after the letter. There is still no integrated solution for typing Esperanto characters with AZERTY keyboards. Dead-circumflex followed by a consonant may or may not work for ĉ ĝ ĥ ĵ ŝ; and if nothing else avails, ù is a tolerable if imperfect approximation for ŭ. References External links Computer input Amiketo is software that supports the Esperanto alphabet in Windows, Mac OS, and Linux Online Esperanto keyboard Esperanto QWERTY keyboard for Windows using spare keys Esperanto GKOS keyboard for Android phones/tablets with genuine support (language option in Tools menu) Tajpi - Esperanto Keyboard for Windows 2000 / XP / Vista / 7 / 8 – free download Unired – Unicode plain text editor for Windows 95/98/NT/2000 (with E-o support) Esperanto Latin-script keyboard layouts Natural language and computing
9751066
https://en.wikipedia.org/wiki/Nick%20McKeown
Nick McKeown
Nicholas (Nick) William McKeown FREng is the SVP/GM of the Network and Edge Group at Intel and a professor in the Electrical Engineering and Computer Science departments at Stanford University. He has also started technology companies in Silicon Valley. Biography Nick McKeown was born April 7, 1963 in Bedford, England. He received his bachelor's degree from the University of Leeds in 1986. From 1986 through 1989 he worked for Hewlett-Packard Labs, in their network and communications research group in Bristol, England. He moved to the United States in 1989 and earned both his master's degree in 1992 and PhD in 1995 from the University of California at Berkeley. During spring 1995, he worked briefly for Cisco Systems, where he helped architect their GSR 12000 router. His PhD thesis was on "Scheduling Cells in an Input-Queued Cell Switch", with advisor Professor Jean Walrand. He joined the faculty of Stanford University in 1995 as assistant professor of electrical engineering and computer science. In 1997, McKeown co-founded Abrizio Inc. with Anders Swahn, where he was CTO. Abrizio was acquired by PMC-Sierra in 1999 for stock shares worth $400 million. He was promoted to associate professor in 2002. He was co-founder in 2003 (with Sundar Iyer) and CEO of Nemo Systems, which Cisco Systems bought for $12.5 million cash in 2005. He became faculty director of the Clean Slate Program in 2006, and was promoted to full professor at Stanford in 2010. In 2007, Casado, McKeown and Shenker co-founded Nicira Networks, a Palo Alto, California-based company working on network virtualization, acquired by VMware for $1.26 billion in July 2012. Research McKeown is active in the software-defined networking (SDN) movement, which he helped start with Scott Shenker and Martin Casado. SDN and OpenFlow arose from the PhD work of Casado at Stanford University, where he was a student of McKeown. OpenFlow is a programmatic interface for controlling network switches, routers, WiFi access points, cellular base stations and WDM/TDM equipment. OpenFlow challenged the vertically integrated approach to switch and router design of the preceding twenty years. McKeown works closely with Guru Parulkar, Executive Director of the Stanford Open Network Research Centre (ONRC) and the Open Networking Lab (ON.Lab). In 2011, McKeown and Shenker co-founded the Open Networking Foundation (ONF) to transfer control of OpenFlow to a newly created not-for-profit organization. Since 2013, McKeown has promoted the idea that network switches should be programmable rather than fixed. A collaboration between TI and Stanford led to PISA (protocol-independent switch architecture), originally published under the name RMT. The P4 language was created to specify how packets should be processed in programmable switches. P4 is an open-source language maintained by P4.org, a non-profit McKeown founded with Jennifer Rexford and Amin Vahdat. McKeown co-founded Barefoot Networks to build and sell PISA switches, to demonstrate that programmable switches can be built at the same power, performance and cost as fixed-function switches. In June 2019, Intel Corporation announced its intent to acquire Barefoot Networks to support its focus on end-to-end networking and infrastructure leadership for its data center customers. Awards and distinctions In 2000, the Institute of Electrical and Electronics Engineers (IEEE) Communications Society Stephen O.
Rice Prize for the best paper in communications theory went to a paper on "Achieving 100% Throughput in an Input-Queued Switch", which McKeown co-authored with Adisak Mekkittikul, Venkat Anantharam and Jean Walrand. The paper addressed the problem of head-of-line blocking using Virtual Output Queues. McKeown holds an honorary doctorate from ETH Zurich. He is a Distinguished Alumnus of Electrical Engineering at the University of California, Berkeley. In 2012, McKeown received the ACM Sigcomm "Lifetime Achievement" Award "for contributions to the design, analysis, and engineering of high-performance routers, resulting in a major impact on the global Internet". McKeown was elected to the US National Academy of Engineering in 2011. He is a Fellow of the Royal Academy of Engineering (UK), a Fellow of the IEEE and the Association for Computing Machinery (ACM). In 2005, he was awarded the Lovelace Medal from the British Computer Society, where he gave a lecture on "Internet Routers (Past Present and Future)". The citation described him as "the world's leading expert on router design." In 2009, he received the IEEE Koji Kobayashi Computers and Communications Award. In 2015 he shared the NEC C&C Award with Martin Casado and Scott Shenker for their work on SDN. In 2021, McKeown was awarded the IEEE Alexander Graham Bell Medal for exceptional contributions to communications and networking sciences and engineering. At Stanford he has been the STMicroelectronics Faculty Scholar, the Robert Noyce Faculty Fellow, a Fellow of the Powell Foundation and the Alfred P. Sloan Foundation, and recipient of a CAREER award from the National Science Foundation. Vint Cerf and McKeown created two entertaining videos to introduce Cerf at conferences. McKeown performed at the TED 2006 Conference in Monterey, where he took the stage to juggle while reciting Pi. McKeown was an international swimmer and competed for Great Britain in the 1985 World Student Games in Kobe, where he swam the 100 m breaststroke. Opposition to the death penalty McKeown is involved in the movement to abolish the death penalty, including leadership roles in the failed 2012 and 2016 California ballot initiatives to end capital punishment; a moratorium was ultimately put in place by Governor Gavin Newsom on March 13, 2019. In 2001, he co-funded the Death Penalty Clinic at the UC Berkeley School of Law in Berkeley, California. In 2009, he received the Abolition Award from Death Penalty Focus. He gave a TEDx talk about abolition in 2016. References External links List of pioneers in computer science 1963 births English emigrants to the United States UC Berkeley College of Engineering alumni Stanford University School of Engineering faculty Stanford University Department of Electrical Engineering faculty Fellows of the Association for Computing Machinery Fellow Members of the IEEE Fellows of the Royal Academy of Engineering Living people Members of the United States National Academy of Engineering
6333492
https://en.wikipedia.org/wiki/Lexmark%20International%2C%20Inc.%20v.%20Static%20Control%20Components%2C%20Inc.
Lexmark International, Inc. v. Static Control Components, Inc.
Lexmark International, Inc. v. Static Control Components, Inc., is an American legal case involving the computer printer company Lexmark, which had designed an authentication system using a microcontroller so that only authorized toner cartridges could be used. The resulting litigation (described by Justice Scalia in 2014 as "sprawling", and by others as having the potential to go on as long as Jarndyce v. Jarndyce) has resulted in significant decisions affecting United States intellectual property and trademark law. In separate rulings in 2004 and 2012, the United States Court of Appeals for the Sixth Circuit ruled that: circumvention of Lexmark's toner cartridge authentication does not violate the Digital Millennium Copyright Act (DMCA), and Static Control Components had standing under the Lanham Act to sue Lexmark for false advertising in relation to its promotion of the program, which was unanimously affirmed in 2014 by the Supreme Court of the United States. The Supreme Court's 2014 ruling also affects statutory interpretation in the area of standing in pursuing lawsuits on statutory grounds in a wide variety of areas in federal court. Background Lexmark is a large manufacturer of laser and inkjet printers, and Static Control Components (SCC) is a company that makes "a wide range of technology products, including microchips that it sells to third-party companies for use in remanufactured toner cartridges." In an effort to control and reduce the refilling and redistribution of toner cartridges, Lexmark began distributing two distinct varieties of its toner cartridges. Under its Prebate Program (now known as the Lexmark Return Program), through a shrinkwrap license, Lexmark sold certain printer cartridges at a discount (as much as $50 less) to customers who agreed to "use the cartridge only once and return it only to Lexmark for remanufacturing or recycling". Lexmark's "Non-Prebate" cartridges could be refilled by the user without restrictions and were sold without any discount. Lexmark touted the Prebate Program as a benefit to the environment and to their customers, since it would allow customers to get cheaper cartridges, and the benefit to Lexmark was that it could keep empty cartridges out of the hands of competing rechargers. Many users purchased such cartridges under the stated conditions. To enforce this agreement, Lexmark cartridges included a computer chip with a 55-byte computer program (the "Toner Loading Program") which communicated with a "Printer Engine Program" built into the printer. The program calculated the amount of toner used during printing: when the calculations indicated that the original supply of Lexmark toner should be exhausted, the printer would stop functioning, even if the cartridge had been refilled. In addition, if the chip did not perform an encrypted authentication sequence, or if the Toner Loading Program on the chip did not have a checksum matching exactly a value stored elsewhere on the chip, the printer would not use the cartridge. In 2002, SCC developed its own computer chip that would duplicate the 'handshake' used by the Lexmark chip, and that also included a verbatim copy of the Toner Loading Program, which SCC claimed was necessary to allow the printer to function. A Prebate cartridge could successfully be refilled if Lexmark's chip on the cartridge was replaced with the SCC chip. SCC began selling its "Smartek" chips to toner cartridge rechargers.
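The two checks described above (an encrypted handshake plus a checksum over the on-chip program) can be pictured with a toy sketch. The Python fragment below is purely illustrative: the shared key, the HMAC-SHA256 handshake and the 16-bit additive checksum are assumptions chosen for exposition, not Lexmark's actual scheme, which was never published in this form. It only shows the general shape of a printer that rejects a cartridge failing either check:

```python
import hashlib
import hmac

# All names and algorithm choices here are invented for illustration;
# Lexmark's real mechanism was not publicly specified in this form.
SHARED_KEY = b"illustrative-shared-secret"

def chip_response(challenge: bytes, toner_loading_program: bytes):
    """Cartridge side: answer the printer's challenge and report a
    checksum of the on-chip Toner Loading Program."""
    tag = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    checksum = sum(toner_loading_program) & 0xFFFF  # toy 16-bit checksum
    return tag, checksum

def printer_accepts(challenge: bytes, tag: bytes,
                    checksum: int, stored_checksum: int) -> bool:
    """Printer side: use the cartridge only if the authentication tag
    and the program checksum both match."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and checksum == stored_checksum
```

Framed this way, SCC's Smartek chip succeeded by producing responses that satisfied both checks, which is why it carried a verbatim copy of the Toner Loading Program, and why the litigation turned on whether doing so infringed copyright or circumvented an access control.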
Copyright and DMCA claims: 2004 Circuit Court ruling At the district court On December 30, 2002, Lexmark sued SCC in the United States District Court for the Eastern District of Kentucky. The suit claimed that SCC had: violated copyright law by copying the Toner Loading Program, and violated the DMCA by selling products that circumvented the encrypted authentication sequence between the Lexmark cartridge chip and the printer. On March 3, 2003, Judge Karl S. Forester granted a preliminary injunction to Lexmark, blocking SCC from distributing its cartridge chips. The ruling was seen as controversial. On the copyright claim, the court found that: the use of the Toner Loading Program was indeed a likely copyright violation, because the Toner Loading Program was not a "lock-out code" that SCC was entitled to copy under the DMCA, and because the Toner Loading Program could be rewritten in different ways (and therefore had enough creativity to qualify for copyright protection). the Copyright Office's decision to grant copyright registration to the two programs showed that the programs were probably copyrightable. because of the complexity of the authentication system, SCC could not have known that it could bypass the authentication without using Lexmark's copyrighted program; but it held that this did not matter because "Innocent infringement, however, is still infringement." fair use did not apply. On the DMCA claims, the court found that the SCC microchip circumvented Lexmark's authentication sequence, and that the reverse engineering exception to the DMCA did not apply, because it only covers the independent creation of new programs that must interoperate with existing ones, and SCC did not create any new program. The appellate ruling SCC appealed the district court's ruling to the Sixth Circuit Court of Appeals. As is usual for federal appeals in the United States, a three-judge panel heard the appeal – for this matter, the panel consisted of appellate judges Gilbert S. Merritt and Jeffrey S. Sutton, and John Feikens (a district court judge temporarily assisting the appeals court). On October 26, 2004, the judges issued their ruling, in which all three judges wrote separate opinions. Majority opinion In the majority opinion, Judge Sutton (with Judge Merritt agreeing) reversed the lower court's ruling and vacated the temporary injunction, holding that Lexmark was unlikely to succeed in its case. The case was remanded to the district court for further proceedings consistent with the opinion. On the copyright claim, the court noted that unlike patents, copyright protection cannot be applied to ideas, but only to particular, creative expressions of ideas. Distinguishing between an unprotectable idea and a protectable creative expression is difficult in the context of computer programs; even though it may be possible to express the same idea in many different programs, "practical realities"—hardware and software constraints, design standards, industry practices, etc.—may make different expressions impractical. "Lock-out" codes—codes that must be performed in a certain way in order to bypass a security system—are generally considered functional rather than creative, and thus unprotectable. With these principles in mind, it was held that the district court had erred in three ways: It had held that the Toner Loading Program was copyrightable simply because it "could be written in a number of different ways", without considering the practical realities.
Because of this mistaken standard, it had refused to consider whether or not the alternative Toner Loading Programs proposed by Lexmark were practical. It had concluded that the Toner Loading Program was not a "lock-out code", because it had not sufficiently considered how difficult it would be for SCC—without Lexmark's knowledge of the code's structure and purpose—to alter the code and still pass the printer's authentication mechanisms. On the DMCA claims, the majority first considered Lexmark's claim that the SCC chip circumvented the access controls on the Printer Engine Program. It held that Lexmark's authentication sequence did not "control access" to the program; rather, the purchase of the printer itself allowed access to the program. Likewise, the majority opinion held that anyone purchasing a printer and toner cartridge could read the Toner Loading Program from the printer; so SCC did not circumvent an access control on the Toner Loading Program either. The court also rejected the district court's conclusion that the interoperability defense did not apply. Since SCC had offered testimony that its chips did indeed contain independently created programs in addition to Lexmark's Toner Loading Program, the Toner Loading Program could be seen as necessary to allow interoperation between SCC's own programs and the Lexmark printer. Concurring opinion In a concurring opinion, Judge Merritt agreed with Judge Sutton on the outcome of this particular case, but also indicated that he would go farther: he opined that even if the programs involved were more complex (and thus more deserving of copyright protection), the key question would be the purpose of the circumvention technology. Under his proposed framework, if a third-party manufacturer's use of a circumvention technology was intended only to allow its products to interoperate with another manufacturer's—and not to gain any independent benefit from the functionality of the code being copied—then that circumvention would be permissible. Concurring/dissenting opinion Judge Feikens also wrote an opinion, agreeing with many of the majority opinion's results (though sometimes for different reasons), but disagreeing with its conclusion on the Toner Loading Program. Concerning the copyrightability of the Toner Loading Program, he found that the record supported Lexmark's claim that the program could have been implemented in any number of ways, and therefore Lexmark's implementation was creative and copyrightable. Agreeing that the record was inadequate for the district court to conclude that the Toner Loading Program was a "lock-out code", he noted that Lexmark's expert had testified that the entire Toner Loading Program process could be turned off by flipping a single bit in the chip's code, and that it should have been possible for SCC to discover this; so copying the program may not have been practically necessary either. On the DMCA counts, Feikens agreed that Lexmark had not established a violation with regards to the Toner Loading Program, but for a very different reason than that found by the majority opinion. He noted that SCC had testified that it had not even been aware that the Toner Loading Program existed; it had copied the data on the Lexmark printer chip (including the Toner Loading Program) purely in an attempt to bypass the protection on the Printer Engine Program.
Since the DMCA requires that an infringer knowingly circumvent access controls on the protected program, SCC could not have knowingly circumvented protections on a program it did not know existed. With regards to the Printer Engine Program, he agreed with the majority opinion, but also noted his belief that the consumer had acquired the rights to access this program by purchasing the printer, and therefore the DMCA would not apply to attempts to access it. Request for a rehearing Lexmark filed a request for the full Sixth Circuit to hear the case en banc. The Sixth Circuit rejected this request in February 2005. Rule 13 of the United States Supreme Court Rules of Procedure requires the losing party in a case before a court of appeals to file a petition for a writ of certiorari within 90 days from the date the court of appeals enters its judgment, or from the date of the denial of a petition for rehearing in the court of appeals. The Sixth Circuit's judgment became final for all purposes when the 90-day period expired without Lexmark filing a cert petition. Impact The Sixth Circuit's decision is noteworthy for at least two reasons: All three judges took pains to emphasize in their opinions that the DMCA must be interpreted consistently with the broader public purposes of the copyright statute, rather than as a grant of new powers to makers of technology products to impose additional restrictions not contemplated by copyright. It represents a rare defeat for large printer manufacturers like Lexmark, Hewlett-Packard and Epson in their ongoing battle with third-party ink sellers. Lexmark 2004 is also consistent with subsequent jurisprudence in the United States Court of Appeals for the Federal Circuit in The Chamberlain Group, Inc. v. Skylink Technologies, Inc., and therefore emphasizes that the DMCA was intended to create a new type of liability, not a property right over durable goods incorporating copyrighted material. Trademark and false advertising: 2012 Circuit Court ruling District Court Before the Sixth Circuit's ruling, Static Control initiated a separate action in 2004 seeking declaratory judgment under federal copyright laws and the DMCA that its newly modified chips did not infringe Lexmark's copyrights, and Lexmark counterclaimed raising patent infringement, DMCA violations, and tort claims, and added three remanufacturers as third-party defendants. On remand, Lexmark successfully moved to dismiss all of Static Control's counterclaims. During the course of the proceedings, the court ruled that: nine of Lexmark's mechanical patents were valid, but two of its design patents were invalid, summary judgment would be granted to Lexmark on its claims of direct patent infringement against three co-defendants, and Lexmark's single-use license for Prebate cartridges was valid, which prevented Lexmark's patent rights from being exhausted by the initial sale of the Prebate toner cartridges to end users. However, the judge subsequently modified this ruling after the United States Supreme Court's decision in Quanta Computer, Inc. v. LG Electronics, Inc. Therefore, the trial's issues consisted only of Lexmark's claim of induced patent infringement against Static Control and Static Control's defense of patent misuse. The district judge Gregory Frederick Van Tatenhove instructed the jury that its findings on patent misuse would be advisory; the jury held that Static Control did not induce patent infringement and advised that Lexmark misused its patents.
Lexmark renewed its earlier request for a judgment as a matter of law and also filed a motion for a retrial on its patent inducement claim, both of which the district court denied. Both parties timely appealed. The appellate ruling In a unanimous ruling, the district court's findings were affirmed, except for its dismissal of Static Control's counterclaims under the Lanham Act and North Carolina state law. These were reversed and remanded for further consideration. In particular, it was held: the 6th Circuit had jurisdiction to hear the appeal (as opposed to the appeal being referred to the United States Court of Appeals for the Federal Circuit), the district court did not abuse its discretion in increasing the amount of the injunction bond entered during the preliminary injunction hearing, SCC's federal antitrust counterclaims under §§ 4 and 16 of the Clayton Act for violations of §§ 1 and 2 of the Sherman Act failed for lack of standing, under the standard set in Associated Gen. Contractors of Cal., Inc. v. Cal. State Council of Carpenters ("AGC"), SCC's counterclaim for false advertising under the Lanham Act is valid, as the 6th Circuit applies a "reasonable interest" standard to determine standing (in common with the 2nd Circuit, as opposed to the categorical test used in the Seventh, Ninth and Tenth Circuits, or the AGC approach used in the Third, Fifth, Eighth and Eleventh Circuits), SCC's counterclaim for unfair competition and false advertising under North Carolina's Unfair and Deceptive Trade Practices Act is valid under state law, as determined by the North Carolina Court of Appeals Impact The ruling also let stand the district court's ruling on the impact of Quanta Computer, Inc. v. LG Electronics, Inc. on the exhaustion doctrine in the area of patent law. By finding that the sale of patented goods, even when subject to valid license restrictions, exhausts patent rights, it essentially gives Quanta a broad interpretation, which threatens to render unenforceable through patent law differential licensing schemes that attempt to distinguish separate fields of use for a patented item. However, the United States Court of Appeals for the Federal Circuit's ruling in Lexmark Int'l, Inc. v. Impression Prods., Inc. reopened the issue. On further appeal, the Supreme Court held that after the sale of a patented item, the patent holder cannot sue for patent infringement relating to further use of that item, even when in violation of a contract with a customer or imported from outside the United States. Scope of federal statutory torts: 2014 ruling at the Supreme Court Appeal of the 2012 ruling The Circuit Court's ruling with respect to standing under the Lanham Act was appealed by Lexmark to the Supreme Court of the United States, and certiorari was granted on June 3, 2013. The case was heard on December 3, 2013, and the question presented to the Court asked which analytical framework governs a party's standing to maintain an action for false advertising under the Lanham Act. Lexmark argued in favor of the AGC test, while SCC argued that the appropriate test should actually be that of the "zone of interests" protected by the statute, which has been applied in cases involving the Administrative Procedure Act, the Endangered Species Act, and Title VII of the Civil Rights Act. At the hearing, it appeared that Lexmark's submission received more intensive examination than SCC's. Decision On March 25, 2014, the US Supreme Court unanimously affirmed the Sixth Circuit's holding that Static Control did have standing to sue under the Lanham Act.
The Court developed a new test for assessing standing in false advertising cases, rejecting the existing tests, including the Sixth Circuit's "reasonable interest" test. In that regard, the approach adopted by Justice Scalia consists of several steps:
Under Article III, the plaintiff must have suffered or be imminently threatened with a concrete and particularized "injury in fact" that is fairly traceable to the challenged action of the defendant and likely to be redressed by a favorable judicial decision.
AGC requires the ascertainment, as a matter of statutory interpretation, of the "scope of the private remedy created by" Congress, and the "class of persons who [could] maintain a private damages action under" a legislatively conferred cause of action.
A statutory cause of action extends only to plaintiffs whose interests "fall within the zone of interests protected by the law invoked," and the "zone of interests" formulation applies to all statutorily created causes of action, as it is a "requirement of general application" and Congress is presumed to "legislat[e] against the background of" it, "which applies unless it is expressly negated."
A statutory cause of action is also presumed to be limited to plaintiffs whose injuries are proximately caused by violations of the statute. A plaintiff suing under §1125(a) ordinarily must show that its economic or reputational injury flows directly from the deception wrought by the defendant's advertising; and that occurs when deception of consumers causes them to withhold trade from the plaintiff.
Direct application of the zone-of-interests test and the proximate-cause requirement supplies the relevant limits on who may sue under §1125(a).
In discussing the scope of proximate cause, Scalia noted:

The previous tests adopted by the various Circuit Courts were dismissed as problematic on several grounds:

Impact
The Court's ruling was described as "a tour de force treatment of statutory standing," and as "certain to earn reprinting in casebooks and citations in briefs for decades to come." It was seen to have greater scope than what was directly related to the case at hand:
it was noted as being unusual for the current Court to open the door to more lawsuits
the Court rejected a notable body of existing doctrine relating to standing
the Court also took the unusual step of rejecting all Circuit Court interpretations on the question, substituting its own take on the matter
the new standard does not define standing requirements as narrowly as some circuits did, but by adding the proximate causation test, it may ultimately make it more difficult for plaintiffs to show standing
the Court's focus on statutory purposes and their implications for what a statute authorizes, rather than so-called "prudential" considerations, may limit standing and shift the debate over who can sue under a wide variety of federal laws

See also
Impression Prods., Inc. v. Lexmark Int'l, Inc.: similar ink cartridge case
Chamberlain v. Skylink, another copyright case posing similar DMCA questions
Sega v. Accolade, a copyright case involving interoperability issues with unlicensed Sega Genesis games.
References External links United States Supreme Court cases United States copyright case law United States Court of Appeals for the Sixth Circuit cases 2004 in United States case law 2012 in United States case law 2014 in United States case law Digital Millennium Copyright Act case law False advertising law United States trademark case law United States competition law Lexmark Computer printing United States Supreme Court cases of the Roberts Court
44653917
https://en.wikipedia.org/wiki/National%20Institute%20of%20Electronics%20%26%20Information%20Technology
National Institute of Electronics & Information Technology
National Institute of Electronics & Information Technology (NIELIT), formerly known as the DOEACC Society, is a society that offers information technology and electronics training at different levels. It is associated with the Ministry of Electronics and Information Technology of the Government of India and operates as an autonomous statutory organisation under the Indian government, with a status comparable to that of bodies such as the CBSE and the UGC. Many computer institutions across India are affiliated with NIELIT and conduct classes in the computer/IT education sector.

References

External links

Ministry of Communications and Information Technology (India) Scientific societies based in India Electronics industry in India 1994 establishments in Delhi Educational institutions established in 1994 Education in Patna
2564355
https://en.wikipedia.org/wiki/Personal%20Computer%20World
Personal Computer World
Personal Computer World (PCW) (February 1978 – June 2009) was the first British computer magazine. Although for at least its last decade it contained a high proportion of Windows PC content (reflecting the state of the IT field), the magazine's title was not intended as a specific reference to this. At its inception in 1978, 'personal computer' was still a generic term (the Apple II, PET 2001 and TRS-80 had been launched as personal computers in 1977). The magazine came out before the Wintel (or IBM PC compatible) platform existed; the original IBM PC itself was introduced in 1981. Similarly, the magazine was unrelated to the Amstrad PCW.

History
PCW was founded by the Croatian-born Angelo Zgorelec in 1978, and was the first microcomputer magazine in Britain. PCW's first cover model, in February 1978, was the Nascom-1, which also partly inspired Zgorelec to launch the magazine. Its August 1978 issue featured the colour capabilities of the Apple II. PCW went monthly from the second edition. Zgorelec went into partnership with Felix Dennis, who published his first issue in September 1979, before selling the title to VNU in 1982. The magazine was later owned by Incisive Media, which announced its closure on 8 June 2009.

As the magazine was launched four years before the first IBM PC (reviewed in the magazine in November 1981), it originally covered early self-build microcomputers. It later expanded its coverage to all kinds of microcomputers, from home computers to workstations, as the industry evolved. Regular features in the earlier years of the magazine were Guy Kewney's Newsprint section, Benchtests (in-depth computer reviews), Subset (covering machine code programming), type-in program listings, Bibliofile (book reviews), the Computer Answers help column, Checkouts (brief hardware reviews), TJ's Workshop (for terminal junkies), Screenplay for game reviews, and Banks' Statement, the regular column from Martin Banks. In 1983 Jerry Sanders joined the staff as Features Editor and wrote the first published review of Microsoft Word 1.0 for the magazine.

The cover style, with a single photo or illustration dominating the page, was adopted soon after the magazine's launch and continued until the early 1990s. The cover photos were often humorous, such as showing each new computer made by Sinclair being used by chimpanzees, a tradition that started with the ZX81. PCW eagerly promoted new computers as they appeared, including the BBC Micro. The magazine also sponsored the Personal Computer World Show, an annual consumer and trade fair held in London every September from 1978 to 1989.

The magazine underwent a major reader marketing push in 1992, resulting in its circulation figure rising from a middle-ranking 80,000 to more than 155,000, at a time when personal computing was becoming hugely popular thanks to Windows 3.1 and IBM PC clones flooding the market. PCW battled with rivals Computer Shopper, PC Direct, PC Magazine and PC Pro for several thousand pages of advertising each month, resulting in magazines that could run to over 700 pages. The magazine typically came with a cover-mounted CD-ROM or DVD-ROM, which held additional content. Although the magazines themselves were identical, the DVD version cost more than the CD-ROM version. During a brief period in 2001, the magazine was (effectively) sold as 'PCW' as part of a major overhaul of the magazine design and content, but this abbreviation was dropped from the cover after just a few issues.
The content also reverted from having been somewhat more consumer-electronics focused to return to its roots. The magazine changed (both in terms of style and content) on many occasions after its launch. The last major change took place with the November 2005 issue, when the magazine was relaunched with an updated look (including glossier paper and a redesigned layout), new features, fewer advertising pages, and a slightly higher price tag. Editors of the 1990s include Guy Swarbrick, Ben Tisdall, Simon Rockman, Gordon Laing and Riyad Emeran. At the time of its closure, the editor was Kelvyn Taylor.

Closure
The magazine was closed in June 2009, with owners Incisive Media citing poor sales and a difficult economic climate for newsstand titles. At the time of closing, it was the second most popular monthly technology title in the UK, with an audited circulation figure of 54,069. Its last issue, dated August 2009, was published on 8 June 2009. This final issue made no mention of its being the last one, and advertised a never-to-be-published September issue. Subscribers were offered the option of a refund, or of transferring their subscriptions to PCW's sister magazine, Computeractive.

At its close, PCW featured a mixture of articles, mainly related to the Windows PC, with some Linux- and Macintosh-related content. The news pages included reports on various new technologies. Other parts of the magazine contained reviews of computers and software. There was also a 'Hands On' section which was more tutorial-based. Advertising still made up a proportion of its bulk, although it had diminished somewhat since its peak in the 1990s.

References

External links
Computer magazine history featuring PCW
Information on PCW-founder Angelo Zgorelec
1978-June 1989 Personal Computer World magazine Library at the Centre for Computing History
UK Press Gazette reports closure of title
Computer Answers help columns from 1985 to 1987
Archived Personal Computer World magazines on the Internet Archive

Home computer magazines Defunct computer magazines published in the United Kingdom Video game magazines published in the United Kingdom VNU Business Media publications Magazines established in 1978 Magazines disestablished in 2009 1978 establishments in the United Kingdom 2009 disestablishments in the United Kingdom Monthly magazines published in the United Kingdom Magazines published in London
4541188
https://en.wikipedia.org/wiki/MoodLogic
MoodLogic
MoodLogic was a software company founded in 1998 by Tom Sulzer, Christian Pirkner, Elion Chin and Andreas Weigend, and was one of the first online music recommendation systems. The company obtained ratings on over 1 million songs from over 50,000 distinct listeners as part of its proprietary method for modeling user preference space.

Software
In addition to their web presence, the company created a software application that used a central database to allow users to collaboratively profile music by mood. Each user had a certain number of "credits" they could use to identify song profiles. Credits could be obtained either by paying for them or by profiling songs. The software allowed the user to generate playlists based on the mood of the user. The program could also mix a playlist based on a selected song; this would return a playlist with songs of similar tempo, mood, genre, etc.

The software was also capable of organizing a music collection based on a "fingerprint" of each song. MoodLogic would generate the fingerprint of the song, upload it to the server and wait for a response. This process could take anywhere from a few seconds to a few minutes, depending on computer power, internet connection speed and server load. Once the corrected tag information had been downloaded, the ID3 tag was updated and written to the file. This meant a user could have a collection of incorrectly tagged MP3s and the software would be able to correctly identify, tag, and even organize the songs into folders based on artist.

Patents
Chief Scientist Rehan Khan and his team filed a number of patents, two of which have been granted: US Patent 6539395, "Method for creating a database for comparing music", and US Patent 7277766, "Method and system for analyzing digital audio files." The latter patented a system for audio fingerprinting that was fast, compact and robust.

2006 buy-out
Despite a high-profile launch and an apparently active community, the website was short-lived. The last release of the software was on November 13, 2003, with version 2.7.1. The last official traffic on the forums was in late 2004. Repeated forum posts by users after that time resulted in no response, and inquiries from subscribers ceased to be answered, although the software database seemed to continue operating. MoodLogic was bought by All Media Guide, the company that runs allmusic.com, in May 2006. It was not made clear whether All Media intended to resume development and reactivate the community. Prominent employees of and consultants to the company included musicologist Dr. Robert Gjerdingen, psychologist Daniel Levitin and record producer/e-music.com co-founder Sandy Pearlman.

2008 move to Macrovision
The MoodLogic site resolves to Macrovision's site with this message: "Effective March 3, 2008, Macrovision announces the end of life (EOL) of the Moodlogic music management and recommendation software. Service will be discontinued due to intensive operational and infrastructure resources are required to sustain the application. Macrovision's efforts in music recommendation will continue through the AMG Data Services Tapestry business-to-business product."

2009 renaming to Rovi Corporation
On July 15, 2009, Macrovision Solutions Corporation was renamed Rovi Corporation. According to the company's website, "Rovi Corporation is focused on revolutionizing the digital entertainment landscape by delivering solutions that enable consumers to intuitively discover new entertainment from many sources and locations.
The company also provides extensive entertainment discovery solutions for television, movies, music and photos to its customers in the consumer electronics, cable and satellite, entertainment and online distribution markets. These solutions, complemented by industry leading entertainment data, create the connections between people and technology, and enable them to discover and manage entertainment in its most enjoyable form. Rovi holds over 4,000 issued or pending patents and patent applications worldwide [including those by MoodLogic], and is headquartered in Santa Clara, California, with numerous offices across the United States and around the world including Japan, Hong Kong, Luxembourg, and the United Kingdom." References External links Story about the development of MoodLogic's Magnet Browser interface "Call the tune" from The Sydney Morning Herald "Start-up finds Muze with mood music" from CNET Recommender systems Online music and lyrics databases American music websites
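The identify-and-retag workflow described above can be sketched in a few lines. The sketch below is not MoodLogic's actual client or protocol: the lookup URL and response fields are hypothetical, and a plain SHA-256 file hash stands in for MoodLogic's proprietary robust audio fingerprint, which survived re-encoding in ways a file hash cannot.

```python
# Hypothetical sketch of a fingerprint-lookup-and-retag workflow like the one
# described above. NOT MoodLogic's real client: the endpoint, the response
# fields, and the SHA-256 "fingerprint" are illustrative stand-ins only.
import hashlib
import requests                      # third-party: pip install requests
from mutagen.easyid3 import EasyID3  # third-party: pip install mutagen

LOOKUP_URL = "https://example.invalid/lookup"  # hypothetical service endpoint

def fingerprint(path: str) -> str:
    """Stand-in fingerprint: hash the raw file bytes.
    A real system hashes decoded, normalized audio so that the ID survives
    re-encoding; a plain file hash does not."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def retag(path: str) -> None:
    """Look up the fingerprint on the server and write corrected ID3 tags."""
    resp = requests.get(LOOKUP_URL, params={"fp": fingerprint(path)}, timeout=30)
    resp.raise_for_status()
    meta = resp.json()               # hypothetical: {"artist": ..., "title": ...}
    tags = EasyID3(path)             # assumes the file already has an ID3 header
    tags["artist"] = meta["artist"]
    tags["title"] = meta["title"]
    tags.save()
```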
7228413
https://en.wikipedia.org/wiki/Watermarking%20attack
Watermarking attack
In cryptography, a watermarking attack is an attack on disk encryption methods where the presence of a specially crafted piece of data can be detected by an attacker without knowing the encryption key.

Problem description
Disk encryption suites generally operate on data in 512-byte sectors which are individually encrypted and decrypted. These 512-byte sectors alone can use any block cipher mode of operation (typically CBC), but since arbitrary sectors in the middle of the disk need to be accessible individually, they cannot depend on the contents of their preceding/succeeding sectors. Thus, with CBC, each sector has to have its own initialization vector (IV). If these IVs are predictable by an attacker (and the filesystem reliably starts file content at the same offset to the start of each sector, and files are likely to be largely contiguous), then there is a chosen plaintext attack which can reveal the existence of encrypted data. The problem is analogous to that of using block ciphers in the electronic codebook (ECB) mode, but instead of whole blocks, only the first blocks of different sectors are identical. The problem can be relatively easily eliminated by making the IVs unpredictable with, for example, ESSIV. Alternatively, one can use modes of operation specifically designed for disk encryption (see disk encryption theory). This weakness affected many disk encryption programs, including older versions of BestCrypt as well as the now-deprecated cryptoloop.

To carry out the attack, a specially crafted plaintext file is created for encryption in the system under attack, to "NOP-out" the IV such that the first ciphertext block in two or more sectors is identical. This requires that the input to the cipher (the plaintext, P, XOR the initialization vector, IV) be the same for each block; i.e., P1 ⊕ IV1 = P2 ⊕ IV2. Thus, we must choose the plaintexts such that P2 = P1 ⊕ IV1 ⊕ IV2. The ciphertext block patterns generated in this way give away the existence of the file, without any need for the disk to be decrypted first.

See also
Disk encryption theory
Initialization vector
Block cipher modes of operation
Watermark

References

Cryptographic attacks Disk encryption
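To make the chosen-plaintext construction described above concrete, here is a minimal sketch (Python with the third-party pycryptodome package) of a toy disk encryptor whose per-sector IV is simply the sector number, a deliberately predictable scheme of the kind the attack exploits. The key, the sector size and the IV derivation are illustrative assumptions, not any particular product's design.

```python
# Sketch of the watermarking attack on CBC with predictable per-sector IVs.
# Uses pycryptodome (pip install pycryptodome). Toy model: the IV of sector n
# is n encoded as a 16-byte big-endian integer.
from Crypto.Cipher import AES

KEY = bytes(16)                          # fixed disk key; unknown to the attacker
BLOCK = 16

def sector_iv(n: int) -> bytes:
    return n.to_bytes(BLOCK, "big")      # the predictable IV scheme under attack

def encrypt_sector(n: int, plaintext: bytes) -> bytes:
    return AES.new(KEY, AES.MODE_CBC, iv=sector_iv(n)).encrypt(plaintext)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The attacker crafts the first block of sector 2 so that P2 = P1 xor IV1 xor
# IV2, "NOP-ing out" the IV difference. No key is needed for this step.
p1 = b"watermark block!"                 # exactly 16 bytes
p2 = xor(p1, xor(sector_iv(1), sector_iv(2)))
pad = bytes(512 - BLOCK)                 # rest of each 512-byte sector

c1 = encrypt_sector(1, p1 + pad)
c2 = encrypt_sector(2, p2 + pad)

# The first ciphertext blocks match, revealing the watermark without any
# decryption: E(P1 xor IV1) == E(P2 xor IV2).
assert c1[:BLOCK] == c2[:BLOCK]
print("first ciphertext blocks identical:", c1[:BLOCK].hex())
```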
65166559
https://en.wikipedia.org/wiki/CrystalExplorer
CrystalExplorer
CrystalExplorer (CE) is freeware designed for analysing crystal structures supplied in the CIF (*.cif) file format. CE is useful for investigating different areas of solid-state chemistry, such as Hirshfeld surface analysis, intermolecular interactions, polymorphism, the effects of pressure and temperature on crystal structures, single-crystal to single-crystal reactions, the analysis of voids present in crystals, and structure-property relationships. CE's graphical interface for 3D crystal structure visualization aids in drawing crystal structures with or without Hirshfeld surfaces.

History
CrystalExplorer was launched as a graphical user interface that facilitates the visualization of interactions in molecular crystal structures. In 2006, M. A. Spackman's student Dylan Jayatilaka and coworkers presented a paper about their new crystallographic software on the occasion of the 23rd European Crystallographic Meeting (ECM23), held in Leuven. The software was designed at the School of Biomedical and Chemical Sciences, University of Western Australia, Nedlands, Australia. From 2006 onward, researchers began citing the program in their research papers.

CrystalExplorer 2.1 was designed for the Mac OS X, Windows and Linux platforms for the analysis of crystal structures, and can be used to investigate many areas of solid-state chemistry, such as intermolecular interactions, polymorphism, the effects of pressure and temperature on crystal structures, single-crystal to single-crystal reactions, crystal voids, structure-property relationships and isostructural compounds, and to calculate intermolecular interaction energies. As of September 2020, more than 2,000 research papers cite the CrystalExplorer software, according to Google Scholar.

Licence
CrystalExplorer17 is licensed free of charge under certain conditions, such as that the free version of CrystalExplorer not be used to conduct commercial research, confidential research, or research that is not likely to be published in a peer-reviewed publication.

See also
Cambridge Crystallographic Data Centre
Crystallographic Information File
International Union of Crystallography

References

External links
Tutorial-1
Tutorial-2
Tutorial-3

Computational chemistry software Chemistry software Chemistry software for Linux
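CrystalExplorer itself is a graphical application, but the CIF files it consumes are plain text and easy to inspect programmatically. As an illustration of that input format, the following sketch uses the third-party gemmi library (one of several available CIF parsers) to print the unit-cell parameters from a file; the filename is a placeholder.

```python
# Sketch: inspecting the unit-cell data in a CIF file of the kind
# CrystalExplorer takes as input. Uses the third-party gemmi package
# (pip install gemmi); "structure.cif" is a placeholder filename.
from gemmi import cif

doc = cif.read_file("structure.cif")
block = doc.sole_block()            # assumes the file holds a single data block

for tag in ("_cell_length_a", "_cell_length_b", "_cell_length_c",
            "_cell_angle_alpha", "_cell_angle_beta", "_cell_angle_gamma"):
    print(tag, block.find_value(tag))   # find_value returns the raw CIF string
```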
6963544
https://en.wikipedia.org/wiki/Apple%20TV
Apple TV
Apple TV is a digital media player and microconsole developed and marketed by Apple. It is a small piece of network appliance hardware that plays received media data, such as video and audio, on a television set or external display. An HDMI-compliant source device, it has to be connected to an enhanced-definition or high-definition widescreen television through an HDMI cable to function. It lacks integrated controls and can only be controlled remotely, either through the Apple Remote or Siri Remote or through some third-party infrared remotes. Apple TV runs tvOS with multiple pre-installed software applications. Its media services include streaming subscriptions, TV Everywhere-based cable and broadcast services, and sports league coverage. At the March 2019 special event, Apple highlighted a reorientation away from the Apple TV hardware because of its limited success against the competition. To generate higher revenue, it released Apple TV+ and à la carte Apple TV Channels.

Background
In 1993, in an attempt to enter the home-entertainment industry, Apple released the Macintosh TV. It had a 14-inch CRT screen alongside a TV tuner card. It was not a commercial success, with only 10,000 units sold before its discontinuation in 1994. The company's next foray was the Apple Interactive Television Box in 1994, a collaborative venture between Apple, BT and Belgacom that was never released to the general public. Apple's final major attempt before the Apple TV was the Apple Bandai Pippin in the mid-1990s, which combined a home game console with a networked computer.

Starting as early as 2011, Gene Munster, a longtime investment banking analyst at Piper Jaffray, speculated that Apple would announce an HDTV television set to compete with Sony, LG, Samsung and other TV manufacturers. Apple, however, never released such a product, and in 2015 Munster recanted his prediction. Such a television set was also mentioned as a potential breakthrough product in the biography Steve Jobs.

Models
First generation
At the September 2006 Apple Special Event, Apple announced the first-generation Apple TV under the name "iTV", aligning it with the rest of the company's "i"-based products, but the device was later renamed "Apple TV" because the British broadcasting network ITV owns the trademark in the United Kingdom. Pre-orders were taken from January 2007, and the device, bundled with a 40 GB hard disk, was released in March 2007. An update with a 160 GB HDD was released in May 2007, and the company subsequently ceased sales of the 40 GB version in September 2009. In January 2008, a software update made it a stand-alone device, removing the requirement of iTunes syncing and making the hard disk redundant. That update also allowed media from services such as the iTunes Store, MobileMe and Flickr to be rented or purchased directly. In July 2008, Apple released the Apple TV Software 2.1 update, which added recognition of the iPhone and iPod Touch as alternative remote control devices to the Apple Remote.
In September 2015, Apple discontinued the first-generation Apple TV, and iTunes Store access from the device was cut off because its security standards had become obsolete.

Second generation
Apple unveiled the second-generation Apple TV in September 2010. It ran a variation of iOS, was housed in black, and was one-quarter the size of the original. The device replaced the internal hard drive with 8 GB of flash storage. It supported output up to 720p over HDMI.

Third generation
At the March 2012 Apple Special Event, Apple unveiled the third-generation Apple TV. It was identical to its predecessor in external appearance, and it included a dual-core A5 processor with one core disabled and 1080p output support. Apple quietly released an updated "Rev A" model in January 2013, which added support for peer-to-peer AirPlay and replaced the processor with a single-core variant of the A5 chip. The device also drew less power than the original third-generation model. In October 2016, the company phased out the updated third-generation Apple TV, and its retail employees were instructed to pull units and demo units from store shelves. In December 2017, Apple added support for the streaming service Amazon Video. The Apple TV application, bundled with Apple TV Software 7.3 (the second- and third-generation Apple TV version of iOS), was released in May 2019.

HD (previously fourth generation)
On September 9, 2015, Apple announced the fourth-generation Apple TV at an Apple Special Event. The fourth-generation model uses a new operating system, tvOS, with an app store, allowing downloads of third-party apps for video, audio, games and other content. Upon release, third-party apps were available from a limited range of providers, with new APIs providing opportunities for more apps. New apps and games were initially required to interface with the new touchpad-enabled Siri Remote, a requirement that was later relaxed for games. In March 2019, Apple rebranded the device as the Apple TV HD.

The fourth generation includes a 64-bit Apple A8 processor and adds support for Dolby Digital Plus audio. While similar in form factor to the 2nd- and 3rd-generation models, the 4th-generation model is taller. In contrast to the old remote's arrow buttons, the 4th-generation Apple TV's touch remote uses swipe-to-select features, Siri support, a built-in microphone, volume control over HDMI CEC and IR, and an accelerometer (IMU).

The fourth-generation Apple TV started shipping in October 2015. Upon launch, there were several unexpected issues, such as incompatibility with Apple's own Remote app for iOS and watchOS. These issues were fixed by Apple on December 8, 2015 in tvOS 9.1. On September 13, 2016, Apple released tvOS 10, bringing an all-new Remote app, single sign-on, dark mode, HomeKit support, and other features. Amazon initially declined to develop an Amazon Video application for Apple TV, announced in October 2015 that it would stop selling Apple TVs, and removed 3rd-generation SKUs. In late 2017, Amazon reversed its stance, released an Amazon Video app, and resumed sales of Apple TVs.

4K (first generation)
At an Apple Special Event on September 12, 2017, Apple announced the Apple TV 4K, which supports 2160p output, HDR10 and Dolby Vision, and includes a faster Apple A10X Fusion processor supporting HEVC hardware decoding. Dolby Atmos support was added in tvOS 12. Following the announcement of the new models, the 64 GB version of the Apple TV HD was discontinued.
Externally it is similar to the 4th-generation model, the only differences being the addition of vents on the base, the removal of the USB-C port, and the addition of a tactile white ring around the Menu button on the included Siri Remote.

4K (second generation)
On April 20, 2021, Apple announced an updated Apple TV 4K with the A12 Bionic processor and support for high-frame-rate HDR, HDMI 2.1, and Wi-Fi 6. Its HDMI port supports ARC and eARC, which allow other sources plugged into the television to output audio through the Apple TV, including to AirPlay speakers like the HomePod. It can also pair with the ambient light sensor on iPhones with Face ID to optimize its color output, a feature that was also extended to older Apple TVs with tvOS 14.5. AirPlay supports high-frame-rate HDR playback, allowing videos shot on the iPhone 12 Pro in Dolby Vision 4K at 60 fps to be mirrored in full resolution. Following the announcement, the previous Apple TV 4K with the A10X Fusion chip was discontinued.

The model also comes with a thicker, redesigned Siri Remote with a circular touchpad and navigational buttons, as well as power and mute buttons. The remote does not include an accelerometer or gyroscope, which were present in the previous Siri Remote, making it incompatible with some games. The remote is compatible with previous-generation tvOS-based Apple TVs and ships with an updated SKU of the Apple TV HD.

Features
Apple TV allows consumers to use an HDTV (with any Apple TV) or a UHDTV (with the Apple TV 4K or later) to stream video, music and podcasts, as well as to download apps and games from the tvOS App Store. The first, second and third generations offered limited content which Apple had provisioned to work with Apple TV. These have now been discontinued in favor of the fourth-generation Apple TV, whose iOS-based operating system, tvOS, lets developers create their own apps, with their own interfaces, that run on Apple TV. These include multimedia apps, music apps, and games. Features of Apple TV include:

Video Streaming
Users of Apple TV can rent or buy movies and TV shows from the iTunes Store, or stream video from a variety of services found in the tvOS App Store, such as Netflix, Prime Video, Twitch, Paramount+, Noggin, Peacock, Hulu, Hotstar, HBO Max, Discovery+, Disney+, Star+, Showmax, Tencent Video, Kocowa, AbemaTV, Crunchyroll, SonyLIV, ZEE5, Tubi, Eros Now, Okko, SF Anytime, Dailymotion, and YouTube. Users can stream live and on-demand content from apps that support login through a cable provider by way of one universal app called TV. The single sign-on feature in tvOS 10.1 and later allows users to log in to all of these apps at once, bypassing the need to authenticate each individually.

Music and Podcasts Streaming
Users can access the music and podcast libraries that they purchased in iTunes through iCloud, using the Music and Podcasts apps, respectively. In addition, users can subscribe to music streaming services such as Apple Music, Amazon Music, Spotify, Pandora Music, Tidal, Qello, KKBOX, and Anghami and access content that way.

Photos
The built-in Photos app syncs user photos from iCloud Photo Library and displays them on the TV. In addition, users can download third-party apps like Adobe Lightroom to view, edit and share them.

Apps and Games
With the fourth-generation Apple TV and later, users can download apps and games from the tvOS App Store. This app store is similar to the one found on the Apple iPhone and iPad.
Apps can now be ported from iOS easily by developers, since tvOS and iOS share a common codebase and kernel. Examples include the Papa John's Pizza and Grubhub apps, which allow users to order food directly from Apple TV, and Zillow, which allows users to search for homes on their TV. A NASA app for Apple TV includes live streaming of NASA TV content, including International Space Station missions. Games use the accelerometer and gyroscope, along with the touchpad found on the Siri Remote, for control. External Bluetooth game controllers can also be paired. Examples include Asphalt 8, which can be played using the Siri Remote.

Casting and Mirroring
With AirPlay, users can stream or mirror content wirelessly from an iOS device or Mac. AirPlay can be accessed by swiping up from the bottom of the screen (swipe down from the top right on newer models) in Control Center on iOS, or in the Menu Bar on a Mac. Its functions include:
Casting, which allows users to wirelessly send video or audio from their iPhone, iPad, or Mac to the Apple TV.
Mirroring, which allows users to wirelessly mirror the screen of their Mac or AirPlay-capable device to the TV, using it as a second monitor.
Peer-to-Peer AirPlay, which uses Bluetooth to connect if the Apple TV and the iOS device/Mac are not on the same Wi-Fi network.

Siri
Siri is built into the fourth-generation and later Apple TV. It enables voice dictation in text fields, including usernames and passwords. Universal search is available for a wide range of apps in the United States, but the feature is limited to iTunes and Netflix in Canada, France, Germany, and the United Kingdom. In Australia, universal search supports movies and TV shows in iTunes, Netflix, and Stan. Apple has been expanding the feature to encompass additional channels worldwide. A Live Tune-In feature allows the viewer to ask Siri to tune to live streams of Pluto TV, DAZN, WWE Network, FuboTV, FITE TV, and Xumo, among many others that support Live Tune-In.

HomeKit
The third-generation Apple TV and later can also be used as a home hub to control HomeKit devices, such as locks, thermostats, or garage doors, either locally or over the Internet. HomeKit automation (such as the automatic implementation of scenes, multiple-user support, using Siri to control devices, and remote access for shared users or HomeKit-enabled cameras) is only possible with a fourth-generation Apple TV or later.

General
HDMI CEC to control other devices in a user's home theater setup.
App Switcher, which enables users to switch apps.
Aerial screensaver, which allows the TV to display a flyover view of a city when Apple TV is inactive. Screensavers can also be invoked from the home screen by pressing Menu on the Siri Remote once.

App Store
With the fourth-generation Apple TV (Apple TV HD) and tvOS, Apple announced an App Store which allows any developer to make apps, using the available APIs, specifically tailored towards the TV. Also, since tvOS is based on iOS, developers can port apps over from iOS with a few modifications, as Apple stated on stage, and make them available to all tvOS users through the App Store. The App Store is not available on previous Apple TVs and is a feature of the fourth-generation Apple TV onward.

Accessibility
Since tvOS and watchOS are based on iOS, they have inherited many of the accessibility features of iOS and macOS and are compatible with Apple's entire product line, including the Apple Watch as a remote controller for the Apple TV.
tvOS includes the Apple technologies of VoiceOver, Zoom, and Siri to help the blind and those with low vision. Pairing a Bluetooth keyboard with tvOS on the Apple TV enables another accessibility feature that also incorporates VoiceOver: when the user types, VoiceOver speaks each character pressed on the keyboard, and repeats it again when it is entered. The Apple TV is designed to work with the Apple Wireless Keyboard or the Apple Magic Keyboard.

Apple TV, with and without tvOS, supports closed captioning, so the deaf or hard of hearing can properly watch TV episodes and feature-length movies. Compatible episodes and movies are denoted with a CC (closed captioning) or SDH (subtitles for the deaf and hard of hearing) icon in the iTunes Store, either on the Apple TV or in iTunes itself. The viewer can customize the captions in episodes or movies with styles and fonts that are more conducive to their hearing and/or visual impairment. Apple's Remote app on iOS devices allows control of the Apple TV from an iPhone, iPad or iPod Touch.

Restrictions
Similar to Google's redesign of YouTube, Apple has restricted access to the most-viewed charts for movies and podcasts. They are replaced by "Top Movies", "Top Podcasts", and "Editor's Picks". Parental controls allow consumers to limit access to Internet media service content via "Restrictions" settings; individual services can be turned off (e.g., to reduce clutter), and icons can be rearranged via the tap-and-hold technique à la iOS. Internet media is split into four categories: "Internet Photos", "YouTube", "Podcasts", and "Purchase and Rental". Each of the categories is configured by a parental control of "Show", "Hide" or "Ask" (the last prompting for a 4-digit PIN). In addition, movies, TV shows, music and podcasts can be restricted by rating.
Streaming video sources Apps available for Apple TV can stream video from a variety of sources, including Netflix, ESPN+, Disney+, Star+ (Latin America only), Paramount+, Hotstar, Hulu, Movies Anywhere, Niconico, AbemaTV, Kocowa, Eros Now, YuppTV, iQIYI, Catchplay, Viu, AMC+, Struum, Start, Okko, Viaplay, Jungo+, SF Anytime, Ibakatv, FuboTV, Curiosity Stream, Nebula, BET+, Voot, Noggin, Pluto TV, Philo, BritBox, Globoplay, Acorn TV, Videoland (Netherlands only), WWNLive, Viki, Rakuten TV, ALTBalaji, ABC iview (Australia only), 7plus (Australia only), 9Now (Australia only), Stan (Australia only), Foxtel Now (Australia only), Kayo Sports (Australia only), Binge (Australia only), Neon (New Zealand only), Vidio (Indonesia only), iWantTFC, meWATCH (Singapore only), RTÉ Player, TVB Anywhere, ZEE5, FunimationNow, Wakanim, VRV, Crunchyroll, Pure Flix, SonyLIV, Crackle, Popcornflix, FilmOn, Blim TV, GuideDoc, Irokotv, Minno, TVPlayer, Zeus Network, Flix Premiere, BBC iPlayer (UK only), ITV Hub (UK only), STV Player (UK only), All 4 (UK and Ireland only), My5 (UK only), NOW (UK, Ireland and Italy only), UKTV Play (UK and Ireland only), Hallmark Movies Now, Toon Goggles, Yippee TV, the Bally Sports app, Stirr, Honor Club, Side+, UFC Fight Pass, Shudder, Mubi, Crave (Canada only), RiverTV (Canada only), Highball TV (Canada only), OneSoccer (Canada only), Allblk, Plex, Sling TV, Sun NXT, Aha, Hoichoi, Amazon Prime Video, Twitch, WWE Network, DAZN, MyCanal (France only), Showmax, Tencent Video, Dropout, Tastemade, Discovery+, GolfTV, Spectrum TV Stream, DirecTV Stream, Vudu, NBC Sports Gold, Hayu, Xumo, Xfinity Stream, Craftsy, Night Flight Plus, TED, YouTube, YouTube TV, Dailymotion, Red Bull TV, FloSports, FITE TV, Impact Plus, Shahid, Frndly TV, Tubi, the Fox Soul app, and Fox Nation along with the Starz app, HBO Max, Peacock, Showtime Anytime, and the TV Everywhere portals of several cable and broadcast networks, and the video subscription portals of all of the four major North American sports leagues: the NFL TV app, MLB.tv, NBA League Pass, and NHL.tv. Local sources Apple TV allows users on a computer running iTunes to sync or stream photos, music and videos. A user can connect a computer on a local network to maintain a central home media library of digitized CD, DVD or HD content, provide direct connectivity to photo organization software such as iPhoto, limit home video access to a local network only, play Internet radio, or preload content on Apple TV to be used later as a non-networked video player. For users who wish to connect the Apple TV to a computer, synchronization and streaming modes are supported. Apple TV in synchronization mode works in a way similar to the iPod. It is paired with an iTunes library on a single computer and can synchronize with that library, copying all or selected content to its own storage. Apple TV need not remain connected to the network after syncing. Photos can be synced from iPhoto, Aperture, or from a folder on a Mac, or Adobe Photoshop Album, Photoshop Elements, or from a hard disk folder in Windows. Apple TV can also function as a peer-to-peer digital media player, streaming content from iTunes libraries and playing the content over the network. First-generation Apple TVs can stream content from up to five computers or iTunes libraries. Also, five Apple TVs can be linked to the same iTunes library. 
The second-generation Apple TV onwards allows users to stream content from more than one iTunes library; these additional iTunes libraries can be on the same or on different computers. This is possible when the Apple TV and every iTunes library from which you want to stream content meet all of the following conditions: (1) the Apple TV and the iTunes library you are streaming from are both on the same local network, (2) each uses the iTunes "Home Sharing" feature, and (3) each is using the same "Home Sharing" Apple ID. The Apple TV HD and newer can also stream content locally using third-party apps such as Plex, Kodi, VLC media player, Emby and MrMC.

Supported formats
Apple TV natively supports the following audio, video, and picture formats (although with the Apple TV HD and later, apps may use alternative built-in software in order to play other codecs and formats, e.g. Emby, MrMC, VLC media player, Kodi and Plex):

Video
HEVC H.265 Dolby Vision (Profile 5)/HDR10 (Main 10 profile): up to 2160p at 30 frames per second (5th generation) or 60 frames per second (6th generation)
HEVC H.265 SDR: up to 2160p at 60 frames per second (5th and 6th generation) or 1080p at 30 frames per second (4th generation); Main/Main 10 profile; hardware decoding on the 5th and 6th generation, software decoding on the 4th generation running tvOS 11 and later
AVC H.264: up to 720p at 30 frames per second (1st and 2nd generation), up to 1080p at 30 frames per second (3rd generation), up to 1080p at 60 frames per second (4th generation), or up to 2160p at 60 frames per second (5th generation); High or Main Profile level 4.0 or lower, High or Main Profile level 4.2 or lower (4th generation), or Baseline Profile level 3.0 or lower, with AAC-LC audio up to 160 kbit/s per channel, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
MPEG-4: up to 720×432 (432p) or 640×480 pixels at 30 fps; MPEG-4 video up to 2.5 Mbit/s, 640×480 pixels, 30 frames per second, Simple Profile, with AAC-LC audio up to 160 kbit/s, 48 kHz, stereo audio in .m4v, .mp4, and .mov file formats
Motion JPEG (M-JPEG): up to 720p at 30 fps, up to 35 Mbit/s, 1280×720 pixels, with audio in ulaw, PCM stereo audio in the .avi file format

Picture
JPEG, GIF, TIFF, and HEIF (4th generation and later)

Audio
HE-AAC (V1)
AAC (16–320 kbit/s)
FairPlay-protected AAC
MP3 (16–320 kbit/s, or optionally VBR)
Audible (formats 2, 3, and 4)
Apple Lossless
FLAC
AIFF
WAV
Dolby Digital (AC-3) surround sound pass-through, up to 5.1 channels
Dolby Digital Plus (E-AC-3) surround sound pass-through, up to 7.1 channels (4th generation); Dolby Atmos 7.1.4 channels (5th and 6th generation)

TV compatibility
Compatible with high-definition TVs with HDMI, capable of 1080p or 720p at 60/50 Hz. Requires HDCP when playing copy-protected content. A sustained 8 Mbit/s or faster Internet connection is recommended for viewing 1080p HD movies and TV shows, 6 Mbit/s or faster for viewing 720p content, and 2.5 Mbit/s or faster for SD content.

Others
Attempts to sync unsupported content to Apple TV will draw an error message from iTunes. The first- and second-generation Apple TV video output can be set to either 1080i or 1080p; however, this resolution is limited to the user interface and the viewing of photographs – all other content is simply upscaled to those resolutions. Those models cannot play 1080i or 1080p video content (e.g., HD camera video). The third- and fourth-generation Apple TV support 1080p video content.
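As a rough illustration of the H.264 limits listed above, the following sketch uses the ffprobe tool from FFmpeg to check whether a local file's video stream falls within the 4th-generation ceiling of H.264 at up to 1080p. The filename is a placeholder, and real-world compatibility also depends on profile/level, frame rate, audio codec and container, which the sketch does not check.

```python
# Rough sketch: probe a local video with ffprobe (part of FFmpeg, which must
# be installed) and compare it against the 4th-generation H.264/1080p limit
# described above. Illustrative only, not an exhaustive compatibility test.
import json
import subprocess

def probe(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,width,height",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0]

def looks_playable_on_atv4(path: str) -> bool:
    s = probe(path)
    return s["codec_name"] == "h264" and s["width"] <= 1920 and s["height"] <= 1080

print(looks_playable_on_atv4("movie.mp4"))   # placeholder filename
```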
The Apple TV 4K, as the name suggests, supports 4K resolutions and HDR, including Dolby Vision. 4K content from sources such as iTunes can be played on a compatible 4K television set. Apple offers H.264 1080p movies and video podcasts on iTunes. In comparison, Blu-ray Disc films are 1080p H.264 or VC-1 video encoded at rates of up to 40 Mbit/s. Apple TV's audio chip supports 7.1 surround sound, and some high definition rentals from iTunes are offered with Dolby Digital 5.1 surround sound. There is an Apple TV export option in QuickTime which allows content in some formats that the device does not support to be easily re-encoded. Applications that use QuickTime to export media can use this; e.g., iMovie's Share menu, iTunes' advanced menu, and some third-party content conversion tools. Connectivity Apple TV streams video through an HDMI cable (Type A) connected to the TV's HDMI port. Audio is supported through the optical or HDMI ports. The device also has a Micro-USB port, which is reserved for service and diagnostics. The device connects through Ethernet or Wi-Fi to the computer for digital content from the Internet and local networks. Apple TV does not come with audio, video or other cables, which must be acquired additionally as required. On the previous Apple TV, media files could be transferred directly onto the device by syncing with another computer. Once content was stored on the device's hard drive, Internet connectivity was no longer needed to view content. This is not the case with the later models, which do not have a hard drive for storing media. The first-generation Apple TV had component video and RCA connector audio ports, both removed in the 2nd generation. The device does not have RCA/composite video or F/RF connectors, but can be tricked into outputting color via composite. Starting with the Apple TV HD, Apple removed the optical audio port. Apple also enhanced the HDMI port by adding support for HDMI 1.4. The 4th generation also removed the Micro-USB port in favor of the reversible USB-C port and the 5th generation removed USB entirely. AirPlay AirPlay allows iOS devices or an AirPort-enabled computer with the iTunes music player to send a stream of music to multiple (three to six, in typical conditions) stereos connected to an AirPort Express (the audio-only antecedent of Apple TV) or Apple TV. The AirPort Express' streaming media capabilities use Apple's Remote Audio Output Protocol (RAOP), a proprietary variant of RTSP/RTP. Using WDS-bridging, the AirPort Express can allow AirPlay functionality (as well as Internet access, file and print sharing, etc.) across a larger distance in a mixed environment of wired and up to 10 wireless clients. Speakers attached to an AirPort Express or Apple TV can be selected from within the "Remote" iPhone/iPod Touch program, allowing full AirPlay compatibility (see "Remote control" section below). A compatible Mac running OS X Mountain Lion or later can wirelessly mirror its screen to an Apple TV through AirPlay Mirroring while one running OS X Mavericks or later can also extend its display with AirPlay Display. Remote control Apple TV can be controlled by many infrared remote controls or paired with the included Apple Remote to prevent interference from other remotes. Either kind of remote can control playback volume, but for music only. The Apple Wireless Keyboard is supported on the second-generation Apple TV and later using the built-in Bluetooth. 
The consumer has the ability to control media playback, navigate menus, and input text and other information. Third-party keyboards that use the Apple layout may also be compatible. On July 10, 2008, Apple released Remote, a free iOS application that allows the iPhone, iPod Touch, and iPad to control the iTunes library on the Apple TV via Wi-Fi. The Apple Watch also has a remote app to control Apple TV. The Remote app was updated on September 13, 2016, to take advantage of all the features of the Apple TV 4; this includes Siri, touchpad, and Home buttons, along with a now-playing screen.

On September 9, 2015, Apple announced the new Siri Remote for the fourth-generation Apple TV (Apple TV HD) (although in some territories Apple has kept the name Apple TV Remote, due to Siri functionality not being enabled there). It is a completely redesigned remote that features dual microphones for Siri support and a glass touch surface for navigation around the interface by swiping or tapping, and for scrubbing to fast forward or rewind. Also, it has a Menu and a Home button, a Siri button to invoke Siri, a Play/Pause button, and a Volume Up/Down button to control the volume on the TV. The Siri Remote communicates with the Apple TV via Bluetooth rather than infrared, removing the requirement of a line of sight with the device. This new remote is only supported by the Apple TV HD and later and will not work with earlier generations.

Siri
Beginning with the Apple TV HD, the remote includes two microphones and a button to activate Siri. Siri on the Apple TV has all of the functions of Siri on iOS 9; it can also respond to requests specifically for the TV. For instance, the viewer can ask Siri to search for a TV show or movie, and it will search across multiple different sources to tell the user where the content is available to watch. It can also do things such as Play/Pause, Rewind/Fast Forward, skip back 15 seconds and temporarily turn on captioning when asked "what did he say?" or "what did she say?", open a specific app, and more.

Software
First generation
The original Apple TV ran a modified build of Mac OS X v10.4 Tiger.

Apple TV Software 1.0
Apple TV Software 1.0 presented the user with an interface similar to that of Front Row. Like Front Row on the Mac, it presented the user with seven options for consuming content: Movies, TV Shows, Music, Podcasts, Photos, Settings, and Sources. It was a modified version of OS X v10.4 Tiger.

Apple TV Software 2.0
In February 2008, Apple released a major free upgrade to the Apple TV, labelled "Take Two" (2.0). This update did away with Front Row and introduced a new interface in which content was organized into six categories, all of which appeared in a large square box on the screen upon startup (movies, TV shows, music, YouTube, podcasts, and photos) and were presented in the initial menu, along with a "Settings" option for configuration, including software updates.

Apple TV Software 3.0
In October 2009, Apple released a minor upgrade for the Apple TV called "Apple TV Software 3.0". This update replaced the interface in version 2.0 with a new interface which presented seven horizontal columns across the top of the screen for the different categories of content (Movies, TV Shows, Music, Podcasts, Photos, Internet, and Settings). This update also added features such as content filtering, iTunes Extras, new fonts, and a new Internet radio app.
One new feature in particular was the 'Genius' playlist option, allowing for easier and more user-friendly playlist creation.

Second and third generation
The 2nd- and 3rd-generation Apple TVs run a version of iOS, rather than the modified Mac OS X of the original model. The interface on Apple TV Software 4 is similar to that of previous versions, with only minor changes and feature additions throughout. In March 2012, Apple released a major new software update, Apple TV Software 5 (iOS 5.1), which shipped with the new 3rd-generation Apple TV. This update completely revised the look of the home screen to make it resemble the icon grid seen on iOS: instead of 7 columns, content and third-party channels are shown in a tiled grid format, which can be rearranged. Over the following years, for Apple TV Software 5–6, Apple released minor revisions, content additions, and feature updates. Apple TV Software 7.0 features a flat look similar to iOS 7 and OS X Yosemite and adds features such as peer-to-peer AirPlay. Version 8.0 was skipped.

Apple TV Software 7.2.2 (iOS 8) is currently available for the Apple TV (3rd generation), as of March 2019. It does not support tvOS 9.0 or later. However, it does support Amazon Video, which was automatically added to those Apple TVs running 7.2.2 on December 6, 2017. In May 2019, Apple TV Software 7.3 (iOS 8.4.2) was released to the public. This was the first update for the 3rd-generation Apple TV since 2016. It adds the new Apple TV app to the home screen; the Apple TV app brings compatibility with the Apple TV Channels service. The update also fixes some security flaws found in Apple TV Software 7.2.2 and earlier. On September 24, 2019, Apple TV Software 7.4 (iOS 8.4.3) was released to the public. On March 24, 2020, Apple TV Software 7.5 (iOS 8.4.4) was released to the public.

HD and 4K
The Apple TV HD and later run an operating system called tvOS, which is not available for the earlier generations of Apple TV. It features an app store, allowing third-party app developers to release their own apps on the platform. The software also features support for Siri voice control. The tvOS software development kit (SDK) for developing tvOS apps is included in Xcode 7.1 and later. A new development feature, App Thinning, is used on the Apple TV running tvOS due to the storage restrictions of the device (32 GB and 64 GB) and the dual use of the NAND flash memory, which both precaches movies from Apple's content servers and stores applications downloaded from the tvOS App Store. Apple's aim is to limit the size of application downloads and to steer users toward downloading individual segments of apps in order to better manage storage space. Developers have reacted with criticism toward the download size limits, arguing that they lead to situations where game data is purged and has to be re-downloaded.

Technical specifications
Limitations
Functionality
Apple TV contains neither a TV tuner nor a personal video recorder. Both capabilities can be added via the connected home computer through various third-party products, such as PVR software that connects to iTunes and enables scheduled HDTV recordings to play automatically via Apple TV. The Apple TV HD and newer can be linked with Wi-Fi-based tuners such as HDHomeRun.
The Front Row interface lacks some iTunes functionality, including rating items, checking the account balance, adding funds to the account, synchronizing from more than one computer, full Internet radio support, and games. The Movies search box only searches the iTunes Store, not local hard drives and networks. Movies rented on Apple TV must be watched on Apple TV, unlike iTunes rentals, which can be transferred to any video-enabled iPod, iPhone, or Apple TV. Movies purchased on Apple TV can be moved to a video-enabled iPod or iPhone via iTunes. Apple TVs prior to the 4th generation (Apple TV HD) do not support the HDMI Consumer Electronics Control (HDMI CEC) protocol. On the Apple TV (2nd generation), digital output audio is up-sampled to 48 kHz, including lossless CD rips at 44.1 kHz. Although this is a higher frequency and the difference is not audible, it is seen by some as falling short of digital data transmission standards, because the audio is not 'bit perfect'.

Sales
1st generation
Within the first week of presales in January 2007, Apple TV was the top pre-selling item at the Apple Store. Orders exceeded 100,000 units by the end of January, and Apple began ramping up to sell over a million units before the 2007 holiday season. Analysts began calling it a "DVD killer" that could enable multiple services. Analysts also predicted that Apple could sell up to 1.5 million units in the first year. Besides the Apple Store, Best Buy was one of the first retailers to carry the device; Target and Costco followed shortly thereafter. Two months into sales, Forrester Research predicted that Apple would only sell a million Apple TV units, because consumers prefer advertisement-supported content over paid content. Forrester predicted that cable companies would be the clear winners over content providers such as the iTunes Store. Shortly after, Apple released YouTube functionality, and Jobs stated that Apple TV was a "DVD player for the Internet". Some market analysts predicted that YouTube on Apple TV "provides a glimpse of this product's potential and its future evolution", but overall, analysts had mixed reactions regarding the future of Apple TV. Some negative reactions followed after Jobs referred to the device as a "hobby", implying it was less significant than the Macintosh, iPod, and iPhone.

In the fourth quarter of 2008, sales were triple those of the fourth quarter of 2007. In Apple's first-quarter 2009 financial results conference call, acting chief executive Tim Cook stated that Apple TV sales had increased threefold over the same quarter a year earlier. Cook mentioned that the movie rental business was working well for Apple and that Apple would continue to invest in movie rentals and Apple TV, but that Apple TV was still considered a hobby for the company. Due to the growth of digital TV and consumers turning to Internet media services, an analyst at the time predicted sales of 6.6 million Apple TVs by the end of 2009.

2nd generation
The second generation sold 250,000 units in the first two weeks it was available. On December 21, 2010, Apple announced that it had sold 1 million units. In the second fiscal quarter of 2011, it had topped 2 million in total sales, with 820,000 sold in that quarter alone. On January 24, 2012, Apple announced it had sold 1.4 million units in the first fiscal quarter of 2012, and 2.8 million units in all of fiscal year 2011 (4.2 million units through January 1, 2012).
3rd generation
Tim Cook announced at the All Things Digital conference in May 2012 that Apple had sold 2.7 million of the 3rd-generation model in 2012. In the Q4 FY2012 earnings call, Engadget reported comments from Tim Cook that Apple had shipped 1.3 million Apple TV units in the 4th quarter (presumed to be 3rd generation). MacObserver reported statements by Tim Cook in the Q1 FY2013 earnings call that Apple sold over 2 million Apple TV units in the December quarter (presumed to be 3rd generation). These reports led to a cumulative volume of the 3rd-generation device of 6 million units, as of January 1, 2013. On February 28, 2014, at Apple's shareholders meeting, Apple CEO Tim Cook announced that in 2013 Apple TV brought in 1 billion dollars of revenue for Apple. A market survey published by Parks Associates in December 2014 found that Apple TV had lost consumer traction to Google Chromecast, garnering only a 17% market share. Tim Cook announced at the Apple Watch conference on March 9, 2015, that Apple had sold a total of 25 million Apple TVs up to that point.

HD and later
In the January 27, 2016, Apple earnings call, CEO Tim Cook stated that the Apple TV had record sales. However, no specific sales figures were mentioned; Apple TV is included in an "Other products" category, which also includes the Apple Watch, iPods, and Beats products, and is not broken down by individual products. In June 2019 it was estimated that there were 53 million units of all generations in use worldwide. In 2019, Apple analyst John Gruber stated that the Apple TV sells at a low profit margin or a loss, saying units are effectively sold at cost.

See also
Comparison of set-top boxes
Mac Mini, which originally featured the Front Row application, a remote-driven 10-foot user interface similar to that of the Apple TV

References
Footnotes

External links
 – official site

2007 establishments in the United States Apple Inc. hardware Computer-related introductions in 2007 Digital media players ITunes Products introduced in 2007 Smart TV Television technology
19924734
https://en.wikipedia.org/wiki/TnFOX
TnFOX
The TnFOX portability toolkit is a fork of the FOX GUI toolkit with most of the added code implementing orthogonal features (i.e. not substantially changing the FOX API) such that applications developed for FOX can be easily recompiled for TnFOX. Unusually, TnFOX also contains API emulations of some Qt library classes - according to the author, this was done to port a substantial project from Qt to FOX. Most of the added features implement a full "portability toolkit" library such that one can write one program to run anywhere - this differs from FOX, which mostly provides GUI portability alone. TnFOX is therefore an example of an operating system abstraction layer. Unlike FOX, TnFOX supports only a restricted set of C++ compilers and operating systems. It only supports Microsoft's C++ compiler v7.1 and later, GCC v3.2 and later, and Intel C++ compiler v8 or later. It also only runs on Windows 2000 or later, Linux 2.6 kernels or later, FreeBSD and Mac OS X 10.4 or later, though on any processor or architecture. TnFOX contains its own automatically generated bindings for Python based on the Boost.Python library - such was their complexity that they required the addition of the -fvisibility feature to GCC from v4 onwards. However, the quality and state of these bindings have languished in recent years. The library is no longer maintained by its author, as of 2012. Substantial Features of TnFOX One of the more original features of TnFOX is its heavy usage of C++ metaprogramming, though this is entirely kept away from the GUI side of things (for FOX compatibility). A full lightweight metaprogramming toolkit is included which allows many of the operations provided by the Boost metaprogramming library, including typelists, type traits, compile-time logic, functors, virtual table compilation, horizontal type list instantiation and more. Unlike Boost, due to the requirement for a modern C++ compiler, all the metaprogramming operates consistently across compilers without the need for ugly macros. Another unusual feature is the use of pervasive exception handling and safety, whereby any line is assumed to always potentially throw an exception. TnFOX provides a C++ rollback mechanism (as originally proposed by Andrei Alexandrescu) to keep track of what operations need to be undone at any given stage should an error occur - rather like a SQL transaction. One problem with this approach is the possibility of nested exception throws during object destruction, which under the ISO C++ standard requires an immediate program termination - TnFOX works around this problem via preprocessing of the source to add extra support code. A quirky addition of the v0.88 release is the use of Google Translate to convert its locale strings. While of dubious utility, especially for shorter strings, it nevertheless may save time during translations. Unlike most portability toolkits, TnFOX has been very extensively profiled and performance tuned for maximum speed and minimal memory usage. It has strong multithreading support, including tuning to avoid two threads writing to the same cache line (which causes cache line bouncing and greatly bottlenecks parallel throughput). It optionally uses its own thread caching memory allocator, automatically uses dynamic algorithms which trade speed for memory usage under low-memory conditions, and has very strong x86- and x64-specific optimisations, including a metaprogramming implementation of SIMD vectors which will automatically compile down into SSE operations (right up to SSE4 support).
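The transaction-style rollback idiom mentioned above can be sketched in a few lines of modern C++. This is only an illustration in the spirit of Alexandrescu's proposal, not TnFOX's actual API; the Rollback class and the Account type are invented for the example:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Collects undo actions for steps that have succeeded so far; if an
// exception propagates before commit(), the destructor rolls them back
// in reverse order - rather like aborting an SQL transaction.
class Rollback {
public:
    void onFailure(std::function<void()> undo) { undos_.push_back(std::move(undo)); }
    void commit() { committed_ = true; }   // all steps succeeded: keep effects
    ~Rollback() {
        if (!committed_)
            for (auto it = undos_.rbegin(); it != undos_.rend(); ++it) (*it)();
    }
private:
    std::vector<std::function<void()>> undos_;
    bool committed_ = false;
};

struct Account {                           // hypothetical type for the example
    int balance = 0;
    void withdraw(int n) { balance -= n; }
    void deposit(int n)  { balance += n; }
};

void transfer(Account& a, Account& b, int amount) {
    Rollback tx;
    a.withdraw(amount);
    tx.onFailure([&] { a.deposit(amount); });  // undo the withdrawal on error
    b.deposit(amount);                         // if this throws, a is restored
    tx.commit();
}
```

Note that the undo actions run inside a destructor and therefore must not themselves throw - precisely the nested-exception hazard under the ISO C++ rules that TnFOX's source preprocessing works around.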
It has its own assembler-written fast mutex implementation, extensive internal caching to avoid syscalls to the kernel and its own inter-process communication framework which can transport arbitrary C++ object instances from one place to another by leveraging metaprogramming to automatically implement serialisation & deserialisation. This is particularly evident in its SQL database interface, where one can simply bind an arbitrary C++ object instance to a BLOB field and no further code is required. TnFOX optionally includes copies of the OpenSSL library and the SQLite library in order to implement its strong encryption and its default SQL database implementation respectively. It can be built modularly as a set of separate DLLs or monolithically. It also has full portable support for host operating system ACL security and knows how to protect sensitive data from entering the swap file, automatically shredding any deleted portions. It can determine which user is running the process and escalate its privileges as necessary. It has a no-GUI build suitable as a portability toolkit for daemon and system processes; this build has no X11/GDI dependencies. TnFOX has already begun to add C++0x features (C++0x being the next revision of the ISO C++ standard) for those compilers which support them. See also Widget toolkit Operating system abstraction layer List of widget toolkits FOX toolkit Qt References External links Project homepage Widget toolkits Free software programmed in C++ Software using the LGPL license
13578970
https://en.wikipedia.org/wiki/Data%20Control%20Block
Data Control Block
In IBM mainframe operating systems, such as OS/360, MVS, z/OS, a Data Control Block (DCB) is a description of a dataset in a program. A DCB is coded in Assembler programs using the DCB macro instruction (which expands into a large number of "define constant" instructions). High level language programmers use library routines containing DCBs. A DCB is one of the many control blocks used in these operating systems. A control block is a data area with a predefined structure, very similar to a C struct, but typically related only to the system's functions. A DCB may be compared to a FILE structure in C, but it is much more complex, offering many more options for various access methods. The control block acted as the application programming interface between Logical IOCS and the application program, and usually was defined within (and resided within) the application program itself. The addresses of I/O subroutines would be resolved during a linkedit phase after compilation or else dynamically inserted at OPEN time. The equivalent control block for the IBM DOS/360, DOS/VSE and z/VSE operating systems is a "DTF" (Define The File). Typical contents of a DCB: a symbolic file name (to match a JCL statement for opening the file); the type of access (e.g. random, sequential, indexed); physical characteristics (blocksize, logical record length); the number of I/O buffers to allocate for processing, to permit overlap of I/O; the addresses of I/O operating system library subroutines (e.g. read/write); and other variables as required by the subroutines according to type. Prototype DCBs Many of the constants and variables contained within a DCB may be left blank (i.e., these default to zero). The OPEN process results in a merge of the constants and variables specified in the DD JCL statement, and the dataset label for existing magnetic tape and direct-access datasets, into the DCB, replacing the zero values with actual, non-zero values. A control block called the JFCB (Job File Control Block) initially holds the information extracted from the DD statement for the dataset. The results of the merge are stored in the JFCB, which may also be written into the DSCB during the CLOSE process, thereby making the dataset definition permanent. An example is the BLKSIZE= variable, which may be (and usually is) specified in the DCB as zero. In the DD statement, the BLKSIZE is specified as a non-zero value and this, then, results in a program-specified LRECL (logical record length) and a JCL-specified BLKSIZE (physical block size), with the merge of the two becoming the permanent definition of the dataset. See also Data Set Control Block (DSCB), a part of VTOC IBM mainframe operating systems
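To make the FILE-structure comparison above concrete, here is a loose C-style sketch of the "typical contents" just listed. The field names are invented for readability and do not correspond to the real DCB layout or to its actual assembler field names:

```cpp
// Illustrative analogue only - not the real DCB layout.
struct dcb_like {
    char ddname[8];       /* symbolic file name, matched against a JCL DD statement */
    int  access_type;     /* e.g. sequential, random, indexed */
    int  blksize;         /* physical block size; 0 = filled in at OPEN from the
                             DD statement or the dataset label (the "merge") */
    int  lrecl;           /* logical record length */
    int  nbuffers;        /* I/O buffers to allocate, permitting overlapped I/O */
    void (*read_fn)(void);   /* addresses of I/O library subroutines, resolved */
    void (*write_fn)(void);  /* at link-edit time or inserted at OPEN time     */
};
```

The zero-means-default convention on blksize mirrors the OPEN-time merge described in the "Prototype DCBs" paragraph above.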
10697466
https://en.wikipedia.org/wiki/Car%20Wars%20%28video%20game%29
Car Wars (video game)
Car Wars is a video game for the Texas Instruments TI-99/4A programmed by Jim Dramis and published by TI in 1981. Car Wars is a clone of the 1979 Sega/Gremlin arcade game Head On. The player controls a car starting at the bottom of the screen and navigates it through an open grid full of dots. The object is to collect all the dots while avoiding crashing into other cars. The player's car is always moving counterclockwise. The player, who can never stop the car or change direction, is only able to control the relative speed of the car and move the car across one or two lanes of the grid. Gameplay The player is represented by a red car, and the computer, at the beginning, uses a yellow car. The player's car always moves counterclockwise around the board, and the computer's cars move clockwise. There are five "lanes," each of which forms a circumference around the board. The outer lanes are larger than the inner ones. The lanes (from inner to outer) have 4, 20, 36, 52, and 68 dots respectively. Each lane is divided into four sectors, with breaks at the 12:00, 3:00, 6:00 and 9:00 points. It is at these locations that lane-changing is possible. The player has the ability to switch up to two lanes at each break, while the computer's cars can only switch one lane at a time. In the center of the screen is a "pit," where spare cars are displayed. The game begins with two cars in the pit. When a level is cleared, a new car is added to the pit, and bonus points are scored. When a crash occurs, a car is removed from the pit. A maximum of four cars can occupy the pit at any given time. If a level is cleared with four cars in the pit, bonus points are scored, but no new cars are added. When the game starts, the player's red car and the computer's yellow car face off back-to-back at the bottom of the screen (6:00 position), and start off moving in opposite directions. The player has the ability to speed up at any time by pressing the joystick button. The object is to clear all 180 dots from the board, which will result in bonus points, a new car being added to the pit (unless full), and advancement to the next level. The maximum score is 99,990. The game ends when the player crashes into a computer car while no spare cars are in the pit. After one level is cleared, the starting position of the computer's yellow car moves to the left of the screen (9:00 position). When the second screen is cleared, the yellow car moves back to its original position, but the computer adds a new blue car, which starts at the left. After this level is cleared, the blue car shifts to the second-outermost lane. On the fifth level, the computer adds a third car, which is green. The three cars start at different positions, depending on the level. No new cars are added from this point on. Options Prior to the game, the player can select playing options from a menu. These determine the speed of the player's and computer's cars and when the computer cars speed up. There are three speeds, identified as creepin', fast, and flyin'. Both the player and the computer will move at the selected speed. The three speed-up options are titled late, early, and look-out!. The computer's cars speed up after clearing 150, 120, and 90 dots, respectively. References External links Car Wars at Videogame House 1981 video games Texas Instruments TI-99/4A games Maze games Video game clones Video games developed in the United States
1644183
https://en.wikipedia.org/wiki/Gordon%20the%20Gopher
Gordon the Gopher
Gordon the Gopher, also known as Gordon T. Gopher, is an English puppet gopher who first appeared on Children's BBC (CBBC) between 1985 and 1987, alongside presenter Phillip Schofield on the interstitial, or in-vision continuity, programme The Broom Cupboard. His main puppeteer was Paul Smith, who would later go on to a career as a BBC executive. He is a yellow puppet gopher with red paws resembling lobster claws. Career Early work Gordon's first appearances were on CBBC between 1985 and 1987, presenting television shows with Phillip Schofield on the interstitial programme The Broom Cupboard. In 1987, Gordon and Schofield, with Sarah Greene, went on to present the Saturday morning show Going Live!. On one occasion, Gordon was famously attacked by a puppy that had been brought on to the show. In 1988, Gordon and Schofield were replaced on The Broom Cupboard by Edd the Duck and Andy Crane. In 1991, Gordon had a series named after himself, shown on CBBC on BBC One and BBC Two, in which he appeared with his friend and colleague Phillip Schofield; it ran from 3 January 1991 to 28 March 1991, lasting a single series of 13 episodes. The series was shown twice on BBC One, first from January to March 1991 and again from 26 October to 21 December 1992, continuing where BBC Two's lunchtime repeats of summer 1991 had left off. BBC Two also repeated the series at lunchtimes several times: from 18 June to 23 July 1991, 20 September to 6 December 1993, 9 March to 1 June 1994, over the Christmas season of 1994 (22 and 23 December), and from 17 January 1995 to 28 March 1995. It has not been repeated on the BBC since 28 March 1995. In 1990, he appeared on a children's programme called Scrooge – A Christmas Sarah. Later work During his 2005 Room 101 appearance, Schofield made an attempt to place Gordon in Room 101 (i.e., consign him to the past), but in an audience vote Gordon was spared. On 26 February 2006, Gordon briefly appeared at the end of Channel 4's The 100 Greatest Funny Moments. In December 2006, he returned to the screen in the Going Live! segment of the BBC special It Started With... Swap Shop. Gordon also made a brief appearance with Schofield during a 1980s-themed edition of Dancing on Ice in February 2009. He also made a brief appearance on the 5 February 2012 edition. Gordon appeared on This Morning on 13 September 2010 to celebrate Phillip Schofield's 25th anniversary of first presenting CBBC. Schofield said "I miss him". In September 2013, Gordon appeared on an episode of Celebrity Juice. In August 2015, Gordon appeared in an online short where he returned to the BBC and got a job as a cleaner. In this short, he was voiced by Warwick Davis. In September of that year he made a brief appearance in a special that aired on the CBBC channel called Hacker's 30th Birthday Bash, in which Hacker T. Dog interviewed Phillip Schofield and reunited the two. Puppeteers Gordon was operated by several people in The Broom Cupboard, but the person who did it longest was BBC TV executive John Thompson, who also operated Gordon for the whole run of Going Live! Warrick Brownlow-Pike performed him for his appearances on This Morning and Hacker's 30th Birthday Bash. Influence Gordon's famous leather jacket was a gift from Adam Ant, who made and decorated the jacket himself. Ant had befriended Gordon while being interviewed on Going Live! in February 1990 to promote his new single, and described Gordon as "one of the most interesting people" he'd met.
Gordon was parodied by comedian Brian Conley as "Larry the Loafer," puppet sidekick of sarcastic children's TV presenter Nick Frisbee. The skit is one of Conley's most widely remembered, along with its catchphrase "It's a puppet!" In 2006, Gordon was mentioned on Little Britain Abroad in the first Lou and Andy sketch. Lou tells Andy that he's planning to take Andy to Disney World. When Lou asks Andy who he's looking forward to meeting, Andy replies "Gordon the Gopher." References Television characters introduced in 1985 Fictional gophers British comedy puppets
31550
https://en.wikipedia.org/wiki/Ted%20Nelson
Ted Nelson
Theodor Holm Nelson (born June 17, 1937) is an American pioneer of information technology, philosopher, and sociologist. He coined the terms hypertext and hypermedia in 1963 and published them in 1965. Nelson also coined the terms transclusion, virtuality, and intertwingularity (in Literary Machines). According to a 1997 Forbes profile, Nelson "sees himself as a literary romantic, like a Cyrano de Bergerac, or 'the Orson Welles of software'." Early life and education Nelson is the son of Emmy Award-winning director Ralph Nelson and Academy Award-winning actress Celeste Holm. His parents' marriage was brief and he was mostly raised by his grandparents, first in Chicago and later in Greenwich Village. Nelson earned a B.A. in philosophy from Swarthmore College in 1959. While there, he made an experimental humorous student film, The Epiphany of Slocum Furlow, in which the titular hero discovers the meaning of life. His contemporary at the college, musician and composer Peter Schickele, scored the film. Following a year of graduate study in sociology at the University of Chicago, Nelson in 1960 began graduate work in "Social Relations", then a department at Harvard University specializing in sociology, ultimately earning an A.M. in sociology from the Department of Social Relations in 1962. After Harvard, Nelson was a photographer and filmmaker for a year at John C. Lilly's Communication Research Institute in Miami, Florida, where he briefly shared an office with Gregory Bateson. From 1964 to 1966, he was an instructor in sociology at Vassar College. During college and graduate school, he began to envision a computer-based writing system that would provide a lasting repository for the world's knowledge, and also permit greater flexibility in drawing connections between ideas. This came to be known as Project Xanadu. Much later in life, in 2002, he obtained his Ph.D. in media and governance from Keio University. Project Xanadu Nelson founded Project Xanadu in 1960, with the goal of creating a computer network with a simple user interface. The effort is documented in the books Computer Lib / Dream Machines (1974), The Home Computer Revolution (1977) and Literary Machines (1981). Much of his adult life has been devoted to working on Xanadu and advocating for it. Throughout his career, Nelson supported his work on the project through a variety of administrative, academic and research positions and consultancies, including stints at Harcourt Brace and Company (a technology consultancy and assistantship typified by the creation of the Xanadu moniker and an early meeting with Douglas Engelbart, who later became a close friend; 1966-1967), Brown University (a tumultuous consultancy on the Nelson-inspired Hypertext Editing System and File Retrieval and Editing System with Swarthmore friend Andries van Dam's group; c. 1967-1969), Bell Labs (classified hypertext-related defense research; 1968-1969), CBS Laboratories ("writing and photographing interactive slide shows for their AVS-10 instructional device"; 1968-1969), the University of Illinois at Chicago (an interdisciplinary staff position; 1973-1976) and Swarthmore College (a lectureship in computing; 1977). Nelson also conducted research and development under the auspices of the Nelson Organization (founder and president; 1968-1972) and the Computopia Corporation (co-founder; 1977-1978).
Clients of the former firm included IBM, Brown University, Western Electric, the University of California, the Jewish Museum, the Fretheim Chartering Corporation and the Deering-Milliken Research Corporation. He has alleged that the Nelson Organization was envisaged as a clandestine funding conduit for the Central Intelligence Agency, which expressed interest in Project Xanadu at an early juncture; however, the promised funds failed to materialize after several benchmarks were met. From 1980 to 1981, he was the editor of Creative Computing. At the behest of Xanadu developers Mark S. Miller and Stuart Greene, Nelson joined San Antonio, Texas-based Datapoint as chief software designer (1981-1982), remaining with the company as a media specialist and technical writer until its Asher Edelman-driven restructuring in 1984. Following several San Antonio-based consultancies and the acquisition of Xanadu technology by Autodesk in 1988, he continued working on the project as a non-managerial Distinguished Fellow in the San Francisco Bay Area until the divestiture of the Xanadu Operating Group in 1992–1993. After holding visiting professorships in media and information science at Hokkaido University (1995-1996), Keio University (1996-2002), the University of Southampton and the University of Nottingham, he was a Fellow (2004-2006) and Visiting Fellow (2006-2008) of the Oxford Internet Institute in conjunction with Wadham College, Oxford. More recently, he has taught classes at Chapman University and the University of California, Santa Cruz. The Xanadu project itself failed to flourish, for a variety of reasons which are disputed. Journalist Gary Wolf published an unflattering history of Nelson and his project in the June 1995 issue of Wired, calling it "the longest-running vaporware project in the history of computing". On his own website, Nelson expressed his disgust with the criticisms, referring to Wolf as "Gory Jackal", and threatened to sue him. He also outlined his objections in a letter to Wired, and released a detailed rebuttal of the article. As early as 1972, a demonstration iteration developed by Cal Daniels failed to reach fruition when Nelson was forced to return the project's rented Data General Nova minicomputer due to financial exigencies. Nelson has stated that some aspects of his vision are being fulfilled by Tim Berners-Lee's invention of the World Wide Web, but he dislikes the World Wide Web, XML and all embedded markup – regarding Berners-Lee's work as a gross over-simplification of his original vision: HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management. Jaron Lanier explains the difference between the World Wide Web and Nelson's vision, and the implications: A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson's] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. ... Two-way linking would preserve context. It's a small simple change in how online information should be stored that couldn't have vaster implications for culture and the economy. Other projects In 1957, while a student, Nelson co-wrote and co-produced what he describes as a pioneering Rock Musical. Entitled "Anything and Everything", it was produced and performed at Swarthmore College. 
In 1965, he presented the paper "Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate" at the ACM National Conference, in which he coined the term "hypertext". In 1976, Nelson co-founded and briefly served as the advertising director of the "itty bitty machine company", or "ibm", a small computer retail store that operated from 1977 to 1980 in Evanston, Illinois. The itty bitty machine company was one of the few retail stores to sell the Apple I computer. In 1978, he had a significant impact upon IBM's thinking when he outlined his vision of the potential of personal computing to the team that three years later launched the IBM PC. From the 1960s to the mid-2000s, Nelson built an extensive collection of direct advertising mail he received in his mailbox, mainly from companies selling products in IT, print/publishing, aerospace, and engineering. In 2017, the Internet Archive began to publish it online in scanned form, in a collection titled "Ted Nelson's Junk Mail Cartons". ZigZag As of 2011, Nelson was working on a new information structure, ZigZag, which is described on the Xanadu project website, which also hosts two versions of the Xanadu code. He also developed XanaduSpace, a system for the exploration of connected parallel documents (an early version of this software may be freely downloaded). Influence and recognition In January 1988 Byte magazine published an article about Nelson's ideas, titled "Managing Immense Storage". This stimulated discussions within the computer industry, and encouraged people to experiment with Hypertext features. In 1998, at the Seventh WWW Conference in Brisbane, Australia, Nelson was awarded the Yuri Rubinsky Memorial Award. In 2001, he was knighted by France as Officier des Arts et Lettres. In 2007, he celebrated his 70th birthday by giving an invited lecture at the University of Southampton. In 2014, ACM SIGCHI honored him with a Special Recognition Award. Neologisms Nelson is credited with coining several new words that have come into common usage especially in the world of computing. Among them are: "hypertext" and "hypermedia", both coined by Nelson in 1963 and first published in 1965 transclusion virtuality intertwingularity Popu-litism, a combination of "populism" and "elite" Publications Many of his books are published through his own company, Mindful Press. Life, Love, College, etc. (1959) Computer Lib: You can and must understand computers now / Dream Machines: New freedoms through computer screens—a minority report (1974), Microsoft Press, revised edition 1987: The Home Computer Revolution (1977) Literary Machines: The report on, and of, Project Xanadu concerning word processing, electronic publishing, hypertext, thinkertoys, tomorrow's intellectual revolution, and certain other topics including knowledge, education and freedom (1981), Mindful Press, Sausalito, California; publication dates as listed in the 93.1 (1993) edition: 1980–84, 1987, 1990–93 The Future of Information (1997) A Cosmology for a Different Computer Universe: Data Model, Mechanisms, Virtual Machine and Visualization Infrastructure. Journal of Digital Information, Volume 5 Issue 1. Article No. 
298, July 16, 2004 Geeks Bearing Gifts: How The Computer World Got This Way (2008; Chapter summaries) POSSIPLEX: Movies, Intellect, Creative Control, My Computer Life and the Fight for Civilization (2010), autobiography, published by Mindful Press via Lulu References External links Ted Nelson's homepage Ted Nelson's homepage at xanadu.com.au Ted Nelson on YouTube Ted Nelson on Patreon Transliterature – A Humanist Design : An Interview with Ted Nelson, 1999. Software and Media for a New Democracy a talk given by Ted Nelson at the File festival Symposium/November/2005. "We Are the Web". Wired article, recalling interview with Nelson, August 2005. , a talk given by Ted at Google, January 29, 2007. Ted Nelson Possiplex Internet Archive book reading video, October 8, 2010. Ted Nelson original interview footage from PBS's Machine That Changed the World, 1990. Video excerpts of a dinner at Howard Rheingold's home with Doug Englebart and Ted Nelson, August 18, 2010. , August 18, 2014. 1937 births American educators American expatriates in the United Kingdom American philosophers American sociologists Fellows of Wadham College, Oxford Harvard University alumni Internet pioneers Living people American people of Swedish descent American people of Norwegian descent Swarthmore College alumni Keio University alumni Officiers of the Ordre des Arts et des Lettres
23883521
https://en.wikipedia.org/wiki/Mandriva
Mandriva
Mandriva S.A. was a public software company specializing in Linux and open-source software. Its corporate headquarters was in Paris, and it had development centers in Metz, France and Curitiba, Brazil. Mandriva, S.A. was the developer and maintainer of a Linux distribution called Mandriva Linux, as well as various enterprise software products. Mandriva was a founding member of the Desktop Linux Consortium. History Mandriva, S.A. began as MandrakeSoft in 1998. In February 2004, following lengthy litigation with the Hearst Corporation over the name "Mandrake" (the Hearst Corporation owned a comic strip called Mandrake the Magician), MandrakeSoft was required to change its name. Following the acquisition of the Brazilian Linux distribution Conectiva in February 2005, the company's name was changed on 7 April 2005 to "Mandriva" to reflect the names "MandrakeSoft" and "Conectiva." On October 4, 2004, MandrakeSoft acquired the professional support company Edge IT, which focused on the corporate market in France and had six employees. On June 15, 2005, Mandriva acquired Lycoris (formerly, Redmond Linux Corporation). On October 5, 2006, Mandriva signed an agreement to acquire Linbox, a Linux enterprise software infrastructure company. The agreement included the acquisition of all shares of Linbox for a total of $1.739 million in Mandriva stock, plus an earn-out of up to $401,000 based on the 2006 Linbox financials. In 2007, Mandriva reached a deal with the Government of Nigeria to put their operating system on 17,000 school computers. On January 16, 2008, Mandriva and Turbolinux announced a partnership to create a lab named Manbo-Labs, to share resources and technology to release a common base system for both companies' Linux distributions. Although Mandriva's operating system eventually became a significant entity in the data center, the company's operating margins were thin and by 2012 the company was on the brink of bankruptcy. On January 30, 2012, Mandriva announced that a bid by an external entity had been rejected by a minority shareholder and the deal did not go through. At the end of the first half of 2012, a solution to the situation that had arisen in January of the same year was found and a settlement achieved. Mandriva was subsequently owned by several different shareholders. Mandriva filed for administrative receivership in early 2015, and was liquidated on May 22, 2015. The Mandriva Linux distribution continues to survive as OpenMandriva Lx. Notable forks include Mageia Linux and ROSA Linux. Mandriva Club In addition to selling Linux distributions through its online store and authorized resellers, Mandriva previously sold subscriptions to the Mandriva Club. There were several levels of membership, at costs ranging from US$66 or €60 per year (as of 2007) to €600 per year. Club members gained access to the Club website, additional mirrors and torrents for downloading, free downloads of its boxed products (depending on membership level), interim releases of the Mandriva Linux distribution, and additional software updates. For example, only Gold-level and higher members could download Powerpack+ editions. Many Mandriva commercial products came with short-term membership in the club; however, Mandriva Linux was completely usable without a club membership. When Mandriva Linux 2008.0 was released in October 2007, Mandriva made club membership free of charge to all comers, splitting download subscriptions off into a separate service.
Mandriva also had a Mandriva Corporate Club for larger organizations. Products Mandriva Linux A Linux distribution. Pulse² Open-source software for application deployment, inventory, and maintenance of an IT network, also available as a SaaS version as of November 2012. Mandriva Business Server A Linux-based server operating system. Mandriva Class E-learning software enabling distributed, long-distance virtual classrooms. References Free software companies Linux companies Mandriva Linux Software companies of France
5387453
https://en.wikipedia.org/wiki/CMU%20Sphinx
CMU Sphinx
CMU Sphinx, also called Sphinx for short, is the general term for a group of speech recognition systems developed at Carnegie Mellon University. These include a series of speech recognizers (Sphinx 2 - 4) and an acoustic model trainer (SphinxTrain). In 2000, the Sphinx group at Carnegie Mellon committed to open source several speech recognizer components, including Sphinx 2 and later Sphinx 3 (in 2001). The speech decoders come with acoustic models and sample applications. The available resources also include software for acoustic model training, software for language model compilation, and a public-domain pronunciation dictionary, cmudict. Sphinx encompasses a number of software systems, described below. Sphinx Sphinx is a continuous-speech, speaker-independent recognition system making use of hidden Markov acoustic models (HMMs) and an n-gram statistical language model. It was developed by Kai-Fu Lee. Sphinx demonstrated the feasibility of continuous-speech, speaker-independent, large-vocabulary recognition, the possibility of which was in dispute at the time (1986). Sphinx is of historical interest only; it has been superseded in performance by subsequent versions. An archival article describes the system in detail. Sphinx 2 A fast performance-oriented recognizer, originally developed by Xuedong Huang at Carnegie Mellon and released as open source with a BSD-style license on SourceForge by Kevin Lenzo at LinuxWorld in 2000. Sphinx 2 focuses on real-time recognition suitable for spoken language applications. As such it incorporates functionality such as end-pointing, partial hypothesis generation, dynamic language model switching and so on. It is used in dialog systems and language learning systems. It can be used in computer-based PBX systems such as Asterisk. Sphinx 2 code has also been incorporated into a number of commercial products. It is no longer under active development (other than for routine maintenance). Current real-time decoder development is taking place in the PocketSphinx project. An archival article describes the system. Sphinx 3 Sphinx 2 used a semi-continuous representation for acoustic modeling (i.e., a single set of Gaussians is used for all models, with individual models represented as a weight vector over these Gaussians). Sphinx 3 adopted the prevalent continuous HMM representation and has been used primarily for high-accuracy, non-real-time recognition. Recent developments (in algorithms and in hardware) have made Sphinx 3 "near" real-time, although not yet suitable for critical interactive applications. Sphinx 3 is under active development and in conjunction with SphinxTrain provides access to a number of modern modeling techniques, such as LDA/MLLT, MLLR and VTLN, that improve recognition accuracy (see the article on Speech Recognition for descriptions of these techniques). Sphinx 4 Sphinx 4 is a complete rewrite of the Sphinx engine with the goal of providing a more flexible framework for research in speech recognition, written entirely in the Java programming language. Sun Microsystems supported the development of Sphinx 4 and contributed software engineering expertise to the project. Participants included individuals at MERL, MIT and CMU. (Currently supported languages are C, C++, C#, Python, Ruby, Java, and JavaScript.) Current development goals include: developing a new (acoustic model) trainer; implementing speaker adaptation (e.g.
MLLR); improving configuration management; and creating a graph-based UI for graphical system design. PocketSphinx A version of Sphinx that can be used in embedded systems (e.g., based on an ARM processor). PocketSphinx is under active development and incorporates features such as fixed-point arithmetic and efficient algorithms for GMM computation. See also Speech recognition software for Linux List of speech recognition software Project LISTEN References External links CMU Sphinx homepage Sphinx' repository on GitHub should be considered the definitive source for code SourceForge hosts older releases and files NeXT on Campus Fall 1990 (This document is in PostScript format, compressed with gzip.) Carnegie Mellon University - Breakthroughs in speech recognition and document management, pgs. 12-13 Free software projects Speech recognition software Software using the BSD license
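As an illustration of the PocketSphinx decoder described above, here is a minimal batch-decoding loop against the PocketSphinx C API. The call signatures follow the widely documented 5prealpha release (they changed slightly between versions), the model and audio paths are placeholders, and error checking is omitted, so treat this as a sketch rather than definitive usage:

```cpp
#include <pocketsphinx.h>
#include <stdio.h>

int main(void) {
    /* Point the decoder at an acoustic model, language model and dictionary. */
    cmd_ln_t *config = cmd_ln_init(NULL, ps_args(), TRUE,
                                   "-hmm",  "model/en-us",
                                   "-lm",   "model/en-us.lm.bin",
                                   "-dict", "model/cmudict-en-us.dict",
                                   NULL);
    ps_decoder_t *ps = ps_init(config);

    FILE *fh = fopen("utterance.raw", "rb");  /* 16 kHz, 16-bit mono PCM */
    int16 buf[512];
    size_t n;
    int32 score;

    ps_start_utt(ps);
    while ((n = fread(buf, sizeof(int16), 512, fh)) > 0)
        ps_process_raw(ps, buf, n, FALSE, FALSE);  /* feed raw samples */
    ps_end_utt(ps);

    printf("hypothesis: %s\n", ps_get_hyp(ps, &score));

    fclose(fh);
    ps_free(ps);
    cmd_ln_free_r(config);
    return 0;
}
```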
2753408
https://en.wikipedia.org/wiki/Trojan%20War%20%28film%29
Trojan War (film)
Trojan War is a 1997 American romantic comedy film directed by George Huang and starring Will Friedle, Jennifer Love Hewitt, and Marley Shelton. The film was a critical and box office disaster. Produced for $15 million, it made only $309 in ticket sales because it was played in a single movie theater and was pulled after only a week. Plot High school student Brad (Friedle) has had an unrequited crush on a classmate named Brooke (Marley Shelton) for years. After she asks him to come over one night to tutor her, she ends up wanting to have sex with him. But she only wants safe sex, and he does not have a condom (the use of Trojan in the title is a pun on the condom brand of the same name). In his quest to buy some condoms, he runs into all sorts of trouble: his dad's Jaguar gets stolen and then wrecked; he has a run-in with a crazy bus driver (Anthony Michael Hall); he is held hostage; he is pursued by a school janitor (Paulo Tocha) who accuses him of drawing graffiti, by an odd pair of Hispanic siblings (Christine Deaver and Mike Moroff) who think he looks like David Hasselhoff, by Brooke's dog, by Brooke's jealous boyfriend Kyle (Eric Balfour), and by a homeless man (David Patrick Kelly) who wants two dollars from him (and has secretly stolen his wallet); and he is arrested. After all of this and finally receiving a condom from a police officer (Lee Majors, who played Steve Austin in the 1970s TV series The Six Million Dollar Man; Majors' policeman character here is named "Officer Austin" as a nod to that earlier well-known role) who releases him, he realizes that the perfect girl has been there for him all along: his best friend Leah (Jennifer Love Hewitt), who, unbeknownst to Brad, has had feelings for him for a long time. Finally, Brad realizes his own feelings for Leah while also discovering Brooke is not as great as he thought she was, after he finds out that she only wants a one-night stand with him instead of a relationship. Brad runs out to find Leah and professes his feelings to her, and they kiss by moonlight. After the end credits, Brad's parents are shocked by the sight of what is left of their car after the tow truck driver brings it back. Cast Will Friedle as Brad Kimble Jennifer Love Hewitt as Leah Jones Marley Shelton as Brooke Kingsley Danny Masterson as Seth Jason Marsden as Josh Eric Balfour as Kyle Lee Majors as Officer Austin John Finn as Ben Kimble Wendie Malick as Beverly Kimble Jennie Kwan as Trish Charlotte Lopez as Nina Christine Deaver as Latin Mama Mike Moroff as Big Brother Lobo Sebastian as Lead Homeboy Joe Cerrano as Biggest Homeboy Julian Cegario as Homeboy Paulo Tocha as Janitor Anthony Michael Hall as Bus Driver David Patrick Kelly as Bagman Danny Trejo as Scarface Production Prior to the film's release, it was noted that there were similarities between its condom plot and another film in development, Booty Call, which featured an all-black cast and which would also be released in 1997. Booty Call was written without knowledge of Trojan War's existence.
Music Songs featured in the motion picture: "I'll Fall With Your Knife" - Performed by Peter Murphy "Disappear" - Performed by Letters To Cleo "You Are Here" - Performed by Star 69 "All Five Senses" - Performed by Pomegranate "The Word Behind Words" - Performed by Jeremy Toback "The Love You Save" - Performed by Madder Rose "Snakebellies" - Performed by Fu Manchu "The Boys Are Back In Town" - Performed by The Cardigans "I Hope I Don't Fall In Love With You" - Performed by Jennifer Love Hewitt "You're One" - Performed by Imperial Teen "I Have A Date" - Performed by The Vandals "Underdog" - Performed by astroPuppees "Next To You" - Performed by Dance Hall Crashers "Yo Soy El Son Cubano" - Performed by Parmenio Salazar "Mistreated" - Performed by Shufflepuck "Don't Be" - Performed by astroPuppees "Disco Inferno" - Performed by The Trammps "American Girl" - Performed by Everclear "American City World" - Performed by Triple Fast Action "What a Bore" - Performed by Muzzle "Boom, Boom, Boom" - Performed by Juster "I Believe In" - Performed by Jennifer Love Hewitt "I'll Fall With Your Knife" - Performed by Tom Hiel "Trouble" - Performed by Shampoo "I've Got a Flair" - Performed by Fountains of Wayne "Can't Hold Me Down" - Performed by Schleprock "The Burning" - Performed by Teta Vega Box office The film was released in only a single movie theater and was pulled after only one week. It earned a total of $309 against a production budget of $15 million. As of 2007 it was the fifth-lowest-grossing film since modern record-keeping began in the 1980s. Dade Hayes of Variety magazine explained that a single-theater release is more about fulfilling contractual obligations than anything to do with audience reaction to the film. Reception Nathan Rabin of The Onion's The A.V. Club wrote: "It may be formulaic, predictable and as substantial as a Little Debbie snack cake, but as a loving, inane throwback to the golden age of the Brat Pack and the two Coreys, it's irresistible." Charles Tatum of efilmcritic wrote: "Sometimes, a movie comes along that makes you want to sob, and not in the good way." References External links 1997 films 1990s teen comedy films 1990s romantic comedy films Films directed by George Huang Films scored by George S. Clinton Warner Bros. films 1997 comedy films
573838
https://en.wikipedia.org/wiki/PC-Write
PC-Write
PC-Write was a computer word processor and was one of the first three widely popular software products sold via the marketing method that became known as shareware. It was originally written by Bob Wallace in early 1983. Overview PC-Write was a modeless editor, using control characters and special function keys to perform various editing operations. By default it accepted many of the same control key commands as WordStar while adding many of its own features. It could produce plain ASCII text files, but there were also features that embedded control characters in a document to support automatic section renumbering, bold and italic fonts, and the like. A feature that was useful in list processing (as used in AutoLISP) was its ability to find matching open and close parentheses "( )"; this matching operation also worked for the other paired characters: { }, [ ] and < >. Lines beginning with particular control characters and/or a period (.) contained commands that were evaluated when the document was printed, e.g. to specify margin sizes, select elite or pica type, or to specify the number of lines of text that would fit on a page, much like escape sequences. While Quicksoft distributed copies of PC-Write for $10, the company encouraged users to make copies of the program for others in an early example of shareware. Quicksoft asked those who liked PC-Write to send it $75. The sum provided a printed manual (notable for its many pictures of cats, drawn by Megan Dana-Wallace), telephone technical support, source code, and a registration number that the user entered into their copy of the program. If anyone else paid the company $75 to purchase an already-registered copy of the software, the company paid a $25 commission back to the original registrant, and then issued a new number to the new buyer, thereby giving a financial incentive for buyers to distribute and promote the software. A configuration file allowed customizing PC-Write, including remapping the keyboard. Later versions of the registered (paid-for) version of the program included a thesaurus (which was not shareware) along with the editor. In addition, vocabulary was available in other languages, such as German. Utilities were also provided to convert PC-Write files to and from other file formats that were common at the time. One limitation of the software was its inability to print directly from memory - because the print function was a separate subprogram, a document had to be saved to a file before it could be printed. Bob Wallace found that running Quicksoft used so much of his time that he could not improve the PC-Write software. In early 1991, he sold the firm to another Microsoft alumnus, Leo Nikora, the original product manager for Windows 1.0 (1983–1985). Wallace returned to full-time programming and an updated version of PC-Write was released in June 1991. One unusual feature of PC-Write was its implementation of free-form editing: it could copy and paste a rectangular block of text anywhere. For instance, if one had a block of information, one entry per line, in the format Name (spaces) Address, one could highlight only the addresses section and paste that into the right-hand part of a page. Today, Emacs and jEdit are also capable of performing this function. When the market changed to multi-program software (office suites combining word processing, spreadsheet, and database programs), Quicksoft went out of business in 1993.
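The paired-character matching described above is essentially a depth-counting scan. A minimal sketch follows; the function name and details are invented for illustration:

```cpp
#include <cstddef>
#include <string>

// Given the index of an opening character, return the index of its matching
// partner among ( ), { }, [ ] and < >, or std::string::npos if unbalanced.
std::size_t match_forward(const std::string& text, std::size_t open_pos) {
    const std::string opens = "([{<", closes = ")]}>";
    std::size_t kind = opens.find(text[open_pos]);
    if (kind == std::string::npos) return std::string::npos;  // not an opener
    int depth = 0;
    for (std::size_t i = open_pos; i < text.size(); ++i) {
        if (text[i] == opens[kind]) ++depth;                  // nested opener
        else if (text[i] == closes[kind] && --depth == 0) return i;
    }
    return std::string::npos;                                 // no partner found
}
```

Called on the first "(" of "(defun f (x) x)", for example, it skips the nested pair and returns the index of the final ")" - the behaviour that made the feature handy for Lisp-style lists.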
The first Trojan horse (appearing in 1986), PC-Write Trojan, masqueraded as "version 2.72" of the shareware word processor PC-Write. Quicksoft did not release a version 2.72. PC-Write had one of the first "as you type" (real-time) spell checkers; earlier spell checkers worked only in batch mode. The Brown Bag Word Processor is based on PC-Write's source code, licensed by Brown Bag Software, with some minor modifications and additions. Reception PC Magazine stated that version 1.3 of PC-Write "rates extremely well and compares favorably with many word processors costing much more". It cited very fast performance, good use of color, and availability of source code as advantages, and the lack of built-in support for printing bold or underline and of keyboard macros as disadvantages. Compute! complimented the software's "clean implementation of standard editing features", cited its "truly staggering" level of customization, and after mentioning a few flaws stated that they should be "viewed in context of the program's overall excellence". See also Andrew Fluegelman Jim Knopf, also known as Jim Button PC-File PC-Talk References External links PC-WRITE: Quality Word Processing at a Price That's Hard to Beat Review of PC-Write in COMPUTERS and COMPOSITION 2(4), August 1985, page 78. 1983 software Shareware Word processors DOS text editors
22509052
https://en.wikipedia.org/wiki/Dingoo
Dingoo
The Dingoo is a handheld gaming console that supports music and video playback and open game development. The system features an on-board radio and recording program. It is available to consumers in three colors: white, black, and pink. It was released in February 2009 and has since sold over 1 million units. Other versions of the console include the Dingoo A330 and Dingoo A380. Dingoo focuses on games and media products, and is located in the Futian District, Shenzhen.
Hardware Specifications
Internal storage: 1/2/4 GB flash
Additional storage: MiniSD/SDHC (MicroSD/SDHC with adapter)
Input: D-pad, 2 shoulder, 4 face, Start & Select buttons, microphone
Outputs: stereo speakers, headphone jack & TV-out with included cable
I/O: Mini-USB 2.0 connector
Battery: 3.7 V, 1700-1800 mAh (6.29 Wh) Li-Ion, approx. 7 hours run time
Video playback: RM, MP4, 3GP, AVI, ASF, MOV, FLV, MPEG
Audio playback: MP3, WMA, APE, FLAC, RA
Radio: digital FM tuner
Recording: supports digital recording of voice (MP3 and WMA formats) and FM radio at 8 kHz
Software support: free official SDKs available
Dimensions: 125 × 55.5 × 14 mm (4.92 × 2.17 × 0.59 in)
Function
Games
Original: original games in two different languages (USA (U) and Chinese (C)) for the native OS are included: 7 Days - Salvation (U), Ali Baba (U), Amiba's Candy (U), Block Breaker (U), Decollation Warrior [God of War Criminal Day] (U), Dingoo Link Em Up (U), Dingoo Snake (U), Hell Striker II [World Road] (U), Landlord (U), Nose Breaker (U), Puzzle Bobble [PoPo Bash] (C), Tetris (U), Ultimate Drift (U), Yi-Chi King Fighter (C), Zhao Yun Chuan (C).
Homebrew: homebrew / public domain (PD) games for all operating systems (OS) can be added manually. For the native OS: 15, Aothello, Arcade Volleyball, Astro Lander, Biniax 2, BlueCube4D, Brickomania, Cave Story, Chip World, Color Lines, Commander Koon [Commander Keen], Connect Four - Zero Gravity, Digger, Dooom, Game & Watch - Formula 1, HexaVirus, Manic Miner, MineSweeper, mRPG, Mushroom Roulette, New RAW [Another World], Quake, Rubido, SameGoo, SomeTris, Spartak-Chess, Spear of Destiny, Spoout, szSokuban760, szSudoku760 [Sudoku Platinum], TCGS Car, The Last Mission, TowerToppler [Nebulus], Vectoroids, Vorton, Wolfenstein 3-D, Wubtris, XRickOO [Rick Dangerous]. For Dingux: Duke Nukem 3D.
Emulation
Official: GBA, NES, Neo Geo, SNES, CPS-1, CPS-2, Sega Mega Drive/Genesis.
Community-based: Atari 800 8-bit computers, Atari 2600, Atari 5200, Atari 7800, Atari Lynx, ColecoVision, Commodore 64, Commodore Amiga, Magnavox Odyssey 2, MSX (openMSX Dingux), Neo Geo, Neo Geo Pocket, Nintendo Game Boy and Game Boy Color, Nintendo Game Boy Advance, PC Engine, PlayStation (Dingux only), Sega Genesis/Mega Drive/Mega-CD (Dingux only), Sega Master System, SG-1000 and Sega Game Gear (in progress, working for most games), WonderSwan and WonderSwan Color (in progress, working for most games), ZX Spectrum (GP2Xpectrum for Dingux, Unreal Speccy Portable for native OS).
Arcade games: Centipede and Millipede, CPS-1, CPS-2, FinalBurn Alpha (Dingux only), MAME, Mikie (Konami arcade game), Pac-Man and Ms. Pac-Man.
Video player
Video containers: RMVB, RM, AVI, WMV, FLV, MPEG, MP4, ASF, MOV
Video codecs: WMV1, WMV3, WMV7, WMV8.1, WMV9, MP42, mp4v, DIV3, DiVX5, XViD, MJPG, MPEG1, MPEG2
LCD resolution: 320×240
Audio player
Audio formats: MP3, WMA, APE, FLAC, WAV, AC3, MOD, S3M, XM
Channels: stereo, with EQ function
Photo viewer
Supports JPG, BMP, GIF, PNG file formats.
Text reader
Supports TXT file formats (English and Chinese) and English text-to-speech. Further functions include bookmarking, auto browsing and font sizing, and it can be opened while music is playing.
Radio receiver
FM radio with a wide channel range from 76.0 to 108.0 MHz; supports manual/auto channel scanning and FM recording, and can keep playing while other applications are in use. Users can save up to 40 channels.
Audio recording
Voice and radio recording; voice recording supports MP3/WAV formats.
Other
Supports the SWF file format (only Flash 6).
U-disk virus protection: built-in anti-virus software protection, to keep the system at its best performance.
USB 2.0 transmission interface: Windows 2000/XP/Vista and Mac OS X support.
File browser: allows you to find files on your Dingoo (games, music, videos, photos, voice recordings).
Firmware
Official firmware: Firmware V1.01, Firmware V1.02, Firmware V1.03, Firmware V1.10 (added multi-language support), Firmware V1.11 (added Korean language support), Firmware V1.20 (Y & B button bug fix and more), Firmware V1.22.
Unofficial firmware: Team Dingoo released the first unofficial firmware with user-customizable theming possibilities. The system files were moved from hidden memory to an accessible memory location, allowing users to change the graphical settings. This firmware is updated regularly. Releases: a320-1.03TD-3, a320-1.03TD-2, a320-1.03TD-1.
µC/OS-II
The native operating system of the Dingoo A320 is a variant of µC/OS-II, a low-cost priority-based pre-emptive real-time multitasking operating system kernel for microprocessors, written mainly in the C programming language. It is mainly intended for use in embedded systems. All official software for the Dingoo A-320 (including its emulators) runs on µC/OS-II.
Linux
A Linux kernel was publicly released by Booboo on Google Code on May 18, 2009. A dual-boot installer called "Dingux" was released June 24. This allows for dual booting the original firmware or Linux without the need for a connection to a PC. Enthusiasts have successfully run Linux versions of many games, including the Prboom engine (Doom, Hexen, Heretic), the Build engine (Duke Nukem 3D, Shadow Warrior), Quake, Dodgin' Diamonds 1 & 2, Biniax 2, GNU Robbo, Super Transball 2, Defendguin, Waternet, Sdlroids, Spout, Tyrian, Rise of the Triad, Open Liero, REminiscence, Blockrage, and the OpenBOR game engine. The Dingoo can run emulators: ScummVM, SMS Plus, Gmuplayer, FinalBurn Alpha, Gnuboy, GpSP, MAME, PSX4ALL, Snes9x, PicoDrive, openMSX, GP2Xpectrum, FCEUX and VICE.
See also
Comparison of handheld game consoles
Similar portable Linux kernel-based gaming devices: GP32, GP2X, GP2X Wiz, GP2X Caanoo, Pandora (console), DragonBox Pyra, GCW Zero, Mi2 console
List of Linux-based, handheld gaming devices
Linux for gaming
Reviews
Dingoo A320 review by Tech Radar
External links
Dingoo Official Website (English)
References
Seventh-generation video game consoles Handheld game consoles
9876168
https://en.wikipedia.org/wiki/Bauhaus%20Project%20%28computing%29
Bauhaus Project (computing)
The Bauhaus project is a software research collaboration among the University of Stuttgart, the University of Bremen, and a commercial spin-off company Axivion, formerly called Bauhaus Software Technologies. The Bauhaus project serves the fields of software maintenance and software reengineering. Created in response to the problem of software rot, the project aims to analyze and recover the means and methods developed for legacy software by understanding the software's architecture. As part of its research, the project develops software tools (such as the Bauhaus Toolkit) for software architecture, software maintenance and reengineering, and program understanding. The project derives its name from the former Bauhaus art school. History The Bauhaus project was initiated by Erhard Ploedereder, Ph.D. and Rainer Koschke, Ph.D. at the University of Stuttgart in 1996. It was originally a collaboration between the Institute for Computer Science (ICS) of the University of Stuttgart and the Fraunhofer-Institut für Experimentelles Software Engineering (IESE), which is no longer involved. Early versions of Bauhaus integrated and used Rigi for visualization. The commercial spin-off Axivion was started in 2005. Research was then conducted at Axivion, at the Institute of Software Technology (Department of Programming Languages) of the University of Stuttgart, and at the Software Engineering Group of Faculty 03 at the University of Bremen. Today, the academic version of the Bauhaus project and the commercially sold Axivion Suite are different products, as development at Axivion since 2010 has been based on a new infrastructure which allowed Axivion to add new applications such as MISRA checking. Bauhaus Toolkit The Bauhaus Toolkit (or simply the "Bauhaus tool") includes a static code analysis tool for C, C++, C#, Java and Ada code. It comprises various analyses such as architecture checking, interface analysis, and clone detection. Bauhaus was originally derived from the older Rigi reverse engineering environment, which Bauhaus expanded upon due to Rigi's limitations. It is among the most notable visualization tools in the field. The Bauhaus tool suite aids the analysis of source code by creating abstractions (representations) of the code in an intermediate language as well as through a resource flow graph (RFG). The RFG is a hierarchical graph with typed nodes and edges, which are structured in various views. The toolkit is licensed at no charge for academic use (but this is a different product from the Axivion Suite). Axivion and the Axivion Suite For commercial use, the project has created a spin-off company, Axivion. Axivion is headquartered in Stuttgart, Germany and provides licensing and support for the Axivion Suite. While the Axivion Suite has its origins in the Bauhaus project, it is today a different product with a much broader range of static code analyses, such as MISRA checking, architecture verification, include analysis, defect detection, and clone management. It also provides IDE integrations for Eclipse and Microsoft Visual Studio not found in the academic project. Project funding The Bauhaus project was funded by the state of Baden-Württemberg, the Deutsche Forschungsgemeinschaft, the Bundesministerium für Bildung und Forschung, T-Nova Deutsche Telekom Innovationsgesellschaft Ltd., and Xerox Research. Reception The Bauhaus tool suite has been used successfully in research and commercial projects.
It has been noted that Bauhaus is "perhaps [the] most extensive" customization of the well-known Rigi environment. The members of the project repeatedly received Best Paper Awards and were invited several times to submit journal papers. In 2003, the Bauhaus project received the do it software award from MFG Stiftung Baden-Württemberg. Footnotes Regarding the project's founding, the years 1996 and 1997 seem to appear equally often among the various sources. References External links The Bauhaus Project University of Stuttgart, Institute of Software Technology, Department of Programming Languages University of Bremen, Software Engineering Group, Faculty 03 Axivion company homepage (commercial licensing and support for the Axivion Suite) Software metrics Static program analysis tools
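As an illustration of the resource flow graph (RFG) described above - a hierarchical graph with typed nodes and typed edges, organized into views - here is a minimal data-structure sketch. The node, edge and view types are invented for the example and do not reflect Bauhaus's actual schema:

```cpp
#include <string>
#include <vector>

struct Node { int id; std::string type, name; };  // e.g. type = "Routine"
struct Edge { int from, to; std::string type; };  // e.g. type = "Calls"

// A view selects a subset of the edges over the shared node set,
// e.g. a call graph or an include hierarchy.
struct View {
    std::string name;
    std::vector<Edge> edges;
};

struct RFG {
    std::vector<Node> nodes;  // one node set ...
    std::vector<View> views;  // ... seen through several views
};
```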
12955279
https://en.wikipedia.org/wiki/Microsoft%20Software%20Licensing%20and%20Protection%20Services
Microsoft Software Licensing and Protection Services
Microsoft Software Licensing and Protection Services, also known as Microsoft SLPS, is a software licensing suite that provides developers with the ability to license software, create license versions, and track performance of products and profitability. SLPS is intended to help developers and independent software vendors (ISVs) streamline operations with .NET protection technology and a licensing server. History Microsoft SLPS, formerly called SecureLM, was acquired by Microsoft in January 2007 from the company Secured Dimensions. Secured Dimensions was founded in 2005 by Avi Shillo in Israel. It was acquired by Microsoft soon after the CEO of Microsoft Israel, Arie Scope, joined its board of directors. On June 9, 2009, Microsoft announced that a Dublin-based company called InishTech had acquired the product and would service existing contracts and accept new SLPS customers. Themes Security SLPS performs "private permutation" for each company by transforming managed (.NET) code into a secure virtual machine (SVM) language that is attached to the application, protecting it from manipulation by end users. This process is done using a toolkit called Code Protector. Application features can later be marked as licensable or modifiable entities. SLPS is .NET certified and works with the Visual Basic .NET platforms, C#, Java and web applications. Licensing In the Code Protector toolkit, developers can mark pieces of code as 'licensable'; these can later be activated as bundles, SKUs, or packages in which features can be turned on and off. Possible license models are time-based licenses, trial versions, user-based licenses, feature-based licenses and others, depending on the business type. Distribution SLPS lets developers create and activate new digital licenses without having to ship a new product. This removes the hassle of recompiling code or ordering a new product or SKU: the developer simply creates a new digital license, which SLPS unlocks and activates. Management Developers and managers have access to real-time information about new licenses generated, license expiry times, and the most popular packages. SLPS lets developers monitor billing, license usage, and software feature usage. It can also be tied into a back-end billing system or customer relationship manager (CRM) to allow business partners to perform similar SLPS operations. Product models InishTech SLPS is available in three different editions: Standard, Professional and Enterprise. Code Protector Software Development Kit A toolkit that allows software developers to protect their software from reverse engineering, a common form of piracy. SLP Server A server that manages licensing and product keys for software vendors. SLP Online Service An InishTech-hosted solution for license management. See also License manager Product activation Floating licensing References Microsoft Announces SLP Services Press Release SLPS White Paper Red Herring Article on Secured Dimensions Acquisition ZDnet Interview with SLPS Manager techcrunch.com New Release: Microsoft Quietly Closes Software Licensing and Protection Service External links Microsoft SLPS - Not for 64 bit Software Licensing and Protection Services
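The feature- and time-based license models described above boil down to an entitlement check at each licensable feature point. The following is a generic illustration of that idea only - it is not the SLPS API, and all names are invented:

```cpp
#include <ctime>
#include <set>
#include <string>

// A license carries a set of feature entitlements (the purchased SKU or
// bundle) and an optional expiry for time-limited or trial licenses.
struct License {
    std::set<std::string> features;
    std::time_t expiry = 0;               // 0 = perpetual

    bool allows(const std::string& feature) const {
        if (expiry != 0 && std::time(nullptr) > expiry)
            return false;                 // time-based license has lapsed
        return features.count(feature) > 0;
    }
};
```

Issuing a new "digital license" in this model is just shipping a new (signed) feature set - no recompilation of the protected application is needed, which is the point made in the Distribution section above.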
6060603
https://en.wikipedia.org/wiki/1143%20Odysseus
1143 Odysseus
1143 Odysseus is a large Jupiter trojan located in the Greek camp of Jupiter's orbit. It was discovered on 28 January 1930, by German astronomer Karl Reinmuth at the Heidelberg Observatory in southwest Germany, and later named after Odysseus, the legendary hero from Greek mythology. The dark D-type asteroid has a rotation period of 10.1 hours. With a diameter of approximately 125 kilometers, it is among the 10 largest Jovian trojans. Orbit and classification Odysseus is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's Lagrangian point, 60° ahead of the gas giant in its orbit, in a 1:1 resonance (see Trojans in astronomy). It is a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.8–5.7 AU once every 12 years (4,393 days; semi-major axis of 5.25 AU). Its orbit has an eccentricity of 0.09 and an inclination of 3° with respect to the ecliptic. As a Jupiter trojan it is in a very stable orbit; its closest approach to any major planet, an approach to Mars, will occur on 5 May 2083. The body's observation arc begins at Heidelberg in February 1930, three weeks after its official discovery observation. Naming This minor planet was named after the ancient Greek hero Odysseus (Odysseus Laertiades) in Homer's epic poem the Odyssey. The official naming citation was mentioned in The Names of the Minor Planets by Paul Herget in 1955. Another Jupiter trojan, 5254 Ulysses, is named after the Latin variant of Odysseus. Physical characteristics Odysseus is a dark D-type asteroid in both the Tholen and Bus–DeMeo classifications. Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer (WISE), Odysseus measures between 114.62 and 130.81 kilometers in diameter and its surface has an albedo between 0.050 and 0.0753. The Collaborative Asteroid Lightcurve Link adopts the results obtained by IRAS, that is, an albedo of 0.0753 and a diameter of 125.64 kilometers based on an absolute magnitude of 7.93. In May 2005, an asteroid occultation gave a best-fit ellipse for the body's projected outline. An estimated mean diameter of 130, 125 or 114 kilometers, as measured by Akari, IRAS and WISE, makes Odysseus the 7th, 8th or 10th largest Jupiter trojan, respectively. Rotation period A large number of rotational lightcurves of Odysseus have been obtained since its first photometric observation by Richard Binzel in January 1988. In June 1994, the first accurate measurement of the asteroid's rotation period was made by Stefano Mottola using the former Bochum 0.61-metre Telescope at ESO's La Silla Observatory in northern Chile. As of 2018, analysis of the best-rated lightcurve, from observations by the Kepler space observatory during its K2 mission Campaign 6 in September 2015, gave a well-defined period of 10.114 hours with a brightness amplitude of 0.20 magnitude. Notes References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 001143 001143 Discoveries by Karl Wilhelm Reinmuth Minor planets named from Greek mythology Named minor planets 001143 19300128
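As a consistency check on the orbital elements quoted above (an editorial addition, not from the source), Kepler's third law relates the orbital period in years to the semi-major axis in astronomical units:

$$ T\,[\mathrm{yr}] = a\,[\mathrm{AU}]^{3/2} = 5.25^{3/2} \approx 12.0\ \mathrm{yr} \approx 4390\ \mathrm{days}, $$

in good agreement with the stated period of 12 years (4,393 days).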
28667119
https://en.wikipedia.org/wiki/Gay%20Nigger%20Association%20of%20America
Gay Nigger Association of America
The Gay Nigger Association of America (GNAA) was an Internet trolling group. They targeted several prominent websites and Internet personalities including Slashdot, Wikipedia, CNN, Barack Obama, Alex Jones, and prominent members of the blogosphere. They also released software products, and leaked screenshots and information about upcoming operating systems. In addition, they maintained a software repository and a wiki-based site dedicated to Internet commentary. They were called a "cyberterrorist organization" by the Terrorism Research & Analysis Consortium. Members of the GNAA also founded Goatse Security, a grey hat information security group. In June 2010, members of Goatse Security released information about email addresses exposed on AT&T's website belonging to people who had subscribed to the iPad's mobile data service. After the vulnerability was disclosed, the then-president of the GNAA, weev, and a GNAA member, "JacksonBrown", were arrested. Origins, known members and name The group was run by a president. New media researcher Andrew Lih stated that it was unclear whether there was initially a clearly defined group of GNAA members, or if founding and early members of the GNAA were online troublemakers united under the name in order to disrupt websites. However, professor Jodi Dean and Ross Cisneros claimed that they were an organized group of anti-blogging trolls. Reporters also referred to the GNAA as a group. In her 2019 book Troll Hunting, Australian journalist Ginger Gorman identified the president of the GNAA as an individual from Colorado known as "Meepsheep". Known former presidents of the GNAA were security researcher Jaime "asshurtmacfags" Cochran, who also co-founded the hacking group "Rustle League", and "timecop", founder of the anime fansub group "Dattebayo". Other members included former president Andrew "weev" Auernheimer, Daniel "JacksonBrown" Spitler, former Debian Project Leader Sam Hocevar, and former spokesman Leon Kaiser. The GNAA has also been documented as loosely affiliated with the satirical wiki Encyclopedia Dramatica. The group's name incited controversy and was described as "causing immediate alarm in anyone with a semblance of good taste", "intentionally offensive", and "spectacularly offensive". The group denied allegations of racism and homophobia, explaining that the name was intended to sow disruption on the Internet and challenge social norms (claiming it was derived from the 1992 Danish satirical blaxploitation film Gayniggers from Outer Space). There was at least one known female GNAA member (Jaime "asshurtmacfags" Cochran). Trolling The GNAA used many different methods of trolling. One method involved flooding a weblog's comment form with text consisting of repeated words and phrases, referred to as "crapflooding". On Wikipedia, members of the group created an article about the group, while still adhering to Wikipedia's rules and policies; a process Andrew Lih says "essentially [used] the system against itself." Another method included attacking many Internet Relay Chat channels and networks using different IRC flooding techniques. The GNAA also produced shock sites containing malware. One such site, "Last Measure", contained embedded malware that opened up "an endless cascade of pop-up windows displaying pornography or horrific medical pictures." They also performed proof-of-concept demonstrations. These actions occasionally interrupted the normal operation of popular websites.
2000s In July 2004, two GNAA members submitted leaked screenshots of the upcoming operating system Mac OS X v10.4 to the popular Macintosh news website MacRumors, resulting in a post which read "With WWDC just days away, the first Tiger information and screenshots appears to have been leaked. According to sources, Apple will reportedly provide developers with a Mac OS X 10.4 preview copy at WWDC on Monday. The screenshots provided reportedly come from this upcoming developer preview." In June 2005, the GNAA announced that it had created a Mac OS X Tiger release for Intel x86 processors, which caught media attention from various sources. The next day, the supposed leak was mentioned on the G4 television show Attack of the Show. The ISO image released via BitTorrent merely booted a shock image instead of the leaked operating system. On February 3, 2007, the GNAA convinced CNN reporter Paula Zahn that "one in three Americans" believe that the September 11, 2001 terror attacks were carried out by Israeli agents. CNN subsequently ran a story erroneously reporting this, involving a round-table discussion regarding antisemitism and an interview with the father of a Jewish 9/11 victim. The GNAA-owned website said that "over 4,000" Jews were absent from work at the World Trade Center on 9/11. On February 11, 2007, the group attacked the website of US presidential candidate (and future US president) Barack Obama, causing the group's name to appear on the site's front page. 2010s In late January 2010, the GNAA used an obscure method known as cross-protocol scripting (a combination of cross-site scripting and inter-protocol exploitation) to cause users of the Freenode IRC network to unknowingly flood IRC channels after visiting websites containing inter-protocol exploits. They also used a combination of inter-protocol, cross-site, and integer overflow bugs in both the Firefox and Safari web browsers to flood IRC channels. On October 30, 2012, the GNAA began a trolling campaign in the aftermath of Hurricane Sandy on the US East Coast, spreading fake photographs and tweets of alleged looters in action. After the GNAA published a press release detailing the incident, mainstream media outlets began detailing how the prank was carried out. On December 3, 2012, the GNAA was identified as being responsible for a cross-site scripting attack on Tumblr that resulted in thousands of Tumblr blogs being defaced with a pro-GNAA message. In January 2013, the GNAA collaborated with users on the imageboard 4chan to start a "#cut4bieber" trend on Twitter, encouraging fans of Canadian pop singer Justin Bieber to practice self-harm. From 2014 into 2015, GNAA members began playing an active role in the Gamergate controversy, sabotaging efforts made by pro-Gamergate parties. Several GNAA members were able to gain administrative access to the primary Gamergate board on 8chan (an imageboard associated with Gamergate), which they disrupted and ultimately closed. The GNAA also claimed responsibility for releasing private information related to many pro-Gamergate activists. On October 13, 2016, GNAA member Meepsheep vandalized Wikipedia to cause the entries for Bill and Hillary Clinton to be overlapped with pornographic images and a message endorsing Republican presidential candidate Donald Trump.
In August 2017, the GNAA was named as a participant in a feud between employees of the popular dating app Bumble and tenants of the apartment building in Austin, Texas where the company was, at the time, illegally headquartered. Joseph Bernstein of BuzzFeed News reported that one of the building's residents contacted the GNAA to "fight back" against Bumble after multiple complaints regarding the company's activities were ignored. The dispute resulted in Bumble choosing to relocate from the building, which the GNAA claimed credit for in a press release the group spammed across several major websites via clickjacking. Goatse Security Several members of the GNAA with expertise in grey hat computer security research began releasing information about several software vulnerabilities under the name "Goatse Security." The group chose to publish their work under a separate name because they thought the GNAA name would not be taken seriously. In June 2010, Goatse Security attracted mainstream media attention for their discovery of at least 114,000 unsecured email addresses registered to Apple iPad devices for early adopters of Apple's 3G iPad service. The data was aggregated from AT&T's own servers by feeding a publicly available script with HTTP requests containing randomly generated ICC-IDs, which would then return the associated email address. The FBI soon investigated the incident. This investigation led to the arrest of then-GNAA president Andrew "weev" Auernheimer on unrelated drug charges resulting from an FBI search of his home. In January 2011, the Department of Justice announced that Auernheimer would be charged with one count of conspiracy to access a computer without authorization and one count of fraud. A co-defendant, Daniel Spitler, was released on bail. In June 2011, Spitler pleaded guilty to both counts after reaching a plea agreement with US attorneys. On November 20, 2012, Auernheimer was found guilty of one count of identity fraud and one count of conspiracy to access a computer without authorization. These convictions were overturned on April 11, 2014, and Auernheimer was subsequently released from prison. In popular culture Music Childish Gambino's 2013 song, III. Life: The Biggest Troll [Andrew Auernheimer], is about GNAA member weev. YTCracker repeatedly references the GNAA and its members in his 2017 track, welcome to the get along gang. References 2002 establishments in the United States Cyberattack gangs Hacker groups Internet culture GNAA Online obscenity controversies Slashdot Underground computer groups
3225176
https://en.wikipedia.org/wiki/Cognitive%20ergonomics
Cognitive ergonomics
Cognitive ergonomics is a scientific discipline that studies, evaluates, and designs tasks, jobs, products, environments and systems and how they interact with humans and their cognitive abilities. It is defined by the International Ergonomics Association as "concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. The relevant topics include mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training as these may relate to human-system design." Cognitive ergonomics is concerned with how work is done in the mind: the quality of work depends on the person's understanding of a situation, which may include the goals, means, and constraints of the work. Cognitive ergonomics studies cognition in work and operational settings, in order to optimize human well-being and system performance. It is a subset of the larger field of human factors and ergonomics. Goals Cognitive ergonomics (sometimes known as cognitive engineering, though this was an earlier field) is an emerging branch of ergonomics. It places particular emphasis on the analysis of cognitive processes required of operators in modern industries and similar milieus, which can be studied in work and operational settings. It aims to ensure an appropriate interaction between humans and the processes they carry out in everyday life, including work tasks. Some aims of cognitive ergonomics are diagnosis, workload management, situation awareness, decision making, and planning. CE describes how work affects the mind and how the mind affects work. Its aim is to apply general principles and good practices of cognitive ergonomics that help avoid unnecessary cognitive load at work and improve human performance. In practice, it supports human information processing by taking account of human capabilities and limitations. Another goal related to the study of cognitive ergonomics is correct diagnosis: because cognitive ergonomics is a low priority for many organizations, it is especially important to identify what actually needs to be changed, rather than fixing what is not broken while real problems go unaddressed. Cognitive ergonomics aims at enhancing performance of cognitive tasks by means of several interventions, including these: user-centered design of human-machine interaction and human-computer interaction (HCI); design of information technology systems that support cognitive tasks (e.g., cognitive artifacts); development of training programs; work redesign to manage cognitive workload and increase human reliability; and design that is easy to use and accessible to everyone. History The field of cognitive ergonomics emerged predominantly in the 1970s with the advent of the personal computer and new developments in the fields of cognitive psychology and artificial intelligence. Researchers studied how human cognitive psychology works hand-in-hand with specific cognitive limitations, knowledge built up gradually through trial and error. CE contrasts with the tradition of physical ergonomics because "cognitive ergonomics is...the application of psychology to work...to achieve the optimization between people and their work." Viewed as an applied science, the methods involved in creating cognitive ergonomic design have changed with rapid technological advances in recent decades.
In the 1980s, there was a worldwide transition in the methodological approach to design. According to van der Veer, Enid Mumford was one of the pioneers of interactive systems engineering, and advocated the notion of user-centered design, wherein the user is considered and "included in all phases of the design". There are several different models which describe the criteria for designing user-friendly technology. A number of models focus on a systematic process for design, using task analysis to evaluate the cognitive processes involved with a given task and to develop adequate interface capabilities. Task analysis in past research has focused on the evaluation of cognitive task demands concerning motor control and cognition during visual tasks, such as operating machinery, or the evaluation of attention and focus via the analysis of the eye saccades of pilots when flying. Neuroergonomics, a subfield of cognitive ergonomics, aims to enhance human-computer interaction by using neural correlates to better understand situational task demands. Neuroergonomic research at the University of Iowa has been involved with assessing safe-driving protocols, enhancing elderly mobility, and analyzing the cognitive abilities involved in the navigation of abstract virtual environments. Cognitive ergonomics now continually adapts to technological advances, because as technology advances, new cognitive demands arise; these are called changes in socio-technical context. For example, when computers became popular in the 1980s, operating them imposed new cognitive demands: as new technology arises, humans must adapt to the change, which can leave deficiencies elsewhere. Human-computer interaction plays a large role in cognitive ergonomics because much of modern life is digitized, which has created both new problems and new solutions. Studies show that most of the problems that arise are due to the digitalization of dynamic systems, and this has driven a rise in the diversity of methods for processing many streams of information. Changes in socio-technical context add to the demands placed on methods of visualization and analysis, along with the perceptual and cognitive capabilities of the user. Methods Successful ergonomic intervention in the area of cognitive tasks requires a thorough understanding not only of the demands of the work situation, but also of user strategies in performing cognitive tasks and of limitations in human cognition. In some cases, the artifacts or tools used to carry out a task may impose their own constraints and limitations (e.g., navigating through a large number of GUI screens). Tools may also co-determine the very nature of the task. In this sense, the analysis of cognitive tasks should examine both the interaction of users with their work setting and the user interaction with artifacts or tools; the latter is very important as modern artifacts (e.g., control panels, software, expert systems) become increasingly sophisticated.
Emphasis lies on how to design human-machine interfaces and cognitive artifacts so that human performance is sustained in work environments where information may be unreliable, events may be difficult to predict, multiple simultaneous goals may be in conflict, and performance may be time-constrained. One proposed way of expanding a user's effectiveness is to broaden the interdisciplinary connections of cognitive ergonomics: transferring pre-existing knowledge of computer mechanics into structural patterns of cognitive space, and combining this with human-factors work on intelligent learning-support systems and interdisciplinary training methodologies, so that effective person-computer interaction strengthens critical thinking and intuition. Disability Accessibility is important in cognitive ergonomics because it is one pathway to build a better user experience. The term accessibility refers to how people with disabilities access or benefit from a site, system, or application. Section 508 is the founding principle for accessibility. In the U.S., Section 508 of the Rehabilitation Act is one of several disability laws; it requires federal agencies to develop, maintain, and use information and communications technology (ICT) that is accessible to people with disabilities, regardless of whether those people work for the federal government. Section 508 also implies that any person with a disability applying for a federal government job, or using a website to get general information about a program or to complete an online form, has access to the same information and resources that are available to anyone else. Accessibility can be implemented by making sites that present information through multiple sensory channels, with sound and sight. This strategic multi-sensory, multi-interactivity approach allows disabled users to access the same information as nondisabled users, providing additional means of site navigation and interactivity beyond the typical point-and-click interface: keyboard-based control and voice-based navigation. Accessibility is very valuable because it ensures that all potential users, including people with disabilities, have a good user experience and can easily access information. Overall, it improves usability for everyone who uses a site. Some of the best practices for accessible content include: Not relying on color as a navigational tool or as the only way to differentiate items Images should include "alt text" in the markup/code, and complex images should have more extensive descriptions near the image (a caption or a descriptive summary built into a neighboring paragraph) Functionality should be accessible through mouse and keyboard and be tagged to work with voice-control systems Transcripts should be provided for podcasts Videos on a site must provide visual access to the audio information through in-sync captioning Sites should have a skip-navigation feature Consider Section 508 testing to ensure a site is in compliance User Interface Modeling Cognitive task analysis Cognitive task analysis is a general term for the set of methods used to identify the mental demands and cognitive skills needed to complete a task. Frameworks like GOMS provide a formal set of methods for identifying the mental activities required by a task and an artifact, such as a desktop computer system.
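As an illustration of the kind of formal estimate such frameworks produce, the keystroke-level model (KLM), a simplified GOMS variant, predicts expert task time by summing standard operator durations. Below is a minimal Python sketch using the classic published operator estimates; the task encoding itself is hypothetical:

```python
# Minimal keystroke-level model (KLM) estimator. Operator durations are
# the classic Card, Moran & Newell estimates, in seconds.
OPERATORS = {
    "K": 0.20,  # keystroke (skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a task encoded as a string of operators."""
    return sum(OPERATORS[op] for op in sequence)

# Example: think, home to mouse, point at a field, home back, type 5 keys.
print(f"{klm_estimate('MHPH' + 'K' * 5):.2f} s")  # 4.25 s
```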
By identifying the sequence of mental activities of a user engaged in a task, cognitive ergonomics engineers can identify bottlenecks and critical paths that may present opportunities for improvement, or risks (such as human error) that merit changes in training or system behavior. Broadly, it is the study of what we know, how we think, and how we organize new information. Applications As a design philosophy, cognitive ergonomics can be applied to any area where humans interact with technology. Applications include aviation (e.g., cockpit layouts), transportation (e.g., collision avoidance), the health care system (e.g., drug bottle labelling), mobile devices, appliance interface design, product design, and nuclear power plants. The focus of cognitive ergonomics is on designs that are simple, clear, easy to use, and accessible to everyone; software is designed accordingly, with icons and visual cues that all users can understand and operate. See also Activity theory Cognitive psychology and cognitive science Cognitive work analysis Ecological interface design Engineering psychology Ethnography and cultural anthropology (distributed cognition) Ethnomethodology Human-computer interaction Mental space Neuroergonomics Supervisory control Systems engineering References External links Organizations EACE - European Association of Cognitive Ergonomics Cognitive Engineering and Decision Making Technical Group (CEDM-TG) Publications Cognition, Technology & Work Theoretical Issues in Ergonomics Science Activities (French) Journal of Cognitive Engineering and Decision Making Ergonomics PsychNology Journal Human–computer interaction Cognition
2493183
https://en.wikipedia.org/wiki/Chris%20Brown%20%28composer%29
Chris Brown (composer)
Chris Brown (born 1953) is an American composer, pianist and electronic musician, who creates music for acoustic instruments with interactive electronics, for computer networks, and for improvising ensembles. He was active early in his career as an inventor and builder of electroacoustic instruments; he has also performed widely as an improviser and pianist with groups such as "Room" and the "Glenn Spearman Double Trio." In 1986 he co-founded the pioneering computer network music ensemble "The Hub". He is also known for his recorded performances of music by Henry Cowell, Luc Ferrari, and John Zorn. He has received commissions from the Berkeley Symphony, the Rova Saxophone Quartet, the Abel-Steinberg-Winant Trio, the Gerbode Foundation, the Phonos Foundation and the Creative Work Fund. His recent music includes the poly-rhythm installation "Talking Drum", the "Inventions" series for computers and interactive performers, and the radio performance "Transmissions" series, with composer Guillermo Galindo. His 1992 electroacoustic work "Lava", for brass, percussion, and electronics, was released by Tzadik Records. He teaches Composition and Electronic Music at Mills College in Oakland, where he is co-director of the Center for Contemporary Music (CCM). Discography 1980 "Earwig" with instrument builder Tom Nunn, cassette released by Essential Recordings, 16mm film by Eric Marin. 1985 Wayne Horvitz: Dinner at Eight (Dossier) 1989 "Snakecharmer" Live Electroacoustic Music by Chris Brown, Artifact Recordings, CD. 1989 "Room", Sound Aspects, CD. 1989 "The Hub: Computer Network Music" Artifact Recordings, CD. 1991 "The Virtuoso in the Computer Age -- I: CDCM Computer Music Series, vol. 10", piano performance, Centaur Records, CD 1992 Room: "Hall of Mirrors", Music and Arts. CD. 1993 Glenn Spearman Double Trio: "Mystery Project", piano and electronics performance, Black Saint, CD. 1994 "Music from the CCM at Mills College: CDCM Computer Music Series, vol. 17", Centaur Records, CD. 1994 Glenn Spearman Double Trio: "Smokehouse", piano performance, Black Saint, CD. 1994 The Hub: "Wreckin' Ball", Computer Network Music, Artifact Recordings, CD. 1995 "Conductions #11" by Butch Morris, original instruments performance, New World, CD. 1995 "In C" by Terry Riley The 25th Anniversary Performance, keyboard performance, New Albion Records, CD. 1995 "Lava" by Chris Brown, for brass, percussion and live electronics, Tzadik, CD. 1996 "Duets", by Chris Brown, with Tom Nunn, William Winant, Ikue Mori, and Tom Djll, Artifact Recordings, CD. 1996 Larry Ochs "The Secret Magritte", piano performance in ensemble including the Rova Saxophone Quartet, Marilyn Crispell, Barry Guy, Lisle Ellis, and William Winant, Black Saint, CD. 1996 Glenn Spearman The Fields, Black Saint 1997 Rova's 1995 Live Recording of John Coltrane's "Ascension", piano performance in large ensemble including the Rova Saxophone Quartet, Black Saint, CD. 1998 "Cellule 75", piano performance with William Winant, percussion, of Luc Ferrari's composition, Tzadik CD. 1998 "Non Stop Flight", electronic performance with The Hub on this live recording by the Deep Listening Band, Music & Arts, CD. 1999 "New Music: Piano Compositions by Henry Cowell", piano performances by Chris Brown, New Albion Records, CD. 1999 "Waves", composition and performance with Philip Gelb, shakuhachi, on "between/waves", Sparkling Beatnik, CD. 1999 Glenn Spearman's "Blues for Falasha", piano performance with the Glenn Spearman Double Trio, Tzadik, CD.
2000 Xu Feng, electronics performance in a sextet playing John Zorn's game piece, Tzadik, CD. 2001 "fuzzybunny", live electronic improvisations with the trio of the same name, which also includes Tim Perkis and Scot Gresham-Lancaster, Sonore, CD. 2001 "Oasis", opening track of a live computer music performance titled "knottyspine", on a compilation of music by composers from Mills College, including Fred Frith, Pauline Oliveros, Maggi Payne, John Bischoff, and Alvin Curran, CD. 2001 "Talking Drum", binaural recordings of live electronic installations, and location recordings of traditional music and environmental soundscapes, Sonore, CD. 2002 "Branches", recordings of "Invention#7" and "Alternating Currents", on Ecstatic Peace, LP. 2002 "Transmission Temescal", binaural recording of an installation of 20 boomboxes and clock radios on the decks of the Artship, the Artship Recordings, disc 47. 2002 "Water", live electronics with Philip Gelb, shakuhachi, on the "Visions: Performances from the EMIT series" compilation CD. 2003 "Headlands - Natto Quartet", extended piano improvisations with Philip Gelb, shakuhachi; Shoko Hikage, koto; and Tim Perkis, electronics, on 482 Music, CD. Electric Ascension by Rova::Orchestrova, the Rova Saxophone Quartet augmented by a group of electronic musicians (Atavistic, 2005) 2005 "Rogue Wave", electronic and acoustic compositions by Chris Brown: "Rogue Wave", "Transmission Tenderloin", "Retroscan", "Flies", "Cloudsteams/Bellwethers" and "Alternating Currents". With Eddie Def, William Winant, Julie Steinberg et al. 2007 "Cutter Heads", piano and electronics; with Fred Frith, acoustic and electric guitar, Intakt Records, CD. 2016 "Six Primes", piano, New World Records, CD. References Chris Brown YBCA profile Chris Brown Intakt Records short biography External links Official Website Golden, Barbara. "Conversation with Chris Brown." eContact! 12.2 — Interviews (2) (April 2010). Montréal: CEC. Once Upon a Time in CA. A podcast curated by Chris Brown for Ràdio Web MACBA, documenting the experimental music of the West Coast in the 1980s. 1953 births 20th-century American composers 20th-century American male musicians 20th-century American pianists 20th-century classical composers 21st-century American composers 21st-century American pianists 21st-century American male musicians 21st-century classical composers American classical composers American classical pianists American contemporary classical composers American electronic musicians American male classical composers American male classical pianists Contemporary classical music performers Intakt Records artists Living people Mills College faculty
40792578
https://en.wikipedia.org/wiki/Hyper%20Light%20Drifter
Hyper Light Drifter
Hyper Light Drifter is a 2D action role-playing game developed by Heart Machine. The game pays homage to 8-bit and 16-bit games, and is described by its lead developer Alx Preston as a combination of The Legend of Zelda: A Link to the Past and Diablo. Preston originally sought a modest amount of Kickstarter funding to develop the title for Microsoft Windows, OS X, and Linux computers, but ended up raising far more than his goal, allowing him to hire more programmers and artists and to expand the title to console and portable platforms through stretch goals. Though originally scoped for release in 2014, various improvements to the game and issues with Preston's health set the release back. The Microsoft Windows, Linux and OS X versions were released in March 2016, and the PlayStation 4 and Xbox One versions in July 2016. A Special Edition port of the game, featuring additional content, was released by Abylight Studios for the Nintendo Switch in September 2018 and for iOS devices in July 2019. Gameplay and story Hyper Light Drifter is a 2D action role-playing game fashioned after The Legend of Zelda: A Link to the Past, rendered in a pixel art style comparable to Superbrothers: Sword & Sworcery EP. The player controls the Drifter, a character who has access to technology long forgotten by the inhabitants of the game's world and who suffers from an unspecified illness. The story concept was inspired by lead developer Alx Preston's heart disease, and has been likened by others to Studio Ghibli's Castle in the Sky, while Preston cites the studio's Nausicaä of the Valley of the Wind as inspiration for the game's world. The Drifter is equipped with an energy sword but can gain access to other modules that expand their weapon and ability arsenal. These often require power from rare batteries scattered around the world. Weaponry includes traditional console role-playing game archetypes, including long-range guns and area attacks. Rather than scavenging ammunition from the game world to load the player's guns, the player's ammunition instead charges when hitting enemies and objects with the energy sword. The player faces increasingly difficult monsters, both in number and ability, requiring the player to hone their tactics to succeed in the game. Preston's goal was to replicate the experience of playing on the SNES, noting that the unit had "amazing, almost perfect games designed for limited environments" which he challenged himself to simulate in Hyper Light Drifter. One feature of SNES games that Preston captured is that there is no spoken dialogue, placing more emphasis on the game's music and visuals to tell a story. Development Hyper Light Drifter is primarily based on the vision of its key developer, Alx Preston. Preston was born with congenital heart disease, and throughout his life has been hospitalized with digestive and immune-system issues relating to this condition. While in college, Preston used the mediums of painting and film to illustrate his experiences with frail health and near-death conditions. Preston envisioned Hyper Light Drifter as a means "to tell a story [he] can identify with, expressing something personal to a larger audience, so [he feels] more connected and have an outlet for the many emotions that crop up around life-altering issues".
Further, he had yearned for many years to develop a game combining the best elements of The Legend of Zelda: A Link to the Past and Diablo, one that would feature world exploration and combat requiring some strategy from the player depending on the foes they faced. After several years as an animator, he felt able to do so in 2013. The theme and story of the game, featuring a protagonist suffering from a terminal disease, is meant as a simile for his own health. Preston originally set out to make the game for Windows, OS X, and Linux computers and started a Kickstarter campaign in September 2013 to secure the funding needed to complete the title. Prior to starting the campaign, Preston had secured the help of programmer Beau Blyth, who created titles like Samurai Gunn, and musician Disasterpeace, who worked on the music for Fez. He opted to develop the game under the studio name Heart Machine as an allegory for the various medical devices he often needs to track his own health, and to use for future projects following Hyper Light Drifter. The project's funding goal was exceeded in a day, and the total quickly grew in the days after launch. To encourage additional funding, Preston created new stretch goals, including additional gameplay modes, more bosses and characters, and expanding the release to include the PlayStation 4 and Vita, the Ouya, and the Wii U consoles. These goals were all met by the completion of the campaign, which raised far more than the original goal. Preston stated that he had had these additional platforms in mind when first launching the Kickstarter, but did not want to over-promise what he felt he could deliver. The additional funds helped Preston hire additional developers to aid in porting the game to these additional consoles. The game was originally set for release in mid-2014 but was delayed until the second quarter of 2016, due to the expanded scope of the game, the need to perfect the game before its first release, and the lead developer's health issues. Preston found help from several developer friends around the Los Angeles area. He and a number of them worked together to build out Glitch Space, a small open office space for small developers to work from and share ideas with others. Besides his own team, Preston got frequent help from developers Ben Esposito (Donut County), Brandon Chung (Blendo Games), and Ben Vance. Preston was also encouraged by letters of support he received from people across the globe after he reported on some of his health conditions. The letters influenced Preston to alter the story of Hyper Light Drifter so as not to make it about a problem facing a single character, but about something shared by many. With the most recent delay announced in August 2015, Heart Machine said that it planned to release the Windows and OS X versions first, with the console versions following shortly thereafter once they cleared the console certification processes. The Windows and OS X versions were released on March 31, 2016. The PlayStation 4 and Xbox One versions were released on July 26, 2016. In February 2016, Heart Machine revealed that there were contractual issues at the time between Nintendo and YoYo Games, the developer of the GameMaker: Studio engine, beyond Heart Machine's control that might prevent the game from being ported to the Wii U; while the studio hoped to offer the game on this platform eventually, it considered the Wii U version "in limbo." Several patches have been applied to the game since its initial release.
One of these patches made the game slightly easier, in response to feedback about the game's difficulty. This patch made a number of minor changes to the game, most notable of which was the addition of a brief period of invincibility when the player uses the Dash mechanic. The reduction in difficulty led to debate within the game's fan community, split between those who liked the new patch and those who preferred the old, more challenging version. Three days after this patch, the developers re-balanced the game to add back some of the difficulty. A mode featuring two-player co-op gameplay was planned during the initial development of the game and was intended to be a feature at launch, but it was cut due to time constraints. On April 27, 2016, a beta version of the co-op mode was released. An update that went live on May 5 fully implemented the co-op multiplayer feature in the game. In September 2016, Preston announced that they had to cancel the planned Wii U and PlayStation Vita versions, offering backers on those platforms the ability to redeem the game on another system or be refunded if desired. Preston cited the difficulty of rebuilding the game from the ground up on these systems, owing to problems with GameMaker Studio on those platforms, and noted that it had taken six months to port the game to PlayStation and Xbox. Preston also had ongoing concerns about his own health, and made his well-being a priority. After Preston's announcement, Abylight Studios got in contact with him to lend a hand, which resulted in a collaboration between Heart Machine and Abylight Studios on an adaptation and publication of Hyper Light Drifter for the Nintendo Switch. Abylight Studios worked closely with YoYo Games, the developer of the game development software GameMaker Studio, which was used to create Hyper Light Drifter. On September 6, 2018, Hyper Light Drifter: Special Edition was launched for the Nintendo Switch featuring exclusive content, such as the Tower Climb challenge. The Special Edition was also ported to iOS devices, with 120 fps gameplay on iPad Pro and 60 fps on both iPhone and iPad, and released on July 25, 2019. The Hyper Light Drifter - Special Edition Collector's Set for Nintendo Switch, which includes a physical copy of the game and other collectible items, was announced for pre-order by Abylight Studios in December 2020 and started shipping in January 2021. Reception Hyper Light Drifter received critical acclaim, holding a score of 84/100 at the review aggregator Metacritic. Common praise has been given to the game's visuals, sound design and combat mechanics. Kyle Hilliard of Game Informer awarded the game a 9.5/10, claiming that the game "has already positioned itself as one of the best experiences of the year." Brandin Tyrrel of IGN called the game a "gorgeous, trendy hunk of stylish old-school sensibilities mated with the iconic hues of pixelated indie charm." Christian Donlan of Eurogamer praised the game's "intoxicating" atmosphere, as well as Disasterpeace's "typical delight" of a soundtrack. Kevin VanOrd of GameSpot cites the game's art direction as "rich and thoughtful," and comments on its "fluid, demanding, and fair" combat system. Mixed criticism commonly falls upon the minimalism of the game's storytelling method. Tyrrel alleges its "abstract storytelling" is a negative aspect, while Griffin McElroy of Polygon claims that the game's story is replaced with "moods" and "quiet moments with constant scenes of breakneck, pitch-perfect action."
Accolades Legacy Heart Machine's next game, Solar Ash (originally announced as Solar Ash Kingdom), was announced in March 2019, and is set in the same universe as Hyper Light Drifter, though it is not a direct sequel. It will be released on the PlayStation 5. The Drifter is a playable character in the games Runbow and Brawlout, as well as in the upcoming game Hex Heroes. The Drifter will also be added as an expansion character in the board game Kingdom Death: Monster. Hyper Light Drifter is also featured on one of the clothing options for Travis Touchdown in Travis Strikes Again: No More Heroes. Animated limited television series A limited animated television series based on Hyper Light Drifter was announced in March 2019 by Preston and Adi Shankar, who had helped lead the Castlevania adaptation and a planned Devil May Cry series. The two are currently writing scripts for the episodes and developing the series. They plan to retain elements of the game's pixel art style, while still borrowing from anime influences. See also List of GameMaker Studio games References External links Official website at Abylight 2016 video games Action role-playing video games Cancelled PlayStation Vita games Cancelled Wii U games Crowdfunded video games GameMaker Studio games IOS games Kickstarter-funded video games Linux games MacOS games Nintendo Switch games Ouya games PlayStation 4 games PlayStation Network games Role-playing video games Video games developed in the United States Video games scored by Richard Vreeland Windows games Xbox One games
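The ammunition mechanic described in the Gameplay section, recharging ranged ammo through melee hits rather than pickups, can be sketched as follows. This is an illustrative Python reconstruction with invented names and values; the actual game is built in GameMaker Studio and its real code is not public:

```python
# Sketch of the "sword hits charge the gun" loop described in the article.
# MAX_AMMO and CHARGE_PER_HIT are hypothetical tuning values.
class Drifter:
    MAX_AMMO = 4
    CHARGE_PER_HIT = 0.25  # fraction of one ammo unit gained per sword hit

    def __init__(self) -> None:
        self.ammo = float(self.MAX_AMMO)

    def sword_hit(self) -> None:
        """Landing a melee hit on an enemy or object charges the gun."""
        self.ammo = min(self.MAX_AMMO, self.ammo + self.CHARGE_PER_HIT)

    def fire_gun(self) -> bool:
        """Firing consumes one full ammo unit, if available."""
        if self.ammo >= 1.0:
            self.ammo -= 1.0
            return True
        return False
```

The design consequence, which the article's comparison to scavenged ammunition hints at, is that ranged attacks are paced by melee engagement rather than by exploration.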
37881967
https://en.wikipedia.org/wiki/Expeditionary%20Combat%20Support%20System
Expeditionary Combat Support System
The Expeditionary Combat Support System (ECSS) was a failed enterprise resource planning (ERP) software project undertaken by the United States Air Force (USAF) between 2005 and 2012. The goal of the project was to automate and streamline the USAF's logistics operations by, in part, consolidating and replacing over 200 separate legacy systems with a single integrated ERP system built from commercial off-the-shelf software, enabling the organization to track all of its physical assets, including airplanes, fuel, and even spare parts, and to make efficiency savings. The ECSS program was established through two main contracts: the first with the database software company Oracle, to supply the commercial off-the-shelf (COTS) software, and the second with Computer Sciences Corporation (CSC), to integrate the COTS software into the existing Air Force infrastructure. After spending $1.1 billion on its development, the USAF concluded in 2012 that the system "has not yielded any significant military capability" and estimated that "it would require an additional $1.1B for about a quarter of the original scope to continue and fielding would not be until 2020." Based on that conclusion, the USAF canceled the program in November 2012. United States Senate Committee on Armed Services members Carl Levin and John McCain characterized the failed project as "one of the most egregious examples of mismanagement in recent memory." References 2005 establishments in the United States 2012 disestablishments in the United States Projects established in 2005 Projects disestablished in 2012 Military logistics of the United States Projects of the United States Air Force United States military scandals 21st-century history of the United States Air Force Discontinued custom software projects
22970596
https://en.wikipedia.org/wiki/Oracle%20Beehive
Oracle Beehive
Oracle Beehive is collaboration platform software developed by Oracle Corporation that combines email, team collaboration, instant messaging, and conferencing in a single solution. It can be deployed on-premises as licensed software or subscribed to as software-as-a-service (SaaS). Features Key components Oracle Beehive includes a platform along with four main components: Beehive Platform: unified architecture and data store for collaboration and communication services. Includes restricted-use licenses for Oracle Database and Oracle Fusion Middleware. Enterprise Messaging: Email, calendar, address book, and task management accessible via Microsoft Outlook, Zimbra Web Client, and a selection of mobile phones. Team Collaboration: team workspaces with document library, team wiki, team calendar, team announcements, RSS and contextual search. Synchronous Collaboration: web conferencing and VoIP audio conferencing via the Beehive Conferencing Client. Interoperability Oracle Beehive supports the following standards for greater interoperability and to allow the use of different communication and collaboration clients: IMAP and SMTP for email Open Mobile Alliance DS and Push-IMAP for email over mobile devices CalDAV for calendar scheduling management XMPP for instant messaging and presence WebDAV for file management JSR 170 for accessing content repositories BPEL for automating business processes LDAP for directory services Java Management Extensions (JMX) for system management and monitoring Product enhancements In May 2009, Oracle released version 1.5 of Oracle Beehive with new capabilities including web-based team workspaces that include features such as file sharing, team wikis, team calendar, RSS support, and contextual search. Beehive 1.5 also includes added security and recording capabilities for audio and web conferencing and expanded integration with desktop productivity tools like Microsoft Outlook and Windows Explorer. In February 2010, Oracle released version 2.0 of Oracle Beehive. New capabilities include a software development kit with REST APIs, coexistence support for Lotus Domino email systems, integration with Oracle Universal Content Management and Oracle Information Rights Management software, and team collaboration enhancements including a user directory, discussion forums, task assignments, tagging, and faceted search. Project collaborations In November 2018, Oracle collaborated with the UK-based global project The World Bee Project to create a network of AI "smart hives". The project uses artificial intelligence and machine learning to analyze data from sensors placed in beehives, stored in the Oracle Cloud, to help researchers understand the relationship between bees and their environment. See also Oracle Fusion Middleware Oracle Database List of collaborative software Enterprise 2.0 References External links Oracle Beehive Web Page Oracle Beehive Documentation Oracle Beehive FAQ Oracle Beehive Object Model Unified communications Collaborative software Groupware Collaboration Oracle software Service-oriented architecture-related products
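Because Beehive exposes open standards such as IMAP, any standards-compliant client can talk to it rather than only Oracle's own clients. A minimal sketch using Python's standard-library imaplib follows; the hostname and credentials are placeholders, not real Beehive endpoints:

```python
import imaplib

# Connect over SSL to a standards-compliant IMAP server
# (hostname and credentials are placeholders).
HOST = "beehive.example.com"

conn = imaplib.IMAP4_SSL(HOST)
conn.login("user@example.com", "app-password")

# Select the inbox read-only and count unseen messages.
conn.select("INBOX", readonly=True)
status, data = conn.search(None, "UNSEEN")
unseen = data[0].split()
print(f"{len(unseen)} unseen messages")

conn.logout()
```

The same interoperability argument applies to the other listed protocols: a CalDAV calendar client or an XMPP chat client could be substituted without any Beehive-specific code.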
31300012
https://en.wikipedia.org/wiki/SunComm%20Technology
SunComm Technology
SunComm Technology is a Taiwanese multinational computer technology and GSM Voice over IP gateway manufacturer. As of 2010, its main products were GSM VoIP gateways and IP surveillance camera devices. Core members have worked in the communications and networking industry since 1977. History SunComm Technology Co., Ltd entered the communications and networking industry in 1977. In 2007, it developed a GSM gateway; in 2008, VoIP products; and in 2010, a GSM-to-VoIP gateway. Operations A liaison office has been set up in Dongguan, Guangdong Province, China, to provide service from Taiwan and China. SunComm Technology Co., Ltd provides: VoIP Wifi IP Phone gateways with 2, 4, 6, 8, 16, or 24 ports; a SIP Proxy Server for 200 users with 2, 4, or 8 embedded FXS or FXO ports; a SIP IP PBX with 72 SIP lines and 24 SIP trunks; Web Call Servers with 10 or 30 concurrent calls; GSM VoIP devices; a GSM Gateway (FWT); GSM VoIP gateways with 1 or 2 channels; and a GSM E1 Channel Bank (30 channels) (dual-band 900/1800 MHz or quad-band 900/1800/1900 MHz). Products A VoIP phone has the following hardware components: Keypad & touchpad to enter phone numbers and text. Speaker & earphone and microphone. General-purpose processor (GPP) to process application messages. Display hardware to give feedback on user input and show caller ID & messages. A voice engine or a digital signal processor (DSP) to process RTP messages. Some IC manufacturers provide the GPP and DSP in a single chip. Ethernet or wireless network hardware to send and receive messages on the data network. A power source, which might be a battery or DC source; some VoIP phones receive electricity from Power over Ethernet. ADC and DAC converters to convert voice to digital data and vice versa. Other devices There are several Wi-Fi enabled mobile phones and PDAs that come pre-loaded with SIP clients, or are capable of running IP telephony clients. Some VoIP phones also support PSTN phone lines directly. Gateway devices Analog telephony adapters are connected to the Internet or a local area network using an Ethernet port and have sockets to connect one or more PSTN phones. Such devices are sent out to customers who sign up with various commercial VoIP providers, allowing them to continue using their existing PSTN-based telephones. Another type of gateway device acts as a simple GSM base station, and regular mobile phones can connect to it and make VoIP calls. While a license is required to run one of these in most countries, they can be useful on ships or in remote areas where a low-powered gateway transmitting on unused frequencies is likely to go unnoticed. Ethernet hub Voice over IP E1 GSM Channel Bank with VoIP GSM Fixed Phone (FWP) IP Phone / Wifi IP Skype Phone Desktop Router 3G Wifi AP Router Gateway 3G GSM Gateway GSM Gateway PoE Switch Server 3G HSUPA / HSDPA / EVDO / EDGE Modem GSM Remote SIM Switch / Server Wifi ATA / VoIP Gateway Terminal 3G VoIP Terminal CDMA VoIP Terminal GSM VoIP Terminal Modem GSM / Wifi Dual Mode Payphone Disadvantages of VoIP phones IP networks, particularly residential Internet connections, are easily congested. This can cause poorer voice quality or the call to be dropped completely. VoIP phones, like other network devices, can be subjected to denial-of-service attacks as well as other attacks, especially if the device is given a public IP address. Due to the latency induced by protocol overhead, they do not work as well on satellite Internet and other high-latency Internet connections.
Requires Internet access to make calls outside the local area network (LAN) unless a compatible local PBX is available to handle calls to and from outside lines. VoIP phones and routers depend on mains electricity for power, unlike PSTN phones, which are supplied with power from the telephone exchange. However, this can be mitigated by installing a UPS. See also 3G (3rd generation) Business telephone system (PBX) GSM (Global System for Mobile Communications) List of companies of Taiwan Network address translation (NAT) Network bridge Power over Ethernet (PoE) Session Initiation Protocol (SIP) Skype Voice over IP (Voice over Internet Protocol) Wi-Fi Wireless access point (WAP) References Company registration data search (Ministry of Economic Affairs, Department of Commerce) GSM operators to face Lottery Commission's anger Avaya CEO Predicts Broad Adoption Of SIP-Powered UC Virtual PBX Offers IP Solution External links SunComm Technology Co., Ltd GSM 900 Frequency and Provider Chart GSM 1800 Frequency and Provider Chart Windows Phone 7 will be GSM-only in 2010 AT&T lets 3G VoIP onto iPhone China locks down Voip Gmail gets Voip support From Voip to Unified Communications: Simplify System Management 1977 establishments in Taiwan Electronics companies of Taiwan Companies based in New Taipei Manufacturing companies based in New Taipei Computer companies established in 1977 Electronics companies established in 1977 Taiwanese brands
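The RTP processing mentioned in the hardware list above centers on a fixed 12-byte packet header, defined in RFC 3550, which the voice engine prepends to each audio frame. Below is a minimal sketch of packing that header in Python; the field values are illustrative:

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int,
               payload_type: int = 0, marker: int = 0) -> bytes:
    """Pack the fixed 12-byte RTP header defined in RFC 3550."""
    version, padding, extension, csrc_count = 2, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | payload_type  # payload type 0 = PCMU (G.711 u-law)
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

# One 20 ms G.711 frame at 8 kHz advances the timestamp by 160 samples.
hdr = rtp_header(seq=1, timestamp=160, ssrc=0x1234ABCD)
assert len(hdr) == 12
print(hdr.hex())
```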
8705124
https://en.wikipedia.org/wiki/Cheating%20in%20chess
Cheating in chess
Cheating in chess is a deliberate violation of the rules of chess or other behaviour that is intended to give an unfair advantage to a player or team. Cheating can occur in many forms and can take place before, during, or after a game. Commonly cited instances of cheating include: collusion with spectators or other players, use of chess engines during play, rating manipulation, and violations of the touch-move rule. Many questionable practices are not comprehensively covered by the rules of chess, and on ethical or moral grounds alone, such practices may be judged by some as acceptable and by others as cheating. Even if an arguably unethical action is not covered explicitly by the rules, article 11.1 of the FIDE laws of chess states: "The players shall take no action that will bring the game of chess into disrepute." (This was article 12.1 in an earlier edition.) For example, while deliberately sneaking a captured piece back onto the board may be construed as an illegal move that is sanctioned by a time bonus to the opponent and a reinstatement of the last legal position, the rule forbidding actions that bring chess into disrepute may also be invoked to hand down a more severe sanction such as the loss of the game. FIDE has covered the use of electronic devices and the manipulation of competitions in its Anti-Cheating Regulations, which must be enforced by the arbiter. Use of electronic devices by players is strictly forbidden. Further, the FIDE Arbiter's manual contains detailed anti-cheating guidelines for arbiters. Online play is covered separately. History and culture Cheating at chess is almost as old as the game itself, and may even have caused chess-related deaths. According to one legend, a dispute over cheating at chess led King Cnut of the North Sea Empire to murder a Danish nobleman. One of the most anthologized chess stories is Slippery Elm (1929) by Percival Wilde, which involves a ruse to allow a weak player to beat a much stronger one, using messages passed on slippery-elm throat lozenges. Television shows have engaged the plot of cheating in chess, including episodes of Mission: Impossible and Cheers. In televised shows based on the humourist Tenali Rama (a real-life figure at the court of Krishnadeva Raya, ruler of Vijayanagara during its most prosperous period), a loud-mouthed "unbeatable champion" who mostly wins by cheating takes advantage of the emperor falling asleep from boredom: he and the followers who accompanied him from a rival kingdom begin shouting, convincing the assembly that he has won. Automaton hoaxes In contrast to the modern methods of cheating by playing moves calculated by machines, in the 18th and 19th centuries, the public were hoaxed by the opposite deception, in which machines played the moves of hidden humans. The first and most famous of the chess automaton hoaxes was The Turk (1770), followed by Ajeeb (1868), and Mephisto (1886). Collusion Over the years, there have been many accusations of collusion, either of players deliberately losing (often to help a friend or teammate get a title norm), or of players agreeing to draws to help both players in a tournament. One of the earliest recorded instances involves the Fifth American Chess Congress in 1880, when Preston Ware accused James Grundy of reneging on a deal to draw their game, with Grundy instead trying to play for a win.
A newspaper article contemporary to the event stated, "Ware's avowal of his right to sell a game in a tourney was a novelty in chess ethics ... Ware's veracity has not been questioned, only his obliquity of moral vision ..." Six prior allegations of similar collusion and bribery, including another against Ware, were listed from 1876 to 1880 in that article on the Ware-Grundy affair, which was published in the Brooklyn Eagle on 8 February 1880. More recently, researchers at Washington University in St. Louis have claimed, based on statistical tests of match records against economic models, that Soviet chess masters may have colluded in world chess championships held from 1940 to 1964. The Washington University study argues from the researchers' statistical findings that Soviet players may have agreed to draws between themselves to improve their standings. Opinions differ over how effective collusion may be. For example, if a leading player draws his game, it may allow his rivals to gain ground on him by winning their games. During the Cold War, Soviet players were accused of colluding with each other as if they were playing for the same team: setting up easy draws with each other so that they could focus their attention on games against non-Soviet players, or resigning outright when a favored player faced a lesser one. The most famous alleged instance was at the 1962 Candidates Tournament for the 1963 World Chess Championship, where the three top-finishing Soviet players drew all their games against each other. Journalist Nicholas Gilmore thought that Western Bloc accusations of Soviet collusion (especially by the American Bobby Fischer) were "largely unfounded; but not completely", while a 2009 journal article by two economics professors argued that the Soviets did collude effectively during the period. In 2011, IM Greg Shahade wrote that "prearrangement of results is extremely commonplace, even at the highest levels of chess. This especially holds true for draws... There is a bit of a code of silence at the top levels of chess." The subject had been partially broached (in the U.S. context) by Alex Yermolinsky a few years earlier: "It's no secret how people act when facing a last-round situation when a draw gives no prize ... People will just dump games, period." Concerning an incident involving qualification for the 2006 US Championship, Shahade blamed the Swiss system for creating perverse incentives. Frederic Friedel reported that the PCA had considered running a series of open tournaments in the 1990s but ultimately declined for reasons similar to those given by John Nunn, since deliberately losing games was "very real in the many open tournaments that are staged all over the world."

Touch-move rule

In chess, the "touch-move" rule states that if a player whose turn it is to move touches one of his pieces, that piece must be moved if it has a legal move. In addition, if a piece is picked up and released on another square, the move must stand if it is legal. If an opponent's piece is touched, it must be captured if that is legal. These rules are often difficult to enforce when the only witnesses are the two players themselves; nevertheless, violations of them are considered cheating. In one famous instance, Garry Kasparov changed his move against Judit Polgár in 1994 after momentarily letting go of a piece. Kasparov went on to win the game.
The tournament officials had video records proving that his hand had left the piece, but refused to release the evidence. A factor counting against Polgár was that she waited a whole day before complaining; such claims must be made during the game. The videotape revealed that Kasparov did let go of the piece, for one-quarter of a second. Cognitive psychologist Robert Solso stated that this is too short a time to make a conscious decision. Another famous incident occurred in a game between Milan Matulović and István Bilek at the Sousse Interzonal in 1967. Matulović played a losing move but then took it back after saying "J'adoube" ("I adjust", which should be announced before adjusting pieces on their squares). His opponent complained to the arbiter, but the modified move was allowed to stand. This incident earned Matulović the nickname "J'adoubović". The 2003 European Championship saw a "takeback game" between Zurab Azmaiparashvili and Vladimir Malakhov, who eventually finished first and second in the event. According to the book Smart Chip by Genna Sosonko:

Both grandmasters were fighting for the lead, and the encounter had huge sporting significance. In an ending that was favourable to him, Azmai[parashvili] picked up the bishop, intending to make a move with it instead of first exchanging rooks. Malakhov recalled: "Seeing that the rooks were still on the board, he said something like, 'Oh, first the exchange, of course,' put his bishop back, took my rook, and the game continued. I don't know what should have been done differently in this situation. In Azmai's place, some might have resigned immediately, and in my place, some would have demanded that he make a move with his bishop, but I didn't want to ruin the logical development of the duel, so I didn't object when Azmai made a different move: the mistake was obviously nothing to do with chess! When we signed the score sheets, Azmai suggested to me that we consider the game a draw. After the game I was left with an unpleasant aftertaste, but that was due mainly to my own play."

Illegal moves and board manipulation

A dishonest player can make an illegal move and hope their opponent does not notice. Over time, the rules of chess have prescribed differing penalties for making an illegal move, varying from outright loss of the game on the spot to backing the game up and adding time to the other player's clock, but any penalty applies only when the illegal move is noticed. Normally, illegal moves are simple mistakes made under time pressure, but they are considered cheating if made intentionally. Intentional use of an illegal move is rare in high-level games. In all but the fastest games, sufficiently skilled chess players have so strong a mental picture of the board state that a manipulation is obvious, and the penalties for making an illegal move mean that it is rarely worthwhile if the cheating player is caught. A rare example in which a high-level player was accused of this came at the 2017 World Blitz Chess Championship in Riyadh. Ernesto Inarkiev was playing Magnus Carlsen, and with little time left on both players' clocks, Inarkiev, whose own king was in check, made an illegal move that gave check to Carlsen's king instead of addressing the check against his own. Carlsen automatically moved his king out of check, but Inarkiev then claimed that Carlsen had made an illegal move, and that Carlsen's only legal play had been to point out Inarkiev's illegal move.
The deputy arbiter agreed with Inarkiev's interpretation of the rules and awarded him the win. The decision was appealed and the initial ruling was overturned, with a new ruling that play should resume. Inarkiev was criticized for gamesmanship in attempting to use his own illegal move as a way to win, and Carlsen took the win after Inarkiev refused to resume play.

Board manipulation

An extreme form of illegal moving is outright manipulation of the board, such as adjusting a piece that sits on the border of two squares onto the wrong square, removing the opponent's pieces, or adding extra pieces to the cheater's own position, perhaps while the opponent is away from the board or via sleight-of-hand techniques borrowed from close-up magic. This almost never happens in tournaments, but can happen in casual games, where there is essentially no penalty for getting caught. A few "chess hustlers" playing casual games of speed chess for money in public parks have been caught using such techniques, although it is agreed that most hustlers do not cheat. A rare possible example of physical piece manipulation at the grandmaster level involved removing captured, off-the-board pieces. At the 2017 Canadian Championship in Montreal, GM Bator Sambuev was playing IM Nikolay Noritsyn in a blitz game. Noritsyn was in a good position but was extremely low on time, with less than 10 seconds left. He moved a pawn to the eighth rank to promote, and FIDE rules require that a pawn be replaced by the promoted piece before the move is complete and the clock can be pressed. Sambuev, however, had the black queen concealed in his hand, and the arbiters had not arranged for spare queens to be available. Pressed for time, Noritsyn opted to use an upside-down rook to indicate he wanted a queen, rather than pause the game to complain that the queen was missing from the supply and ask for a spare. The angle-shooting aspect came after the arbiters paused the game: rather than inform the arbiter that he had the queen in his hand and had accidentally forgotten to place it with the other captured pieces, Sambuev quietly put the queen back with the others and let the arbiters conclude that Noritsyn had simply erred in taking a rook. The arbiters ruled that the rook promotion stood. Sambuev went on to win the game, and Noritsyn's appeal, after video showed Sambuev holding and then replacing the queen, was denied.

Cheating with technology

Technology has been used by chess cheaters in several ways. The most common is to use a chess program while playing chess remotely, such as on the Internet or in correspondence chess. Rather than playing the game directly, the cheater simply inputs the moves so far into the program and follows its suggestions, essentially letting the program play for them. Electronic communication with an accomplice during face-to-face competitive chess is a similar type of cheating; the accomplice can either be using a computer program or simply be a much better player than their associate. Modern chess websites analyze games after the fact to give a probabilistic determination of whether a player received surreptitious help, as part of an effort to detect and discourage such behavior. Attempts to compensate for latency in online play are a potential area for exploitation. Many chess programs attempt to ensure that a player's clock starts running only once the player has received the opponent's move, to ensure fairness when two distant players are matched with each other.
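As a rough illustration (a hedged sketch, not any particular service's implementation), the following Ada procedure shows what such client-trusting lag compensation amounts to; the procedure name and every number are hypothetical:

with Ada.Text_IO; use Ada.Text_IO;

procedure Lag_Compensation_Sketch is
   --  Hypothetical numbers for a single move in a one-minute game.
   Server_Elapsed  : constant Duration := 5.0;  --  wall-clock time the server measured
   Client_Reported : constant Duration := 4.0;  --  "think time" the client claims
   Clock_Remaining : Duration := 60.0;
begin
   --  A lag-compensating server charges the player only the reported time;
   --  the difference is treated as network delay and absorbed by the server.
   Clock_Remaining := Clock_Remaining - Client_Reported;
   Put_Line ("Clock remaining:" & Duration'Image (Clock_Remaining));
   Put_Line ("Unverified delay absorbed:" &
             Duration'Image (Server_Elapsed - Client_Reported));
end Lag_Compensation_Sketch;

A server that simply trusts the reported figure has no way to distinguish genuine network delay from deliberate under-reporting, which is what the stratagems described next exploit.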
This allowed a number of stratagems if the client-side timing could be compromised, such as pretending to have a very slow router, which would essentially put extra time on the cheater's clock. For example, the cheater might take 5 seconds to make a move after seeing their opponent's move, but their software would report to the server that only 4 seconds were taken, a significant advantage in rapidly paced games.

Incidents

Because technological cheating incidents have multiplied greatly, the following examples concentrate only on those that are either at a high level or of historical significance.

High-profile

In the 2010 FIDE Olympiad Tournament at Khanty-Mansiysk, three French players were caught in a scheme to use a computer program to decide moves. Their plan involved one player, Cyril Marzolo, following the tournament at home and using the computer program to decide the best moves. He would send the moves by SMS to the team coach, Arnaud Hauchard, who would then stand or sit at various tables as a signal to the player, Sébastien Feller, to make a certain move. The FIDE Ethics Commission suspended Feller for two years and nine months, Marzolo for one year and six months, and Hauchard for three years. Unlike in other cases, each player involved was a legitimate Grandmaster or International Master. None of the other players on the team knew of the scheme or was involved. The scandals surrounding Borislav Ivanov were a cause célèbre in the chess world in 2012 and 2013, with cheating first alleged at the Zadar Open and then in Kyustendil. The Bulgarian Chess Federation banned him for four months, a ban based not on the cheating allegations but on Ivanov's rude behavior toward his accusers, and even that ban was overturned on procedural grounds. After various interludes, he was banned permanently by the Bulgarian Chess Federation. The incidents were significant as among the first in which statistical methods were used to analyze move-matching with computer programs, even though in the end such evidence was never used in a formal legal procedure. At the 2014 Iasi Open, Wesley Vermeulen was caught cheating by consulting a mobile phone in the toilet; he admitted his offense and was eventually banned for one year by both the Dutch chess federation and FIDE. In April 2015, Georgian grandmaster Gaioz Nigalidze was banned from the Dubai Open Chess Tournament after officials discovered him consulting a smartphone with chess software in the washroom during a game. He was later stripped of his grandmaster title and banned from competition for three years, though he was allowed to keep his International Master title. In February 2016, Sergey Aslanov was expelled from the Moscow Open for a smartphone found in the toilet, hidden under a loose tile behind a drainpipe. He declared himself guilty of an error but not a crime, and was suspended for only one year. In July 2019, Igors Rausis was caught cheating at the Strasbourg Open, using a mobile phone in the bathroom. He admitted to having cheated and announced his retirement from chess. On October 1, 2020, Wesley So accused Tigran L. Petrosian of cheating in his semi-final and final games during the Chess.com 2020 PRO Chess League. So was the eighth-highest-rated player in the world at the time. Petrosian responded to So on Twitter with childish taunts. Chess.com found that Petrosian, and by extension his team, the Armenia Eagles, had violated fair play regulations.
The team was disqualified and the Saint Louis Arch Bishops were subsequently crowned champions. Chess.com and the PRO Chess League both issued lifetime bans to Petrosian. In March 2021, an Indonesian player identified as Dadang Subur played a computer-assisted game against Levy Rozman, achieving over 90% accuracy. Rozman identified a pattern of cheating and reported Dadang's account. The Indonesian media claimed that Dadang was a legitimate player, inciting an online firestorm against Rozman. Dadang's subsequent match against IM Irene Sukandar drew 1.25 million live viewers on YouTube, a record for a chess stream. Dadang lost three consecutive games to Sukandar, with an accuracy of less than 40%.

Historical

One of the earliest known cases of using technology to cheat occurred at the 1993 World Open. An unrated player using the name "John von Neumann" scored 4½ points out of 9 games in the Open Section, including a draw against a grandmaster. This player wore headphones during the tournament and had a suspicious bulge in his pocket that buzzed at certain moments of the game. He was disqualified when the tournament director found that he lacked even a basic understanding of chess. At the 1998 Böblingen Open, Clemens Allwermann was afterwards accused of cheating using the engine Fritz; after an investigation by the district attorney proved inconclusive as to the evidence, the Bavarian Chess Federation barred him from participating in future tournaments. At the 2002 Lampertheim Open, the arbiter announced the disqualification of a player before round seven. Markus Keller explained what had happened:

In the sixth round a player came to me and said he suspected his opponent, W.S. from L., was using illicit aids during the game. He often left the board for protracted periods of time to go to the toilet, even when (especially when) it was his turn to play. He had done this in earlier rounds against other players as well. I watched W.S. and noticed that he played a number of moves very rapidly and then disappeared in the toilet. I followed him and could hear no sound coming from the stall. I looked under the door and saw that his feet were pointing sideways, so that he could not have been using the toilet. So I entered the neighbouring stall, stood on the toilet bowl and looked over the dividing wall. I saw W.S. standing there with a handheld PC which displayed a running chess program. He was using a stylus to operate it. I immediately disqualified the player. When confronted he claimed that he was only checking his emails, so I asked him to show me the computer, which he refused to do. There are witnesses for my investigation in the toilet, and we will ask the chess federation of our state to ban the player from playing in other tournaments.

In the HB Global Chess Challenge 2005 (in Minneapolis, Minnesota), a player in the Under-2000 section exited the event under suspicion of cheating while his final-round game was under way. According to tournament officials, he had been caught repeatedly talking on his cell phone during his game, which the published rules for that event expressly prohibited. Directors suspected that he was receiving moves over the phone from an accomplice elsewhere in the building. His results were expunged from the tournament and an ethics complaint was lodged. Six weeks later, the same player entered the World Open and tied for first through third place in the Under-2200 section, pocketing $5,833.
An attempt was made to eject him midway through that event, when the organizers belatedly learned about the earlier incident in Minnesota; but, lacking any specific allegation that he was cheating in the World Open, they backtracked and re-admitted him after he threatened legal action. At the 2006 Subroto Mukerjee memorial international rating chess tournament, an Indian chess player was banned from playing competitive chess for ten years for cheating. During the tournament at Subroto Park, Umakant Sharma was caught receiving instructions from an accomplice using a chess computer, via a Bluetooth-enabled device that had been sewn into his cap. His accomplices were outside the building, relaying moves from the computer. Officials had become suspicious after Sharma made unusually large rating gains during the previous 18 months, even qualifying for the national championship. Umakant began the year with an average rating of 1933, and in 64 games gained over 500 points to reach a rating of 2484. Officials received multiple written complaints alleging that Umakant's moves followed exactly the sequence suggested by the chess computer. Eventually, in the seventh round of the tournament, Indian Air Force officials searched the players on the top eight boards with a metal detector and found that Umakant was the only player cheating. Umakant's ten-year ban was imposed by the All India Chess Federation (AICF) after it reviewed evidence presented by Umakant himself and the electronic devices seized by the tournament organizers. The penalty was considered harsh, especially as offenders in other sports found to have been doping or match-fixing have not received such lengthy suspensions. Asked about the suspension, officials stated, "We wanted to be frank and send a stern message to all players. It is like cheating on exams." At the 2006 Philadelphia World Open, Steve Rosenberg, who was playing in a lower section, was leading before the final round. A victory would have been worth about $18,000. He was confronted by a tournament director and found to be using a wireless transmitter and receiver called a "Phonito". He was disqualified from the event. In the same event, Eugene Varshavsky was also accused of cheating, owing to his unusually good performance for a low-ranked player and his moves matching those of the commercially available chess engine Shredder, although no cheating device or transmitter was found on his person or in the bathroom stall he had been occupying. In a 2007 Dutch League 2C match between Bergen op Zoom and AAS, the arbiter caught the team captain of AAS (who was playing on board 6) using a PDA. The player was outside the playing hall, with permission, to get some fresh air. The arbiter had followed him and caught him using Pocket Fritz; the current position of the game was shown on the screen. The arbiter declared the game lost and informed the Dutch Federation of the incident. The competition manager imposed a heavy penalty, applying article 20.3 of the Federation's competition regulations: the player was banned from playing in Dutch League and Cup matches, not only for that season but also for the next two seasons. At the 2008 Dubai Open, M. Sadatnajafi, an untitled Iranian player (rated 2288 at the time), was disqualified from the tournament after he was caught receiving suggested moves by text message on his mobile phone while playing Grandmaster Li Chao.
The game was being relayed live over the Internet, and it was alleged that his friends were following it and guiding him using a computer. At the Norths Chess Club Centenary Year Under-1600 Tournament, a 14-year-old boy was caught using what the arbiter called a "hand-held machine" in the toilets. The game was declared lost and the boy was expelled from the tournament. He had been using the program Chessmaster on a PlayStation Portable, which is probably why the moves were not particularly strong. It was the first instance in Australia of a chess player being caught using an electronic device, and it quickly became a big story in the relatively small Australian chess community. At the 2011 German Chess Championship, FM Christoph Natsidis used a chess program on his smartphone during his last-round game against GM Sebastian Siebrecht. Natsidis admitted that he had cheated and was disqualified from the championship. At the 2012 Virginia Scholastic and Collegiate Championships, a player was caught using a chess engine running on a PDA. The player was disqualified from the tournament, had his membership in the Virginia Chess Federation suspended, and had an ethics complaint filed against him with the USCF. Unlike in other incidents, the player disguised his use of the chess engine as use of eNotate, one of two electronic chess notation programs permitted at USCF tournaments. While the player admitted to using the chess engine only in that one game, his results suggested he had been using the program for several tournaments. At the 2013 Cork Congress Chess Open, a 16-year-old player was found to be using a chess program on a smartphone when his opponent confronted him in the toilets by kicking down the cubicle door and physically hauling him out. The opponent received a ten-month ban for violent conduct; the 16-year-old player was banned for four months for cheating. In January 2016, the blind Norwegian player Stein Bjørnsen was accused of cheating after playing games that showed a very high correlation with computer analysis. Because of his disability, Bjørnsen had been allowed to keep a record of his moves with a recorder coupled to an ear plug. The ear plug was later found to be incompatible with the recorder, but capable of receiving messages by Bluetooth. In April 2016 he received a two-year ban from domestic competition from the Central Board of the Norwegian Chess Federation (NSF). Bjørnsen's appeal to the federation's rules committee was turned down in September 2016. Bjørnsen returned in 2018 after serving the ban. In March of that year he was caught with a Bluetooth earpiece taped to his hand during a club tournament in Horten, and the federation expelled him in May 2018. In June 2021, the Indian billionaire Nikhil Kamath cheated against former world champion Viswanathan Anand in a live simultaneous exhibition charity event organised by the chess.com website. The website banned Kamath's account for violating its fair play policy. The incident led to heavy criticism and discussion on social media, after which Kamath confessed to using analysts and computers during his game and apologised for "causing confusion".

Rating manipulation

Since the introduction of Elo ratings in the 1960s, a number of attempts have been made to manipulate the rating system, either to deliberately inflate one's rating or to disguise one's strength by deliberately losing rating points.
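The arithmetic behind such manipulation follows directly from the standard Elo update rule, Rating := Rating + K * (Score - Expected). The sketch below is an illustration only: the K-factor, ratings, and procedure name are hypothetical, and real federations apply further rules. It estimates the effect of ten deliberately thrown games:

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

procedure Sandbagging_Sketch is
   K : constant Float := 20.0;  --  illustrative K-factor; real values vary by federation

   --  Standard Elo expected score: E = 1 / (1 + 10 ** ((R_Opp - R_Own) / 400)).
   function Expected (R_Own, R_Opp : Float) return Float is
   begin
      return 1.0 / (1.0 + 10.0 ** ((R_Opp - R_Own) / 400.0));
   end Expected;

   Rating : Float := 2000.0;  --  hypothetical starting rating
begin
   --  Each deliberate loss applies Rating := Rating + K * (Score - Expected),
   --  with Score = 0; here, ten losses to 1800-rated opposition.
   for I in 1 .. 10 loop
      Rating := Rating + K * (0.0 - Expected (Rating, 1800.0));
   end loop;
   Put_Line ("Rating after ten thrown games:" & Float'Image (Rating));
end Sandbagging_Sketch;

At these numbers, each early loss costs roughly 15 points, so a determined player can shed well over a hundred points in a single stretch of play, enough to drop into a lower-rated prize section.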
Sandbagging

Sandbagging involves deliberately losing rated games in order to lower one's rating, so as to become eligible for the lower-rated section of a tournament with substantial prize money. This is most common in the United States, where the prize money for large open tournaments can exceed $10,000 even in the lower-rated sections. Sandbagging is very difficult to detect and prove, so the USCF has introduced minimum ratings, based on previous ratings or money winnings, to minimize its effect.

Small pools of players

A limited pool of players who rarely or never play against players from outside that pool can cause distortions in the Elo rating system, especially if one or more of the players is significantly stronger than the others, or if the results are deliberately manipulated. Claude Bloodgood was accused of manipulating the USCF rating system in this way; at his peak in 1996 his USCF rating was in excess of 2700, the second highest in the country at the time. As a long-term prison inmate, he was necessarily restricted in the range of opponents available to him. The USCF suspected that he had deliberately inflated the ratings of his opponents; Bloodgood denied this, attributing his inflated rating to a quirk in the rating system resulting from his regularly playing against a limited pool of much weaker players. There was widespread reporting of anomalous Burmese (Myanmar) rating movements in the late 1990s, with Milan Novkovic giving an analysis of the manipulation in Schach magazine.

False tournament reports

The most notable international example of ratings manipulation involves the Romanian Alexandru Crisan, who allegedly falsified tournament reports to gain a Grandmaster title and was ranked 33rd in the world on the April 2001 FIDE rating list. A committee overseeing the matter recommended that his rating be erased and his Grandmaster title revoked. While the Romanian Chess Federation initially favored action against Crisan, he eventually became the RCF president and changed the policy, creating such a situation that FIDE intervened to broker a resolution of the RCF's many problems, including Crisan's rating. Crisan was then arrested and imprisoned on fraud charges relating to his management of the company Urex Rovinari and disappeared from chess, thereby failing to fulfill the conditions of the resolution and so activating the committee's recommendations on title revocation. FIDE did not fully update its online information until August 2015, when all his titles were removed and his rating was adjusted downwards to 2132. Writing about the Crisan case, Ian Rogers alleged that Andrei Makarov (at the time a FIDE vice-president and Russian chess federation president) had arranged an IM title for himself through nonexistent tournaments in 1994. Rumors of rigged tournaments are not unheard of in the chess world. For instance, in 2005, FIDE refused to ratify norms from the Alushta (Ukraine) tournaments, claiming that the games did not meet ethical expectations; a number of the players involved protested the matter. A different Ukrainian tournament in 2005 was found to be completely fake. Usually the strongest players are not involved in these schemes, which exist mainly for careerist players to gain title norms or small rating gains, but Zurab Azmaiparashvili was alleged to have rigged the results of the 1995 Strumica tournament to propel himself into the chess elite.
In 2003, Sveshnikov referred to the high-profile Crisan and Azmaiparashvili incidents as "open secrets", at a time when both purported culprits were heavily involved in FIDE politics.

Simultaneous games

A player with no knowledge of chess can achieve a 50% score in simultaneous chess by replaying the moves made by one opponent, against whom he has white, in the game against another opponent, against whom he has black, and vice versa; the two opponents in effect play each other rather than the giver of the simul. This may be considered cheating in some events, such as Basque chess. The trick can be used against any even number of opponents. Stage magician Derren Brown used it against eight leading British chess players in his television show. In most simultaneous exhibitions, the player giving the exhibition plays the same color (by convention white) in all games, rendering the trick ineffective. Even with a mixed group, attempting it in an in-person circle is rather obvious, because moves are more delayed than usual: the player must always look at a given board, refrain from moving immediately, mirror the move seen there on the opposite board, wait for the reply, and then relay the reply back to the original board.

See also
William Hartston, author of How to Cheat at Chess

References

Bibliography

Cheating
Chess
49772572
https://en.wikipedia.org/wiki/Samarendra%20Kumar%20Mitra
Samarendra Kumar Mitra
Samarendra Kumar Mitra (14 March 1916 – 26 September 1998) was an Indian scientist and mathematician. In 1954 he designed, developed, and constructed India's first computer, an electronic analogue computer, at the Indian Statistical Institute (ISI), Calcutta (presently Kolkata). He began his career as a research physicist at the Palit Laboratory of Physics, Rajabazar Science College (University of Calcutta). In 1950 he joined the ISI, Calcutta, where he worked in various capacities, including professor, research professor, and director. Mitra was the founder and first head of the Computing Machines and Electronics Division at the ISI, Calcutta. In 1954, India's first indigenous electronic analogue computer, for solving linear equations with 10 variables and related problems, was designed and developed by Samarendra Kumar Mitra and built under his direct personal supervision and guidance by Ashish Kumar Maity in the Computing Machines and Electronics Laboratory at the ISI, Calcutta. This computer was used to compute numerical solutions of simultaneous linear equations using a modified version of Gauss–Seidel iteration. Subsequently, in 1963, the ISI, Calcutta began the design and development of India's first second-generation indigenous digital computer, in collaboration with Jadavpur University (JU), Calcutta. The collaboration was led primarily by Mitra, as head of the Computing Machines and Electronics Laboratory, ISI. He designed, developed, and constructed a general-purpose high-speed electronic digital computer, the ISIJU computer (Indian Statistical Institute – Jadavpur University Computer). Under Mitra's leadership, India's first second-generation indigenous digital computer, the transistor-based ISIJU-1, became operational in 1964. The Computer and Communication Sciences Division of the Indian Statistical Institute (ISI) produced many eminent scientists, among them Samarendra Kumar Mitra, its original founder, who attended the first annual convention of the Computer Society of India (CSI), hosted by ISI in 1965. Mitra was a self-taught scholar with wide-ranging interests in fields as varied as mathematics, physics, chemistry, biology, poultry science, the Sanskrit language, philosophy, religion, and literature.

Biography

Samarendra Kumar Mitra was born on 14 March 1916 in Calcutta, the elder of two children; he was the only son and had a younger sister. His father was Sir Rupendra Coomar Mitter and his mother was Lady Sudhahasinee Mitter. His father, Sir Rupendra Coomar Mitter, held an MSc in mathematics from the University of Calcutta, where he was a gold medalist both in mathematics and in law, and was an advocate by profession who practiced in the Calcutta High Court from 1913 to 1934. In 1934 he was appointed a judge of the Calcutta High Court, was acting Chief Justice in 1947 at the time of India's independence, and continued as a judge until 1950. He had been knighted in 1926. Thereafter, he was chairman of the Labour Appellate Tribunal from 1950 to 1955.

Education

Samarendra Kumar Mitra studied at the Bowbazar High School, Calcutta, and completed his matriculation in the 1st division in 1931. In 1933 he completed his Intermediate in Science (I.Sc.) in the 1st division at Presidency College (presently Presidency University), Calcutta (now Kolkata). In 1935 he took his Bachelor of Science with Honours (B.Sc.
Hons) in Chemistry, with 2nd rank, from Presidency College (presently Presidency University), Calcutta (now Kolkata), and was awarded the Cunningham Memorial Prize in Chemistry. In 1937 he completed his Master of Science (M.Sc.) in Chemistry, and in 1940 his Master of Science (M.Sc.) in Applied Mathematics, at the Rajabazar Science College, University of Calcutta. In later years he worked towards a PhD in physics under Professor Meghnad Saha, but did not pursue it after his mentor's death in 1956.

Career

He worked as a research physicist under the Council of Scientific & Industrial Research (CSIR, India) scheme on the design and development of an air-driven ultracentrifuge at the Palit Laboratory of Physics, University of Calcutta, from 1944 to 1948. He was awarded a UNESCO Special Fellowship for the study of high-speed computing machines in the United States of America and the United Kingdom during 1949–50, working at Harvard University and the Institute for Advanced Study, Princeton, in the United States, and at the Mathematical Laboratory, University of Cambridge, in the UK. During his time at the Institute for Advanced Study he became close to numerous eminent physicists and mathematicians, such as Albert Einstein, Wolfgang Pauli, and John von Neumann, and attended lectures by Niels Bohr and Robert Oppenheimer. He is known to have had many discussions with Einstein and to have spent much of his time with him while at Princeton. He worked in various capacities at the Indian Statistical Institute (ISI), Calcutta, from 1950 to 1976, including professor, research professor, and director. The Computing Machines and Electronics Division at the ISI, Calcutta was founded by Mitra. In 1953 he designed and constructed the first computer built in India, an electronic analogue computer for solving linear equations with ten variables and related problems. He was UNTAA Adviser on Computing, Moscow, and was responsible for bringing massive technical aid to India from the USSR, amounting to nearly one crore rupees, under UNTAA in 1955. He was an adviser to the Ministry of Defence, Government of India, for the computation of ballistic trajectories in 1959, and under his advice the firing table for the first gun produced in India was computed in 1962. He was a member of the Indian National Committee for Space Research from 1962 to 1964. In 1963 he was the leader of the team for the design and construction of a general-purpose high-speed electronic digital computer, the ISIJU computer (Indian Statistical Institute – Jadavpur University Computer). He was a Technical Adviser to the Union Public Service Commission, Government of India, from 1969 to 1976. He had several research publications in mathematics, theoretical physics, and computer science. He travelled on work to the United States, United Kingdom, Soviet Union, Switzerland, France, Czechoslovakia, and Afghanistan. He was a member of the Calcutta Mathematical Society, the Indian Association for the Cultivation of Science, the Association for Computing Machinery (US), and the Indian Statistical Institute, India. He was Professor Emeritus and chairman of the Calcutta Mathematical Society and Professor of the N.R. Sen Center for Pedagogical Mathematics. His other interests included translating Sanskrit books of scientific interest, such as the Vaisheshik Darshan by Maharishi Kanada, a Hindu sage and philosopher.

References

Further reading

Devaprasanna Sinha (August 2012). "Glimpsing through Early Days of Computers in Kolkata". Computer Society of India. pp. 5–6. Retrieved 17 November 2012.
"50 Years of IT: Disrupting Moments: 1956–1965: The Beginning". Dataquest magazine, India. 30 December 2006. Retrieved 18 November 2012. 1916 births 1998 deaths Scientists from Kolkata 20th-century Indian mathematicians University of Calcutta alumni
20301108
https://en.wikipedia.org/wiki/Escola%20de%20Administra%C3%A7%C3%A3o%20de%20Empresas%20de%20S%C3%A3o%20Paulo
Escola de Administração de Empresas de São Paulo
The Escola de Administração de Empresas de São Paulo da Fundação Getulio Vargas (EAESP FGV) (Fundação Getulio Vargas's São Paulo School of Business Administration) is a Brazilian private higher education institution founded in 1954 as the result of joint efforts by the Brazilian government and business community, with the objective of providing people with the skills needed to tackle the challenges Brazil was then facing. The school was established with the help of Michigan State University professors, who assisted in assembling its academic system. In partnership with Brazilian companies and governmental bodies, EAESP maintains 20 study and research centers and a junior enterprise, Empresa Júnior FGV, the first in Latin America. Among other degrees, EAESP offers four-year bachelor's degrees in business and public administration, MBA, MPA, and other master's degrees, as well as doctoral programs. In 2000, EAESP's undergraduate and graduate administration programs were accredited by the Association to Advance Collegiate Schools of Business (AACSB). One year later, in 2001, its learning activities received another international accreditation, the European Quality Improvement System (EQUIS). In 2004, two of EAESP's courses were accredited by the Association of MBAs (AMBA). EAESP is the only South American university with these three accreditations.

Research and Publications Center ("Núcleo de Pesquisas e Publicações" – NPP)

EAESP's Research and Publications Center, GVpesquisa, contains material for further research on topics in administration and related areas, such as economics, information technology, sociology, and psychology. Its business administration reviews include RAE, GV-Executivo, and RAE-eletrônica. RAE, published since 1961, is written and published by a team of authors made up of faculty and students from EAESP and other institutions. In addition to RAE, EAESP has published GV-executivo, with a focus on business administration, since 2002. Together with the Tuck School of Business at Dartmouth, Keio Business School in Tokyo, the School of Management of Fudan University in Shanghai, ESSEC Business School in France and Singapore, and the University of Mannheim in Germany, EAESP forged an alliance of leading business schools from all parts of the world in 2010, called the "Council on Business and Society". The Council on Business & Society convenes a biennial forum that combines the expertise of faculty members from each of the partner schools with that of representatives of business, government, and non-governmental organizations from around the world. The inaugural forum, held in Paris in November 2012, focused on corporate governance and leadership. The 2014 forum was hosted by Keio Business School in Tokyo and focused on health care delivery. The next edition will be hosted by Dartmouth in Boston and will focus on energy and the environment.

Centros de Estudos (Studying Centres)

The studying centres have publications on relevant topics in administration learning.
They are formed by:
Centro de Estudos em Planejamento e Gestão de Saúde – GVsaúde (Health Planning and Management Studies Center)
Centro de Administração Pública e Governo – CEAPG (Public Administration and Government Center)
Centro de Estudos em Ética nas Organizações – CENE (Organizational Ethics Studies Center)
Centro de Estudos de Administração e do Meio Ambiente – CEAMA (Administration and the Environment Studies Center)
Centro de Estudos em Cultura e Consumo – CECC (Culture and Consumption Studies Center)
Centro de Estudos de Lazer e Turismo – CELT (Leisure and Travel Studies Center)
Centro de Excelência Bancária – CEB (Banking Excellence Center)
Centro de Tecnologia de Informação Aplicada – CIA (Applied Information Technology Center)
Centro de Estudos do Terceiro Setor – CETS (Third Sector Studies Center)
Centro de Estudos em Finanças – CEF (Finance Studies Center)
Centro de Excelência em Varejo – CEV (Retail Excellence Center)
Centro de Estudos Estratégicos Internacionais – CEEI (Strategic International Studies Center)
Centro de Estudos em Sustentabilidade – CES (Sustainability Studies Center)
Centro de Estudos em Private Equity e Venture Capital – CEPE (Private Equity and Venture Capital Research Center)
Centro de Estudos em Tecnologia da Informação para Governo – TECiGOV (Governmental Information Technology Studies Center)
Centro de Empreendedorismo e Novos Negócios – CENN (Enterprise and New Business Center)
Centro de Estudos de Negócios da Propaganda – CENPRO (Advertising Business Studies Center)
Centro de Excelência em Logística e Cadeias de Suprimento – CELOG (Logistics and Supply Chain Excellence Center)
Centro de Estudos em Estratégia e Competitividade – CEEC (Strategy and Competitiveness Studies Center)
Centro de Estudos de Política e Economia do Setor Público – CEPESP (Public Sector Policy and Economy Studies Center)

References

External links
Fundação Getulio Vargas

Educational institutions established in 1954
Business schools in Brazil
1954 establishments in Brazil
10847539
https://en.wikipedia.org/wiki/Rathinam%20College%20of%20Arts%20and%20Science
Rathinam College of Arts and Science
Rathinam College of Arts and Science is a co-educational institution situated within the Rathinam Techzone Campus on Pollachi Main Road, Eachanari, Coimbatore, India. It is affiliated to Bharathiar University and recognized by the University Grants Commission (UGC). The college was established in 2001 by the Rathinam Arumugam Research and Education Foundation.

Rathinam Techzone

The management started a software park in 2002 within the college campus. Rathinam Software Park operates with 14 national and multinational IT/ITES companies. The management has also started a large-scale IT/ITES park under the Special Economic Zone (SEZ) scheme.

Founder

The founder chairman of the college, Dr. Madan A. Sendhil, is an NRI with a postgraduate degree in Computer Engineering from the University of Central Florida. He has also worked for US organizations such as Motorola, Image Soft Technologies, and Time Sys. He started Rathinam Educational Institutions and the Software Technology Park in 2001. Prof. R. Manickam is the Chief Executive Officer, and Rathinam College of Arts & Science is headed by Principal Dr. S. Mohandass.

Rankings

Among arts and science colleges in India, Rathinam College of Arts and Science was ranked 74th by India Today in 2019 and ranked No. 3 in male-female student ratio in 2020. It has held an NIRF ranking four years in a row, was ranked the 13th-best institution in Tamil Nadu by Education World, and was ranked 101st among top arts colleges in 2020 by Outlook.

Courses

Department of Commerce
B.Com.
B.Com. CA
B.Com. PA
B.Com. BPS
B.Com. Banking and Insurance (B&I)
B.Com. Information Technology (IT)
B.Com. Accounting & Finance (A&F)
B.Com. Financial Services (FS)
B.Com. Corporate Secretaryship (CS)
M.Com.
M.Com. CA

Department of Computer Science
B.Sc. Computer Science
B.C.A. (Computer Applications)
B.Sc. Information Technology
B.Sc. Computer Technology
M.Sc. Information Technology
M.Sc. Computer Science
M.Sc. Data Science and Business Analytics
M.Phil. in Computer Science
Ph.D. in Computer Science

Department of Mathematics
B.Sc. Mathematics
M.Phil. in Mathematics

Department of Management
B.B.A. (Bachelor of Business Administration)
M.B.A. (Master of Business Administration)

Department of Visual Communication
B.Sc. Visual Communication & e-Media
M.A. (Mass Communication and Journalism)

Department of Costume Design & Fashion
B.Sc. Costume Design & Fashion

Department of English
B.A. English Literature
M.A. English Literature

Department of Science
B.Sc. Physics

Department of Bio-Science
B.Sc. Microbiology
B.Sc. Biotechnology

Academics

The college has autonomous status and has its own Board of Study (BOS) team.

Sports

Along with contemporary sports such as football, cricket, volleyball, basketball, handball, kho-kho, and table tennis, Rathinam College of Arts & Science also promotes traditional sports like kabaddi, while traditional martial arts like Kalaripayattu and Silambam also find a place in the extra-curricular activities. The campus accommodates a cricket ground with training nets, a dedicated basketball court, and a gym.

Facilities

Classrooms are equipped with digital projectors and audio systems for interactive learning sessions. Separate hostels for boys and girls, a recreation centre, a gym, wash centres for hostel students, and a restaurant with multiple food courts are the major facilities. There is also a fully functioning Entrepreneur Development Cell (EDC) to motivate and create entrepreneurs.
Community radio station

A community radio station named Rathinavani 90.8 functions inside the campus and is maintained by the students.

Notable visitors to the college

Dr. K. Rosaiah – Governor of Tamil Nadu
Dr. Mylswamy Annadurai – ISRO scientist
Prof. H. Devaraj – Vice Chairman, University Grants Commission, New Delhi
Thiru. K. Baghya Raj – Director, film industry
Padma Shri Vivek – Actor, film industry
Mr. Karthik Raja – Composer, film industry
Dr. C. Sylendra Babu IPS – Commissioner of Police, Coimbatore

Student life

Along with periodic industrial visits, guest lectures, and seminars, a few more extracurricular activities are conducted by the management.
Rathinam Fest: Every year, a state-level cultural meet is organized under the name Rathinam Fest.
Tycoons: The Departments of Commerce and Management organize an intercollegiate cultural festival named "Tycoons", which also gives importance to curricular activities such as paper presentations, seminars, and quizzes.

Alumni

The Rathinam Alumni Association is an international organization with chapters throughout the world, connecting alumni in networking, social events, and fundraising.

See also
K.P.M. Trust

References

External links
Official website

Universities and colleges in Coimbatore
1242
https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29
Ada (programming language)
Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, extended from Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors at compile time rather than at run time. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The current standard, informally called Ada 2012, is ISO/IEC 8652:2012. Ada was originally designed by a team led by French computer scientist Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede the over 450 programming languages used by the DoD at that time. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer.

Features

Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP). Features of Ada include: strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch. The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as "or else" and "and then") to symbols (such as "||" and "&&"). Ada uses the basic arithmetical operators "+", "-", "*", and "/", but avoids using other symbols. Code blocks are delimited by words such as "declare", "begin", and "end", where the "end" (in most cases) is followed by the identifier of the block it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks this avoids a dangling else that could pair with the wrong nested if in other languages like C or Java. Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately from the implementation, to check for consistency. This makes it possible to detect problems early, during the design phase, before implementation starts. A large number of compile-time checks are supported, to help avoid bugs that would not be detectable until run time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) to be detected either at compile time or otherwise at run time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc., and can provide warnings and useful suggestions on how to fix the error.
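These points can be seen in a short sketch (the types and names below are hypothetical illustrations, not drawn from the standard library): two numeric types derived from Float are kept apart by the compiler, blocks are closed with named end delimiters, and the English short-circuit form "and then" stands in for symbolic operators:

with Ada.Text_IO; use Ada.Text_IO;

procedure Typing_Sketch is
   type Meters is new Float;   --  a distinct numeric type, not interchangeable with Float
   type Feet   is new Float;   --  a second, incompatible unit type
   Distance : Meters := 100.0;
   Altitude : Feet   := 3000.0;
begin
   --  Distance := Altitude;   --  rejected at compile time: Meters and Feet do not mix
   Distance := Distance + 1.0;                     --  legal: numeric literals are compatible
   if Distance > 0.0 and then Altitude > 0.0 then  --  short-circuit "and then"
      Put_Line ("Both measurements are positive");
   end if;                                         --  the block is closed with "end if"
end Typing_Sketch;

The commented-out assignment is exactly the kind of error the compiler reports before the program can ever run.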
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. The language also includes facilities to help program verification. For these reasons, Ada is widely used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury, or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military, and space technology. Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers, nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensure that an access value cannot outlive the type of the object it points to. Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada does support a limited form of region-based memory management; also, creative use of storage pools can provide a limited form of automatic garbage collection, since destroying a storage pool also destroys all the objects in the pool. A double dash ("--"), resembling an em dash, denotes comment text. Comments stop at the end of the line, so an unclosed comment cannot accidentally void whole sections of source code. Disabling a whole block of code therefore requires prefixing each line individually with "--". While this clearly marks disabled code, with a column of repeated "--" down the page, it makes the experimental disabling and re-enabling of large blocks a more drawn-out process. The semicolon (";") is a statement terminator, and the null or no-operation statement is null;. A single ; without a statement to terminate is not allowed. Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only for programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written. One notable free software tool used by many Ada programmers to aid them in writing Ada source code is GNAT Programming Studio, an IDE for GNAT, the Ada compiler that forms part of the GNU Compiler Collection.

History

In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming.
In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with the original Strawman proposal, the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996. The HOLWG working group crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. Requests for proposals for a new programming language were issued, and four contractors were hired to develop their proposals under the names of Red (Intermetrics, led by Benjamin Brosgol), Green (CII Honeywell Bull, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough), and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at CII Honeywell Bull, was chosen and given the name Ada, after Augusta Ada, Countess of Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, C. A. R. Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook. Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general-purpose programming, not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow, and tools primitive. Compiler vendors expended most of their efforts in passing the massive, government-required ACVC validation suite for language conformance testing, itself another novel feature of the Ada language effort. The Jargon File, a dictionary of computer hacker slang originating in 1975–1983, notes in an entry on Ada that "it is precisely what one might expect given that kind of endorsement by fiat; designed by committee...difficult to use, and overall a disastrous, multi-billion-dollar boondoggle...Ada Lovelace...would almost certainly blanch at the use her name has been latterly put to; the kindest thing that has been said about it is that there is probably a good small language screaming to get out from inside its vast, elephantine bulk." The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL.
Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix.

In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada.

By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to.

Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking. For example, the Airplane Information Management System, the fly-by-wire system software in the Boeing 777, was written in Ada. Developed by Honeywell Air Transport Systems in collaboration with consultants from DDC-I, it became arguably the best-known of any Ada project, civilian or military. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support (iFACTS) air traffic control system is designed and implemented using SPARK Ada. It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and in metro and suburban trains in Paris, London, Hong Kong and New York City.

Standardization
The language became an ANSI standard in 1983 (ANSI/MIL-STD 1815A) and, after translation into French and without any further changes in English, became an ISO standard in 1987 (ISO-8652:1987). This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes referred to also as Ada 87, from the date of its adoption by ISO.

Ada 95, the joint ISO/ANSI standard (ISO-8652:1995), was published in February 1995, making Ada 95 the first ISO-standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT compiler. Presently, the GNAT compiler is part of the GNU Compiler Collection.

Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007, was published on March 9, 2007. At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to the International Organization for Standardization (ISO) for approval. ISO/IEC 8652:2012 was published in December 2012.
Other related standards include ISO 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada.

Language constructs
Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, and enumerations. Such constructs were in part inherited from or inspired by Pascal.

"Hello, world!" in Ada
A common example of a language's syntax is the Hello world program (hello.adb):

with Ada.Text_IO; use Ada.Text_IO;
procedure Hello is
begin
   Put_Line ("Hello, world!");
end Hello;

This program can be compiled by using the freely available open source compiler GNAT, by executing

gnatmake hello.adb

Data types
Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted. Special types provided by the language are task types and protected types.

For example, a date might be represented as:

type Day_type   is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type  is range 1800 .. 2100;
type Hours      is mod 24;
type Weekday    is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);

type Date is
   record
      Day   : Day_type;
      Month : Month_type;
      Year  : Year_type;
   end record;

Types can be refined by declaring subtypes:

subtype Working_Hours is Hours range 0 .. 12;            -- at most 12 Hours to work a day
subtype Working_Day   is Weekday range Monday .. Friday; -- Days to work

Work_Load : constant array (Working_Day) of Working_Hours :=  -- implicit type declaration
   (Friday => 6, Monday => 4, others => 10);                  -- lookup table for working hours with initialization

Types can have modifiers such as limited, abstract, private, etc. Private types can only be accessed, and limited types can only be modified or copied, within the scope of the package that defines them. Ada 95 adds further features for object-oriented extension of types.

Control structures
Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exit are supported, so the "go to" command, though also supported, is seldom needed.

-- while a is not equal to b, loop.
while a /= b loop
   Ada.Text_IO.Put_Line ("Waiting");
end loop;

if a > b then
   Ada.Text_IO.Put_Line ("Condition met");
else
   Ada.Text_IO.Put_Line ("Condition not met");
end if;

for i in 1 .. 10 loop
   Ada.Text_IO.Put ("Iteration: ");
   Ada.Text_IO.Put_Line (Integer'Image (i));
end loop;

loop
   a := a + 1;
   exit when a = 10;
end loop;

case i is
   when 0 => Ada.Text_IO.Put ("zero");
   when 1 => Ada.Text_IO.Put ("one");
   when 2 => Ada.Text_IO.Put ("two");
   -- case statements have to cover all possible cases:
   when others => Ada.Text_IO.Put ("none of the above");
end case;

for aWeekday in Weekday'Range loop                 -- loop over an enumeration
   Put_Line (Weekday'Image (aWeekday));            -- output string representation of an enumeration
   if aWeekday in Working_Day then                 -- check of a subtype of an enumeration
      Put_Line (" to work for " &
                Working_Hours'Image (Work_Load (aWeekday))); -- access into a lookup table
   end if;
end loop;

Packages, procedures and functions
Among the parts of an Ada program are packages, procedures and functions.

Example: Package specification (example.ads)

package Example is
   type Number is range 1 .. 11;
   procedure Print_and_Increment (j : in out Number);
end Example;

Package body (example.adb)

with Ada.Text_IO;
package body Example is

   i : Number := Number'First;

   procedure Print_and_Increment (j : in out Number) is

      function Next (k : in Number) return Number is
      begin
         return k + 1;
      end Next;

   begin
      Ada.Text_IO.Put_Line ("The total is: " & Number'Image (j));
      j := Next (j);
   end Print_and_Increment;

-- package initialization executed when the package is elaborated
begin
   while i < Number'Last loop
      Print_and_Increment (i);
   end loop;
end Example;

This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing

gnatmake -z example.adb

Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block. Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.

Concurrency
Ada has language support for task-based concurrency. The fundamental concurrent unit in Ada is a task, which is a built-in limited type. Tasks are specified in two parts: the task declaration defines the task interface (similar to a type declaration), while the task body specifies the implementation of the task. Depending on the implementation, Ada tasks are either mapped to operating system threads or processes, or are scheduled internally by the Ada runtime.

Tasks can have entries for synchronisation (a form of synchronous message passing). Task entries are declared in the task specification. Each task entry can have one or more accept statements within the task body. If the control flow of the task reaches an accept statement, the task is blocked until the corresponding entry is called by another task (similarly, a calling task is blocked until the called task reaches the corresponding accept statement). Task entries can have parameters similar to procedures, allowing tasks to synchronously exchange data. In conjunction with select statements it is possible to define guards on accept statements (similar to Dijkstra's guarded commands).

Ada also offers protected objects for mutual exclusion. Protected objects are a monitor-like construct, but use guards instead of conditional variables for signaling (similar to conditional critical regions). Protected objects combine the data encapsulation and safe mutual exclusion from monitors, and the entry guards from conditional critical regions.
The main advantage over classical monitors is that conditional variables are not required for signaling, avoiding potential deadlocks due to incorrect locking semantics. Like tasks, the protected object is a built-in limited type, and it also has a declaration part and a body.

A protected object consists of encapsulated private data (which can only be accessed from within the protected object), and procedures, functions and entries which are guaranteed to be mutually exclusive (with the only exception of functions, which are required to be side-effect free and can therefore run concurrently with other functions). A task calling a protected object is blocked if another task is currently executing inside the same protected object, and released when this other task leaves the protected object. Blocked tasks are queued on the protected object ordered by time of arrival.

Protected object entries are similar to procedures, but additionally have guards. If a guard evaluates to false, a calling task is blocked and added to the queue of that entry; another task can then be admitted to the protected object, as no task is currently executing inside it. Guards are re-evaluated whenever a task leaves the protected object, as this is the only time when the evaluation of guards can have changed.

Calls to entries can be requeued to other entries with the same signature. A task that is requeued is blocked and added to the queue of the target entry; this means that the protected object is released and allows admission of another task.

The select statement in Ada can be used to implement non-blocking entry calls and accepts, non-deterministic selection of entries (also with guards), time-outs and aborts.

The following example illustrates some concepts of concurrent programming in Ada.

with Ada.Text_IO; use Ada.Text_IO;

procedure Traffic is

   type Airplane_ID is range 1 .. 10;              -- 10 airplanes

   task type Airplane (ID : Airplane_ID);          -- task representing airplanes, with ID as initialisation parameter
   type Airplane_Access is access Airplane;        -- reference type to Airplane

   protected type Runway is                        -- the shared runway (protected to allow concurrent access)
      entry Assign_Aircraft (ID : Airplane_ID);    -- all entries are guaranteed mutually exclusive
      entry Cleared_Runway (ID : Airplane_ID);
      entry Wait_For_Clear;
   private
      Clear : Boolean := True;                     -- protected private data - generally more than only a flag...
   end Runway;

   type Runway_Access is access all Runway;

   -- the air traffic controller task takes requests for takeoff and landing
   task type Controller (My_Runway : Runway_Access) is
      -- task entries for synchronous message passing
      entry Request_Takeoff (ID : in Airplane_ID; Takeoff : out Runway_Access);
      entry Request_Approach (ID : in Airplane_ID; Approach : out Runway_Access);
   end Controller;

   -- allocation of instances
   Runway1     : aliased Runway;                   -- instantiate a runway
   Controller1 : Controller (Runway1'Access);      -- and a controller to manage it

   ------ the implementations of the above types ------

   protected body Runway is
      entry Assign_Aircraft (ID : Airplane_ID)
         when Clear is          -- the entry guard - calling tasks are blocked until the condition is true
      begin
         Clear := False;
         Put_Line (Airplane_ID'Image (ID) & " on runway ");
      end;

      entry Cleared_Runway (ID : Airplane_ID)
         when not Clear is
      begin
         Clear := True;
         Put_Line (Airplane_ID'Image (ID) & " cleared runway ");
      end;

      entry Wait_For_Clear
         when Clear is
      begin
         null;                  -- no need to do anything here - a task can only enter if "Clear" is true
      end;
   end Runway;

   task body Controller is
   begin
      loop
         My_Runway.Wait_For_Clear;   -- wait until runway is available (blocking call)
         select                      -- wait for two types of requests (whichever is runnable first)
            when Request_Approach'Count = 0 =>  -- guard - only accept if there are no tasks queuing on Request_Approach
               accept Request_Takeoff (ID : in Airplane_ID; Takeoff : out Runway_Access)
               do                               -- start of synchronized part
                  My_Runway.Assign_Aircraft (ID);  -- reserve runway (potentially blocking call if protected object busy or entry guard false)
                  Takeoff := My_Runway;            -- assign "out" parameter value to tell airplane which runway
               end Request_Takeoff;             -- end of the synchronised part
         or
            accept Request_Approach (ID : in Airplane_ID; Approach : out Runway_Access) do
               My_Runway.Assign_Aircraft (ID);
               Approach := My_Runway;
            end Request_Approach;
         or                                     -- terminate if no tasks left who could call
            terminate;
         end select;
      end loop;
   end;

   task body Airplane is
      Rwy : Runway_Access;
   begin
      Controller1.Request_Takeoff (ID, Rwy);  -- This call blocks until Controller task accepts and completes the accept block
      Put_Line (Airplane_ID'Image (ID) & " taking off...");
      delay 2.0;
      Rwy.Cleared_Runway (ID);                -- call will not block as "Clear" in Rwy is now false and no other tasks should be inside protected object
      delay 5.0;                              -- fly around a bit...
      loop
         select                               -- try to request a runway
            Controller1.Request_Approach (ID, Rwy);  -- this is a blocking call - will run on controller reaching accept block and return on completion
            exit;                             -- if call returned we're clear for landing - leave select block and proceed...
         or
            delay 3.0;                        -- timeout - if no answer in 3 seconds, do something else (everything in following block)
            Put_Line (Airplane_ID'Image (ID) & " in holding pattern");  -- simply print a message
         end select;
      end loop;
      delay 4.0;                              -- do landing approach...
      Put_Line (Airplane_ID'Image (ID) & " touched down!");
      Rwy.Cleared_Runway (ID);                -- notify runway that we're done here.
   end;

   New_Airplane : Airplane_Access;

begin
   for I in Airplane_ID'Range loop            -- create a few airplane tasks
      New_Airplane := new Airplane (I);       -- will start running directly after creation
      delay 4.0;
   end loop;
end Traffic;

Pragmas
A pragma is a compiler directive that conveys information to the compiler to allow specific manipulation of compiled output. Certain pragmas are built into the language, while others are implementation-specific.
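As a minimal sketch of how such directives appear in source (Suppress and Inline are language-defined pragmas; the procedure and function names here are invented for illustration):

with Ada.Text_IO;

procedure Pragma_Demo is
   pragma Suppress (Range_Check);   -- permit the compiler to omit run-time range checks in this scope

   function Square (X : Integer) return Integer;
   pragma Inline (Square);          -- request that calls to Square be expanded inline

   function Square (X : Integer) return Integer is
   begin
      return X * X;
   end Square;
begin
   Ada.Text_IO.Put_Line (Integer'Image (Square (6)));
end Pragma_Demo;

Note that Suppress merely gives the compiler permission to omit the named checks, and Inline is advisory; an implementation is free to ignore either.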
Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code instead of a function call (as C/C++ does with inline functions).

Generics

See also
APSE – a specification for a programming environment to support software development in Ada
Ravenscar profile – a subset of the Ada tasking features designed for safety-critical hard real-time computing
SPARK (programming language) – a programming language consisting of a highly restricted subset of Ada, annotated with meta information describing desired component behavior and individual runtime requirements

References

International standards
ISO/IEC 8652: Information technology—Programming languages—Ada
ISO/IEC 15291: Information technology—Programming languages—Ada Semantic Interface Specification (ASIS)
ISO/IEC 18009: Information technology—Programming languages—Ada: Conformity assessment of a language processor (ACATS)
IEEE Standard 1003.5b-1996, the POSIX Ada binding
Ada Language Mapping Specification, the CORBA interface description language (IDL) to Ada mapping

Rationale
These documents have been published in various forms, including print; they are also available from apps.dtic.mil (PDF).

Archives
Ada Programming Language Materials, 1981–1990. Charles Babbage Institute, University of Minnesota. Includes literature on software products designed for the Ada language; U.S. government publications, including Ada 9X project reports, technical reports, working papers, newsletters; and user group information.

External links
Ada - C/C++ changer - MapuSoft
DOD Ada programming language (ANSI/MIL STD 1815A-1983) specification
JTC1/SC22/WG9 ISO home of Ada Standards

Programming languages
.NET programming languages
Avionics programming languages
High Integrity Programming Language
Multi-paradigm programming languages
Programming language standards
Programming languages created in 1980
Programming languages with an ISO standard
Statically typed programming languages
Systems programming languages
1980 software
High-level programming languages
1107856
https://en.wikipedia.org/wiki/Free%20Java%20implementations
Free Java implementations
Free Java implementations are software projects that implement Oracle's Java technologies and are distributed under free software licences, making them free software. Sun released most of its Java source code as free software in May 2007, so it can now almost be considered a free Java implementation. Java implementations include compilers, runtimes, class libraries, etc. Advocates of free and open source software refer to free or open source Java virtual machine software as free runtimes or free Java runtimes. Some advocates in this movement prefer not to use the term "Java", as it has trademark issues associated with it; hence, even though it is a "free Java movement", the term "free Java runtimes" is avoided by them.

Mid-1990s to 2006
The first free project to offer substantial parts of Java platform functionality was likely guavac, which began some time before November 1995. Since then, the free software movement developed other Java compilers, most notably the GNU Compiler for Java. Others include the Eclipse Java Compiler (ECJ), which is maintained by the Eclipse Foundation, and Jikes, which is no longer actively maintained. Since the GNU Compiler Collection's 4.3 release, GCJ (its Java compiler) uses the ECJ parser front-end for parsing Java.

Examples of free runtime environments include Kaffe, SableVM and gcj. GNU Classpath is the main free software class library for Java, and most free runtimes use it as their class library.

In May 2005, Apache Harmony was announced; however, the project chose the Apache License, which was at the time incompatible with all existing free Java implementations. Another event in May 2005 was the announcement that OpenOffice.org 2.0 would depend on Java features which free software implementations couldn't provide. Following controversy, OpenOffice.org adopted a guideline requiring it to work with free Java implementations.

Notable applications that already worked with free software Java implementations before November 2006 include OpenOffice.org and Vuze, both of which work when compiled with GCJ.

Sun's November 2006 announcement
On 13 November 2006, Sun released its compiler, javac, under the GNU General Public License. As of September 2007, as well as javac, Sun has released the code of HotSpot (the virtual machine) and almost all the Java Class Library as free software.

Following its promise to release a fully buildable JDK based almost completely on free and open source code in the first half of 2007, Sun released the complete source code of the Class library under the GPL on May 8, 2007, except for some limited parts that were licensed by Sun from third parties who did not want their code to be released under a free software licence. Sun has stated that it aims to replace the parts that remain proprietary and closed source with alternative implementations and to make the class library completely free and open source. Since there is some encumbered code in the JDK, Sun will continue to use that code in commercial releases until it is replaced by fully functional free and open-source alternatives.

After the May 2007 code release
As of May 2008, the only part of the Class library that remains proprietary (4% as of May 2007 for OpenJDK 7, and less than 1% as of May 2008 in OpenJDK 6) is the SNMP implementation.
Since the first May 2007 release, Sun Microsystems, with the help of the community, has released as free software (or replaced with free-software alternatives) almost all the encumbered code:
All the audio engine code, including the software synthesizer, has been released as open source. The closed-source software synthesizer has been replaced by a new synthesizer developed specifically for OpenJDK, called Gervill.
All cryptography classes used in the Class library have been released as free software.
FreeType has replaced the code that scales and rasterizes fonts.
LittleCMS has replaced the native color-management system. There is a pluggable layer in the JDK, so that the commercial version can use the old color management system and OpenJDK can use LittleCMS.
The open-sourced Pisces renderer used in the phoneME project has replaced the anti-aliasing graphics rasterizer code. This code is fully functional, but still needs some performance enhancements.
The JavaScript plugin has been open-sourced (the JavaScript engine itself was open-sourced from the beginning).

Because of these previously encumbered components, it was not possible to build OpenJDK using only free software components. In order to be able to do this before the whole class library was made free, and to be able to bundle OpenJDK in Fedora Core and other free Linux distributions, Red Hat started a project called IcedTea. It is basically an OpenJDK/GNU Classpath hybrid that can be used to bootstrap OpenJDK using only free software.

As of March 2008, the Fedora 9 distribution has been released with OpenJDK 6 instead of the IcedTea implementation of OpenJDK 7. Some of the stated reasons for this change are:
Sun has replaced most of the encumbrances for which IcedTea was providing replacements (less than 1% of encumbered code remains in the class library, and this code is not necessary to run OpenJDK).
OpenJDK 6 was a stable branch, whereas OpenJDK 7 was unstable and not expected to ship a stable release until 2009.
Sun has licensed the OpenJDK trademark for use in Fedora.

In June 2008, it was announced that IcedTea6 (as the packaged version of OpenJDK on Fedora 9) had passed the Technology Compatibility Kit tests and could claim to be a fully compatible Java 6 implementation.

In September 2013, Azul Systems released Zulu, a free, open source build of OpenJDK for Windows Server and the Microsoft Azure Cloud. Later releases added support for Mac OS X, multiple versions of Linux and the Java Platform, Standard Edition version 8. Zulu is certified compliant with Java SE 8, 7 and 6 using the OpenJDK Community Technology Compatibility Kit.

Amazon has released Amazon Corretto, a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit. It is released under GPL v2 with the Classpath Exception. Long-term support versions of Java 8 and Java 11 are available. It was first publicly released on 31 January 2019.

See also
Java (software platform)
Javac
HotSpot
Apache Harmony
OpenJDK
GNU Classpath, GCJ, and GIJ
IcedTea
JamVM
IKVM.NET
List of Java virtual machines
Comparison of Java virtual machines

References

External links
Free But Shackled - The Java Trap
Escaping the Java Trap: A practical road map to the Free Software and Open Source alternatives
Hybrids Combine GNU Classpath and OpenJDK
Hour-long 2007 video of a workshop with Sun, GCJ, and GNU Classpath developers
Java Trademark Issues

Java virtual machine
Free virtualization software
50368633
https://en.wikipedia.org/wiki/Security%20First%20Corp
Security First Corp
Security First Corp is an information assurance and data security company based in Rancho Santa Margarita, California. The company holds over 250 patents for software-defined data security, including its Secure Parser Extended (SPx) technology, which encrypts and randomly splits data into multiple segments, storing them in different locations. This technique is also called bit-splitting.

History
Security First Corp was founded in 2002 by Mark O'Hare, a 26-year Navy veteran who had served as Program Executive Officer of the US Navy Aircraft Carrier Program. In 2008, information technology company Unisys integrated Security First's Secure Parser technology into its Stealth brand software for Windows servers and desktops. In June 2009, the company acquired Silicon Valley-based DRC Computer Corporation (DRC), a developer of acceleration coprocessors. Security First was reportedly using DRC's products for an information security appliance, and planned to operate DRC as a wholly owned subsidiary. In August 2011, IBM announced it was integrating three of Security First's cryptographic technologies into its next generation of chips to increase their security. In December 2014, Security First released SPxSHARC for VMware's vCenter Server, running on VMware's ESXi hypervisor. In 2015, IBM announced it was using Security First's SPxBitFiler-IPS encryption technology to allow IBM's PureApplication System virtual pattern deployers to encrypt on-disk data. The technology is also licensed by IBM for its Cloud Data Encryption Service (ICDES).

Products
Security First's core product is its Secure Parser Extended (SPx) technology, which encrypts data, scrambles it randomly and disperses it to different locations. The technology combines AES-256 certified encryption, multi-factor secret sharing with keyed information dispersal, and cryptographic random bit-splitting. The solution is compliant with common government and industry data protection standards and security requirements. The company offers SPx SHARC, a security suite designed for multi-site data protection; SPx Gateway, a data protection solution designed to protect data stored across multiple cloud computing service providers, a process the company calls "Cloud Spanning"; and ParsedCloud, a file transfer application that encrypts, splits and transfers data between multiple sites, available in free and fee-based versions.

Media coverage
In December 2014, former Apple CEO John Sculley was interviewed on Fox News and called Security First's bit-splitting technology a "gamechanger".

Funding
In a December 2014 SEC filing, the company announced it had raised $29 million from sales of debt and equity to undisclosed investors. In an April 2016 SEC filing, the company announced it had raised an additional $36 million from sales of debt and equity, also from undisclosed investors.

Subsidiaries
The company operates DRC Computer Corporation as a wholly owned subsidiary.

References

External links

Companies based in Orange County, California
2002 establishments in California
Information governance
Data security
3315647
https://en.wikipedia.org/wiki/ETRAX%20CRIS
ETRAX CRIS
The ETRAX CRIS is a RISC ISA and series of CPUs designed and manufactured by Axis Communications for use in embedded systems since 1993. The name is an acronym of the chip's features: Ethernet, Token Ring, AXis - Code Reduced Instruction Set. Token Ring support has been dropped from the latest chips as the technology has become obsolete.

Types of chips
The first Axis chip with an embedded microcontroller was the CGA-1 (Coax Gate Array), which contained both IBM 3270 (coax) and IBM 5250 (twinax) communications. It also had a small microcontroller and various I/O interfaces, including serial and parallel. The CGA-1 chip was designed by Martin Gren, and the bug-fixed CGA-2 by Martin Gren and Staffan Göransson.

ETRAX
In 1993, with the introduction of 10 Mbit/s Ethernet and Token Ring controllers, the name ETRAX was born. The ETRAX-4 had improved performance over previous models and an SCSI controller. The ETRAX 100 features a 10/100 Mbit/s Ethernet controller along with ATA and Wide SCSI support.

ETRAX 100LX
In 2000, the ETRAX 100LX design added an MMU, as well as USB, synchronous serial and SDRAM support. Its CPU performance was raised to 100 MIPS. Since it has an MMU, it could run the Linux kernel without modifications (low-level support for the ETRAX CPU had to be added). As of Linux kernel 4.17 the architecture has been dropped due to being obsolete.

Main characteristics:
A 32-bit RISC CPU core
10/100 Mbit/s Ethernet controller
4 asynchronous serial ports
2 synchronous serial ports
2 USB ports
2 parallel ports
4 ATA (IDE) ports
2 Narrow SCSI ports (or 1 Wide)
Support for SDRAM, Flash, EEPROM, SRAM

The device comes in a 256-pin Plastic Ball Grid Array (PBGA) package.

ETRAX 100LX MCM
This system-on-a-chip is an ETRAX 100LX plus flash memory, SDRAM, and an Ethernet PHYceiver. Two versions were commercialized: the ETRAX 100LX MCM 2+8 (2 MB flash, 8 MB SDRAM) and the ETRAX 100LX MCM 4+16 (4 MB flash, 16 MB SDRAM).

ETRAX FS
Designed in 2005, and with full Linux 2.6 support, this chip features:
A 200 MIPS, 32-bit pipelined RISC CRIS CPU core with 16 kB data and 16 kB instruction caches and an MMU
Two 10/100 Mbit/s Ethernet controllers
A crypto accelerator supporting AES, DES, Triple DES, SHA-1 and MD5
128 kB on-chip RAM
A microprogrammable I/O processor, supporting PC-Card, CardBus, PCI, USB FS/HS host, USB FS device, SCSI and ATA

The device comes in a 256-pin Plastic Ball Grid Array package and uses 465 mW power (typical).

Development tools

Software
An SDK (along with a cross-compiler) is provided by Axis on the development site.

Hardware
Several hardware manufacturers offer developer boards: circuit boards featuring an ETRAX chip and all the necessary I/O ports to develop (or even deploy) applications. These include:
Axis Communications AXIS 82 developer board
Embedded Linux PC from ipcas
ACME Systems' FOX board
Elphel Reconfigurable Network Camera based on ETRAX FS and Xilinx Spartan 3e FPGA
Free2move's embedded Linux system
Rcotel Corporation's single board Linux computer
DSP&FPGA's industrial control unit
BBDevice.com remote control systems

Operating system support
In April 2018 it was announced that Linux would stop supporting this architecture.

References

External links
Home page
Developers wiki

Embedded microprocessors
44970841
https://en.wikipedia.org/wiki/William%20Newman%20%28computer%20scientist%29
William Newman (computer scientist)
William Maxwell Newman (21 May 1939 – 11 June 2019) was a British computer scientist. With others at the Xerox Palo Alto Research Center in the 1970s, Newman demonstrated the advantages of the raster (bitmap graphics) display technology first deployed in the Xerox Alto personal workstation, developing interactive programs for producing illustrations and drawings. With Bob Sproull he co-authored the first major textbook on interactive computer graphics. Newman later contributed to the field of human–computer interaction, publishing several papers and a book taking an engineering approach to the design of interactive systems. He was an honorary professor at University College London and taught at Harvard, Queen Mary College London, the University of California at Irvine, the University of Utah, the Technische Universität Darmstadt, and the University of Cambridge, and became an ACM SIGCHI Academy member in 2004.

Early life
Newman was born on 21 May 1939 at Comberton, near Cambridge, England. He was the second son of Max Newman, the distinguished mathematician and World War II codebreaker who worked at Bletchley Park, Manchester University and Cambridge University. William's mother was Lyn Irvine, a writer linked with the Bloomsbury Group.

For many years William was unaware of his father's important work at the Bletchley Park WWII codebreaking centre, because it was protected under the Official Secrets Act until at least the mid-1970s. Nevertheless, Alan Turing was a firm family friend, as was Albert Einstein, and a Monopoly board devised by William with Turing in 1950 was retrieved in 2011, following a visit to his family home with his son, daughter and future daughter-in-law, and later repackaged and sold by Bletchley Park. In later life he took a keen interest in his father's role there, contributing items to the Bletchley Park Museum and elsewhere. He also invested similar energy into his mother's creative output, collating and publishing letters between his mother and Lady Esher; these letters are available to view at St John's College, Cambridge, although she was an alumna of Girton College.

Growing up, the Newmans were very close family friends of the Penrose family, including Roger Penrose and Lionel Penrose; following his mother's death from cancer in 1973, his father married Margaret Penrose, Lionel Penrose's widow. As his mother Lyn Irvine was raised in Aberdeen with family across the Scottish Borders, William spent much time in the Highlands, particularly in Torridon and Applecross.

Education
He attended Manchester Grammar School before studying Architecture and Engineering at St John's College, Cambridge, obtaining a BA with first-class honours in 1961. His first contact with computers came in the mid-1960s when he joined others developing early CAD applications on the PDP-7 computer installed at the Cambridge Computer Laboratory. This PDP-7 was one of the first computers in the United Kingdom equipped with a vector-graphics display.

Research and career
Newman completed a PhD in computer graphics at Imperial College London in 1968 under the supervision of Professor Bill Elliott. For his PhD project he produced the Reaction Handler, a system for organising the elements of a graphical user interface that is often referred to as the first user interface management system (UIMS). He then joined Ivan Sutherland's research team developing software for interactive computer graphics systems, first at Harvard and then the University of Utah.
He then held teaching and research positions at Queen Mary College London, the University of California at Irvine and the University of Utah. Between 1973 and 1979, Newman worked at the Xerox Palo Alto Research Center (Xerox PARC), where he was involved in the development of several of the software components for the Alto, Xerox's pioneering personal computer. He independently developed Markup (1975), an early interactive drawing (paint) program. With Bob Sproull he developed Press, a page description language for printers that was a precursor to PostScript; and with Timothy Mott he developed Officetalk Zero, a prototype office system. All of them saw use in early versions of the Alto system. Markup included what was almost certainly the first instance of the use of pop-up menus. (Further details on Markup and Press can be found in the Alto User's Handbook.)

In 1973, Newman and Bob Sproull published Principles of Interactive Computer Graphics; a second edition was published in 1979. This was the first comprehensive textbook on computer graphics and was regarded as the graphics "bible" until it was succeeded by Foley and van Dam's Computer Graphics: Principles and Practice.

Newman went on to manage a research team at the Xerox Research Centre Europe, Cambridge, UK. With Margery Eldridge and Mik Lamming he pursued a research project in Activity-Based Information Retrieval (AIR). The basic hypothesis of the project was that if contextual data about human activities can be automatically captured and later presented as recognisable descriptions of past episodes, then human memory of those past episodes can be improved.

With his wife Karmen Guevara, he founded a company in 1986, Beta Chi Design, which was instrumental in introducing human–computer interaction and user-centred design practice to the UK through workshops held across the country, drawing on expertise gained while working with Xerox PARC.

Newman subsequently undertook research in human–computer interaction with the aim of identifying measurable parameters that characterise the quality of interaction. He developed an approach based on critical parameters for designing interactive systems that deliver tangible performance improvements to the user. In 1995 he published the textbook Interactive System Design with Mik Lamming, incorporating those ideas.

After leaving Xerox, Newman worked as a consultant, advising a number of organisations on interactive systems design. He was also an honorary professor at University College London, lecturing at its Interaction Centre (UCLIC), at Cambridge University and at the Technische Universität Darmstadt.

Personal life
While lecturing in computer science at the University of California, Irvine, Newman met and married Karmen Guevara; they had two children, Damien Newman (1972) and Chantal Guevara (1975). The marriage ended in divorce. He subsequently married Anikó Anghi. In 2009, William suffered an arrhythmic attack, triggering vascular dementia. He spent his later years in a care home on the outskirts of Cambridge.

References

External links
CHM Live │ Yesterday's Computer of Tomorrow: The Xerox Alto, Bob Sproull demonstrating William's Markup program. Retrieved 22 July 2019.
Larry Tesler, "A Personal History of Modeless Text Editing and Cut/Copy-Paste", ACM Interactions, July 2012, vol. 19, no. 4, p. 70. Refers to the influence of the pop-up action icons that Newman used in Markup on the evolution of the right-click contextual menus that are commonplace today.
Vimeo | William Newman: Computer Aided Illustration | William Newman demonstrates Markup graphics software on a Xerox Alto computer at PARC in 1976. Retrieved 29 August 2019.
The Guardian | William Newman obituary, 27 August 2019. Retrieved 29 August 2019.

1939 births
2019 deaths
People from Comberton
People educated at Manchester Grammar School
Alumni of St John's College, Cambridge
Alumni of Imperial College London
English emigrants to the United States
British expatriates in the United States
Xerox people
Harvard University staff
University of California, Irvine faculty
University of Utah faculty
Academics of Queen Mary University of London
Academics of University College London
English computer scientists
Computer graphics researchers
5441737
https://en.wikipedia.org/wiki/Computer%20Bismarck
Computer Bismarck
Computer Bismarck is a computer wargame developed and published by Strategic Simulations, Inc. (SSI) in 1980. The game is based on the last battle of the battleship Bismarck, in which British forces pursued the German ship in 1941. It is SSI's first game, and features turn-based gameplay and two-dimensional graphics. The development staff consisted of two programmers, Joel Billings and John Lyons, who programmed the game in BASIC. Originally developed for the TRS-80, an Apple II version was also created two months into the process. After meeting with other wargame developers, Billings decided to publish the game as well. To help accomplish this, he hired Louis Saekow to create the box art. The first commercially published computer war game, Computer Bismarck sold well and contributed to SSI's success. It is also credited in part for legitimizing war games and computer games.

Synopsis
The game is a simulation of the German battleship Bismarck's last battle in the Atlantic Ocean during World War II. On May 24, 1941, Bismarck and the heavy cruiser Prinz Eugen sank the British battlecruiser HMS Hood and damaged the battleship HMS Prince of Wales at the Battle of the Denmark Strait. Following the battle, British Royal Navy ships and aircraft pursued Bismarck for two days. After being crippled by a torpedo bomber on the evening of May 26, Bismarck was sunk the following morning.

Gameplay
Computer Bismarck is a turn-based computer wargame in which players control British forces against the battleship Bismarck and other German units. The German forces can be controlled by either a computer opponent (named "Otto von Computer") or a second player. The game takes place on a map of the North Atlantic Ocean on which letters from the English alphabet represent military units and facilities (airfields and ports). Units have different capabilities, as well as statistics that determine their mobility, firepower, vulnerability and other gameplay factors. Turns take the form of phases, and players alternate inputting orders to maneuver their respective units. Phases can serve different functions, such as informing players of status changes, unit movement, and battles. Players earn points by destroying their opponent's units. After the Bismarck is sunk or a number of turns have occurred, the game ends. Depending on the number of points players have earned, either the British or German forces are declared the victor.

Development
During college, Joel Billings used computers to do econometrics, mathematical modeling and forecasting. This experience led him to believe that computers could handle war games and remove tedious paperwork from gameplay. While between his undergraduate and graduate education, Billings met an IBM programmer and discussed computers. Billings suggested starting a software company with him, but the programmer was not interested in war games, stating that they were too difficult and complicated to be popular. Billings posted flyers at hobby shops in the Santa Clara, California area to attract war-game enthusiasts with a background in programming. John Lyons was the first to reply and joined Billings after quickly developing a good rapport. Billings chose Bismarck's last battle because he felt it would be easier to develop than other war games. Computer Bismarck was written in BASIC and compiled to increase its processing speed. In August 1979, Billings provided Lyons with access to a computer to write the program. Lyons began programming a simplified version similar to a fox and hounds game—he had "hounds" search a playing field for a "fox".
At the time, the two were working full-time and programmed at Billings' apartment during the night. Lyons did the bulk of the programming, while Billings focused on design and assisted with data entry and minor programming tasks. The game was originally developed for the Tandy Corporation's TRS-80. Two months into development, Billings met with Trip Hawkins, then a marketing manager at Apple Computer, via a venture capitalist; Hawkins convinced Billings to develop the game for the Apple II, commenting that the computer's capacity for color graphics made it the best platform for strategy games. In October 1979, Billings' uncle gave him an Apple II. Billings and Lyons then converted their existing code to work on the Apple II and used a graphics software package to generate the game's map.

After Lyons began programming, Billings started to study the video game market. He visited local game stores and attended a San Francisco gaming convention. Billings approached Tom Shaw from Avalon Hill—the company produced many war games that Billings played as a child—and one of the founders of Automated Simulations to share market data, but aroused no interest. The lukewarm responses made Billings believe he would have to publish SSI's games himself. After Computer Bismarck was finished in January 1980, he searched for a graphic designer to handle the game's packaging. Billings met Louis Saekow through a string of friends but was hesitant to hire him. Inspired by Avalon Hill's games, Billings wanted SSI's games to look professional and include maps, detailed manuals, and excellent box art. Two months prior, Saekow had postponed medical school to pursue his dream of becoming a graphic designer. To secure the job, Saekow told Billings that he could withhold pay if the work was unsatisfactory. In creating the box art, Saekow used a stat camera; his roommate worked for a magazine company and helped him sneak in to use its camera after hours. Saekow's cousin then handled printing the packaging.

Without any storage for the complete products, Billings stored the first copies in his bedroom. In February 1980, he distributed the game to Apple II owners, and displayed it at the Applefest exposition a month later. SSI purchased a full-page advertisement for the Apple II version in the March 1980 issue of BYTE magazine, which mentioned the ability to save a game in progress as well as play against the computer or another person. The advertisement also promised future support for the TRS-80 and other computers.

Reception and legacy
In 1980, Peter Ansoff of BYTE magazine called Computer Bismarck a "milestone in the development of commercial war games", and approved of the quality of the documentation and the option to play against the computer, but disapproved of the game. Acknowledging that "it is perhaps unfair to expect the first published [computer war game] to be a fully developed product", he criticized Computer Bismarck for overly faithfully copying the mechanics of the Bismarck board game, including those that worked efficiently on a board but less so on a computer. Ansoff also noted that the computer game "perpetuates the [board game's] irritating system of ship-movement rates", and concluded that "the failings of Computer Bismarck can be summarized by saying that it does not take advantage of the possibilities offered by the computer".

The game was better received by other critics. Neil Shapiro of Popular Mechanics that year praised the game's detail and ability to recreate the complex maneuvering involved in the real battle.
He referred to it as unique and "fantastic". In Creative Computing, Randy Heuer cautioned that the game "is probably not for everyone. The point which I probably cannot emphasize enough is that it is an extremely complex simulation ... However, for those ready for a [challenge] ... I enthusiastically recommend Computer Bismarck". Reviewing Computer Bismarck in The Space Gamer magazine, Joseph T. Suchar called the game "superb" and stated that "it has so many strategic options for both sides that it is unlikely to be optimized."

United States Navy defense researcher Peter Perla in 1990 considered war games like Computer Bismarck a step above earlier war-themed video games that relied on arcade-style action. He praised the addition of a computer-controlled opponent that such games provide to solitaire players. Perla attributed SSI's success to the release of its early wargames, specifically citing Computer Bismarck. Computer Gaming World's Bob Proctor in 1988 agreed that Computer Bismarck contributed to SSI's success, commenting that the title earned the company a good profit. He also stated that it encouraged game enthusiasts to submit their own games to SSI, which he believed helped further the company's success. Describing it as the first "serious wargame for a microcomputer", Proctor credited Computer Bismarck with helping to legitimize war games and computer games in general. He stated that the professional packaging demonstrated SSI's seriousness about producing quality products; prior to Computer Bismarck, most computer games were packaged in zipper storage bags. Saekow became a permanent SSI employee and designed artwork for most of its products.

Ansoff noted the similarity of the game's mechanics to Avalon Hill's Bismarck, stating that "it would seem proper as a matter of courtesy to acknowledge that the game was based on an Avalon Hill design". In 1983, Avalon Hill took legal action against SSI for copying game mechanics from its board games; Computer Bismarck, among other titles, was involved in the case. The two companies settled the issue out of court.

The game was later re-released as part of the company's "SSI classics" line of popular games at discounted prices. One of SSI's later games, Pursuit of the Graf Spee, uses an altered version of Computer Bismarck's core system. In December 2013 the International Center for the History of Electronic Games received a donation of several SSI games, including Computer Bismarck and its source code, for preservation.

References

External links
The Battleship Bismarck

1980 video games
Apple II games
Naval video games
World War II video games
Strategic Simulations games
German battleship Bismarck
Operation Rheinübung
TRS-80 games
Video games developed in the United States
Computer wargames
23700552
https://en.wikipedia.org/wiki/Trojan%20Armoured%20Vehicle%20Royal%20Engineers
Trojan Armoured Vehicle Royal Engineers
The Trojan Armoured Vehicle Royal Engineers (AVRE) is a combat engineering vehicle of the British Army. It is used to breach minefields and for many other tasks, and is currently in use with the Royal Engineers.

Design
The Trojan Armoured Vehicle Royal Engineers is based on a Challenger 2 tank chassis, but lacks the main armament. In place of the turret, it has a large hydraulic excavator arm, which can be used to excavate areas, move obstacles, and deposit the fascine that the Trojan carries at its rear. The Trojan is usually also fitted with a plough on the front, which enables it to clear mines, either detonating them on contact or pushing them out of the way to clear a safe channel for following vehicles. For self-defence only, it carries a 7.62 mm machine gun.

For rapid mine-clearing purposes, the Trojan can also tow a trailer carrying the Python, a rocket-propelled hose similar to the Giant Viper, which allows for a much quicker way of clearing a path for ground forces. The hose, packed with explosive, is launched across a minefield and detonates after it hits the ground, clearing a path 7 metres wide and 230 metres long.

History
The vehicles were built at BAE Systems Land Systems' plant in Newcastle upon Tyne. The contract was awarded in 2001 to Vickers Defence Systems, who were bought by BAE Systems in 2004. The project was known as the Future Engineer Tank. 33 have been built. The Trojan was first used on exercise in May 2007 with the 1st Battalion (Mechanised) of the Duke of Lancaster's Regiment. A number of Trojans are permanently based in Canada at British Army Training Unit Suffield in order to allow the Royal Engineers to support armoured battle groups on major exercises.

Trojans were first deployed operationally by the British Army to Afghanistan in 2009, engaging in their first advance under contact in 2010. During Operation Moshtarak, 28 Engineer Regiment operated the Trojan, attempting to use its traditional mine-clearance equipment in the counter-IED role in support of a major British Army advance.

Its companion vehicle, developed at the same time, is another variant of the Challenger 2: the Titan armoured bridge layer, of which 33 have also been built.

References

External links

Royal Engineers
Military engineering vehicles of the United Kingdom
Mine warfare countermeasures
BAE Systems land vehicles
Vehicles introduced in 2007
Military vehicles introduced in the 2000s
1237399
https://en.wikipedia.org/wiki/CPanel
CPanel
cPanel is web hosting control panel software developed by cPanel, LLC. It provides a graphical interface (GUI) and automation tools designed to simplify the process of hosting a web site for the website owner or "end user". It enables administration through a standard web browser using a three-tier structure. While cPanel is limited to managing a single hosting account, cPanel & WHM allows the administration of the entire server. In addition to the GUI, cPanel also has command-line and API-based access that allows third-party software vendors, web hosting organizations, and developers to automate standard system administration processes.

cPanel & WHM is designed to run on either a dedicated server or a virtual private server. The latest cPanel & WHM version supports installation on CentOS, Red Hat Enterprise Linux (RHEL), CloudLinux OS, and Ubuntu. cPanel 11.30 is the last major version to support FreeBSD.

History
cPanel is currently developed by cPanel, L.L.C., a privately owned corporation headquartered in Houston, Texas, United States. It was originally designed in 1996 as the control panel for Speed Hosting, a now-defunct web hosting company. The original author of cPanel, John Nick Koston, had a stake in Speed Hosting. Webking quickly began using cPanel after its merger with Speed Hosting. The new company moved its servers to Virtual Development Inc. (VDI), a now-defunct hosting facility. Following an agreement between Koston and VDI, cPanel was only available to customers hosted directly at VDI. At the time there was little competition in the control panel market, with the main choices being VDI and Alabanza.

Eventually, because Koston was leaving for college, he and William Jensen signed an agreement under which cPanel was split into a separate program called WebPanel; this version was run by VDI. Without the lead programmer, VDI was not able to continue any work on cPanel and eventually stopped supporting it completely. Koston kept working on cPanel while also working at BurstNET. Eventually, he left BurstNET to focus fully on cPanel. cPanel 3 was released in 1999; its main additions over cPanel 2 were an automatic upgrade facility and the Web Host Manager (WHM). The interface was also improved when Carlos Rego of WizardsHosting made what became the default theme of cPanel.

On August 20, 2018, cPanel, L.L.C. announced that it had signed an agreement to be acquired by a group led by Oakley Capital (who also own Plesk and SolusVM). While Koston sold his interest in cPanel, he will continue to be an owner of the company that owns cPanel.

Add-ons
cPanel provides front-ends for a number of common operations, including the management of PGP keys, crontab tasks, mail and FTP accounts, and mailing lists. Several add-ons exist, some for an additional fee, including auto-installers such as Installatron, Fantastico, Softaculous, and WHMSonic (a SHOUTcast/radio control panel add-on). The add-ons need to be enabled by the server administrator in WHM to be accessible to the cPanel user.

WHM manages some software packages separately from the underlying operating system, applying upgrades to Apache, PHP, MySQL, Exim, FTP, and related software packages automatically. This ensures that these packages are kept up to date and compatible with WHM, but makes it more difficult to install newer versions of these packages. It also makes it difficult to verify that the packages have not been tampered with, since the operating system's package management verification system cannot be used to do so.
WHM
WHM, short for WebHost Manager, is a web-based tool used for server administration. There are at least two tiers of WHM, often referred to as "root WHM" and non-root WHM (or reseller WHM). Root WHM is used by server administrators, while non-root WHM (with fewer privileges) is used by others, such as departments within an organization and resellers, to manage hosting accounts (often referred to as cPanel accounts) on a web server. WHM is also used to manage SSL certificates (both server self-generated and CA-provided SSL certificates), cPanel users, hosting packages, DNS zones, themes, and authentication methods. The default automatic SSL (AutoSSL) provided by cPanel is powered by Sectigo (formerly Comodo CA). Additionally, WHM can be used to manage the FTP, mail (POP, IMAP, and SMTP) and SSH services on the server.

As well as being accessible by the root administrator, WHM is also accessible to users with reseller privileges. Reseller users of cPanel have a smaller set of features than the root user, generally limited by the server administrator to features which they determine will affect their customers' accounts rather than the server as a whole. From root WHM, the server administrator can perform maintenance operations such as upgrading and recompiling Apache and PHP, installing Perl modules, and upgrading RPMs installed on the system.

Enkompass
A version of cPanel & WHM for Microsoft Windows, called Enkompass, was declared end-of-life as of February 2014. Version 3 remained available for download, but without further development or support. In the preceding years, Enkompass had been available for free as product development slowed.

Pricing
On June 27, 2019, cPanel announced a new account-based pricing structure. After backlash from its customers, cPanel issued a second announcement but did not change the new structure.

See also
Comparison of web hosting control panels

References

External links

Web applications
Website management
Web server management software
Perl software
1996 software
47496131
https://en.wikipedia.org/wiki/Syntorial
Syntorial
Syntorial is synthesizer-teaching software created by Audible Genius, a company owned by website programmer, musician and teacher Joe Hanley. He was inspired to make the program by his frustration with learning synthesis early in his career, and wanted to create something that would train the user to design a patch by ear. Kickstarter-funded in 2012, the program was officially released for Microsoft Windows and OS X on August 27, 2013, and for the iPad on June 25, 2015. The synth built into the software is called Primer, which was released as a VST and AU plugin in November 2013. Syntorial garnered critical acclaim, with reviewers praising it as a fun way to learn synthesis, and it earned an Editors' Choice Award from Electronic Musician in 2014. The latest version of Syntorial is 1.6.1, released on August 4, 2015. Features Syntorial includes a total of 199 lessons and 129 interactive challenges, in which the user programs sounds using the built-in Primer synth. Each lesson starts with a video lecture teaching a control or a group of controls, followed by a challenge: a patch is heard, but the user is not shown how it is programmed, and must try to recreate the hidden patch by ear. After programming the sound, the user submits the patch and is shown which controls were set correctly and which were not. Once the user corrects the mistakes, they can retry the challenge or move on to the next lesson. As the user progresses, more controls are added to each topic. A total of 39 quizzes are included in between lessons. A user who finishes all the lessons will have programmed 706 patches. Syntorial uses the controls and features most common in synthesizers, including subtractive synthesis, three oscillators, saw, pulse, triangle and sine waves, an FM parameter, a noise oscillator, oscillator sync, a band-pass filter with resonance and key tracking, ADSR envelopes, an AD modulation envelope, an LFO, monophonic and polyphonic voice modes, portamento, unison with voice, detune and spread controls, ring modulation, distortion, chorus, phaser, delay, reverb, mod wheel, pitch wheel and velocity. Development A 2003 graduate of the Berklee College of Music, Joe Hanley had been a professional musician for 17 years, a teacher for nine years, and a composer for six years before he began work on Syntorial. He was inspired to make Syntorial by his struggles with learning musical synthesis. After making the prototype, Hanley crowd-funded Syntorial on Kickstarter from August 23 to September 22, 2012. The campaign raised $8,614, exceeding its $5,500 goal. The money was used to hire graphic and web designers, to buy JUCE and other development-related programs, and to pay for some startup business expenses. Hanley said on the campaign's main page that he planned to finish the program by March 2013. A beta was released on June 15, 2013, and the first official version of Syntorial came out on August 27. The day after Syntorial's first official release, Hanley announced that he was working on VST and AU versions of Primer. A beta of the VST and AU was released on October 14, 2013, and the first official version was distributed on November 5, 2013.
Version 1.2 of Syntorial, released on September 25, 2013, added 50 presets to the synth and the ability to save user files, significantly tweaked the scoring system and lowered the audio response level, among minor bug fixes. Hanley also announced that day that all the presets in Primer would be recreated for other software synths, including Analog by Ableton, Logic Pro's ES2, Thor from Reason, the Wasp XT included in FL Studio, and TAL-NoiseMaker. The program was updated to 1.3.1 on February 28, 2014, adding the ability to skip lessons and modifying several patches to get rid of "buried" parameters. Primer was also updated to 1.1 that day, adding a MIDI control mapping feature for customization of the interface. A pack of 37 additional video lessons totaling three hours and 17 minutes, titled the Z3TA+ 2 Lesson Pack, was released on May 30, 2014. On July 22, another pack of lessons, titled the Minimoog Voyager Lesson Pack, was issued, with 34 more videos totaling two hours and 22 minutes. Version 1.5.1 of Syntorial was released on October 15, 2014; it added the ability to save progress partway through a challenge, adjust the audio response volume, and share challenge results on Twitter and Facebook directly from the software, and it overhauled the scoring system. Many tweaks were also made to the replay system in this version. On June 25, 2015, Syntorial was released for the iPad. The current version of Syntorial, as of August 4, 2015, is 1.6.1, with new features including a hint button for group challenges and the ability to save "favorite patches" during challenges. Critical reception Syntorial was met with critical acclaim upon release, with many reviewers praising it as the most enjoyable way to learn a synthesizer. Tom Brislin headlined his review for Keyboard magazine by calling it "The Most Fun Way To Learn Synth Programming". He did, however, dislike the limited number of polyphonic voices of the Primer synth. The magazine MusicTech scored it ten out of ten, giving it the labels of "excellence" and "innovation" and calling it "the best training in synth sound-design we've come across". It received a 2014 Electronic Musician Editors' Choice Award, with the magazine calling it "too cool for school". References External links Official website Kickstarter Syntorial Site Syntorial overview in Spanish Software synthesizers Educational software for MacOS Educational software for Windows
1530518
https://en.wikipedia.org/wiki/Honor%20society
Honor society
In the United States, an honor society is a rank organization that recognizes excellence among peers. Numerous societies recognize various fields and circumstances; the Order of the Arrow, for example, is the National Honor Society of the Boy Scouts of America. Chiefly, the term refers to scholastic honor societies, those that recognize students who excel academically or as leaders among their peers, often within a specific academic discipline. Many honor societies invite students to become members based on scholastic rank (the top x% of a class) and/or the grade point averages of those students, either overall or for classes taken within the discipline for which the honor society provides recognition. In cases where academic achievement would not be an appropriate criterion for membership, other standards are usually required (such as completion of a particular ceremony or training program). It is also common for a scholastic honor society to add a criterion relating to the character of the student. Some honor societies are invitation-only, while others allow unsolicited applications. Finally, membership in an honor society might be considered exclusive, i.e., a member of such an organization cannot join other honor societies representing the same field. Academic robes and regalia, which identify by color the degree, school, and other distinctions, are controlled under the rules of a voluntary Intercollegiate Code. In addition, various colored devices such as stoles, scarves, cords, tassels, and medallions are used to indicate membership in a student's honor society. Of these, cords and mortarboard tassels are most often used to indicate membership. Most institutions allow honor cords, tassels and/or medallions for honor society members. Stoles are less common, but they are available for a few honor societies. Virtually all, if not all, honor societies have chosen such colors, and may sell these items of accessory regalia as a service or fundraiser. Many fraternities and sororities are referred to by their membership or by non-members as honor societies, and vice versa, though this is not always the case. Honor societies exist at the high school, collegiate/university, and postgraduate levels, although university honor societies are by far the most prevalent. In America, the oldest academic society, Phi Beta Kappa, was founded as a social and literary fraternity in 1776 at the College of William and Mary and was later organized as an honor society in 1898, following the establishment of the honor societies Tau Beta Pi for engineering (1885), Sigma Xi for scientific research (1886), and Phi Kappa Phi for all disciplines (1897). Mortar Board was established in 1918 as the first national honor society for senior women, with chapters at four institutions: Cornell University, the University of Michigan, Ohio State University and Swarthmore College. The society later became coeducational. The Association of College Honor Societies (ACHS) is a predominantly American, voluntary association of national collegiate and post-graduate honor societies. ACHS was formed in 1925 to establish and maintain desirable standards for honor societies. While ACHS membership certifies that member societies meet these standards, not all legitimate honor societies apply for membership in ACHS.
List of honor societies Notable national and international honor societies based in or at schools include the following: General collegiate scholastic honor societies These societies are open to all academic disciplines, though they may have other affinity requirements. Alpha Chi, (all academic fields), colors: Emerald green and Sapphire blue Alpha Kappa Mu, (all academic fields) Alpha Lambda Delta, (freshman scholarship) Alpha Sigma Lambda, (non-traditional students), colors: Burgundy and Gold Alpha Sigma Nu, (scholarship, loyalty and service at Jesuit institutions of higher education), colors: Maroon and Gold Chi Alpha Sigma, (college student-athletes) Chi Beta Lambda, (competency-based learning), colors: Navy blue, Pink, and White Delta Epsilon Sigma, (all academic fields at traditionally Catholic colleges and universities) Delta Epsilon Tau, (Distance Education Accrediting Commission institutions) Epsilon Tau Pi, (general scholarship, Eagle Scouts) Golden Key International Honour Society (academics) Mortar Board (Scholars chosen for Leadership united to Serve) National Society of Collegiate Scholars, (scholarship/leadership/service), colors: Purple and Gold Phi Eta Sigma, (freshman scholarship) Phi Kappa Phi, (all academic fields) Phi Sigma Pi, (all academic fields), colors: Purple and Gold Phi Tau Phi, (all academic fields, Republic of China) Tau Sigma, (transfer students) Lambda Sigma, (student leadership, scholarship, and service) Order of Omega (fraternities and sororities) Sigma Alpha Lambda, (all academic fields) Leadership These societies recognize leadership, with a scholarship component; multi-disciplinary. Order of The Key Honor Society (leadership) Mortar Board (leadership), colors: Gold and Silver National Residence Hall Honorary, (residence hall leadership/service) Omicron Delta Kappa, (leadership and academics; juniors, seniors, graduate students, alumni, faculty and staff, honorary) Sigma Alpha Lambda, (leadership) Sigma Alpha Pi, (leadership); see National Society of Leadership and Success (NSLS), certified by the U.S. Department of Education Phi Kappa Alpha (Syracuse), (leadership); dormant as of 1961 Military These are collegiate-based honor societies for students in the armed forces. There are other non-collegiate honor societies serving military branches, often listed as professional fraternities. Arnold Air Society, (United States Air Force cadets) Pershing Rifles, (United States armed forces) Scabbard and Blade, (ROTC cadets and midshipmen) Liberal arts These societies are open to the traditional liberal arts disciplines, and may be department-specific. Some are grouped by discipline subheader.
Alpha Kappa Delta, (sociology), color: Teal Alpha Upsilon Alpha, (reading and language arts) Chi Delta Phi, (women's literary, later co-ed) Chi Sigma Iota, (counseling) Delta Epsilon Chi, (divinity, national honor society of the Association for Biblical Higher Education) Eta Sigma Phi, (classics) Kappa Omicron Nu, (human sciences), colors: Burgundy and Cream Lambda Alpha, (anthropology) Lambda Iota Tau, (literature) Phi Alpha Theta, (history) Phi Beta Kappa, (undergraduate arts and sciences), colors: Pink and Sky blue Phi Sigma Tau, (philosophy) Phi Upsilon Omicron, (family and consumer science), colors: Violet, Gold and Cream Pi Alpha Alpha, (public administration) Pi Gamma Mu, (social sciences) Pi Sigma Alpha, (political science) Psi Chi, (psychology), colors: Navy blue and Platinum Sigma Iota Rho, (international relations) Sigma Tau Delta, (English) Theta Alpha Kappa, (religious studies/theology/philosophy) Theta Chi Beta, (religious studies) Business Alpha Iota Delta, (decision sciences) Alpha Mu Alpha, (marketing), color: Red Beta Alpha Psi, (accounting and finance) Beta Gamma Sigma, (AACSB-accredited business programs), colors: Gold and Yale blue Delta Mu Delta, (ACBSP-accredited business programs) Eta Sigma Delta, (international hospitality management, ICHRIE) Mu Kappa Tau, (marketing) Omega Rho, (operations research, management science) Omicron Delta Epsilon, (economics), colors: Blue and Gold Sigma Beta Delta, (business, management and administration) Education Kappa Delta Pi, (education), colors: Jade green and Violet Pi Lambda Theta, (education) Pi Omega Pi, (business education) Phi Beta Delta, (international education) Eta Sigma Gamma, (health education) Fine arts Alpha Psi Omega, (theatre) Chi Tau Epsilon, (dance) Delta Phi Delta, (art) Delta Psi Omega, (theatre) Kappa Pi, (art) Kappa Kappa Psi, (music - band) Mu Beta Psi, (music) Pi Kappa Lambda, (music) Pi Nu Epsilon, (music) Tau Beta Sigma, (music - band) Theta Alpha Phi, (theatre) Journalism and communications Kappa Tau Alpha, (journalism/mass communication), colors: Light blue and Gold Lambda Pi Eta, (communication) Society for Collegiate Journalists (SCJ), (journalism) Languages Alpha Mu Gamma, (foreign languages), color: Gold Delta Phi Alpha, (German), colors: Black, Red and Gold Pi Delta Phi, (French), colors: Blue, White and Red Sigma Delta Pi, (Spanish and Portuguese), colors: Red and Gold Phi Sigma Iota, (modern foreign languages, classics, linguistics, philology, comparative literature, bilingual education, second language acquisition), colors: Purple and White Law The Order of Barristers (law) Order of the Coif (law) Phi Delta Phi, (law), colors: Garnet and Pearl blue Alpha Phi Sigma, (criminal justice, law) Lambda Epsilon Chi, (paralegal) Sciences These societies are open to students in the STEM disciplines, and may be department-specific. Some are grouped by discipline subheader.
Beta Beta Beta, (biology), colors: Blood red and Leaf green Beta Kappa Chi, (natural sciences/mathematics) Chi Beta Phi, (science and mathematics) Chi Epsilon Pi, (meteorology) Gamma Theta Upsilon, (geography) Gamma Sigma Epsilon, (chemistry) Iota Sigma Pi, (chemistry and related fields, women's) Phi Lambda Upsilon, (chemistry) Phi Sigma, (biological sciences) Phi Tau Sigma, (food science and technology) Pi Epsilon, (environmental sciences) Sigma Gamma Epsilon, (geology/Earth sciences), colors: Gold, Blue, and Silver Sigma Lambda Chi, (construction management technology) Sigma Pi Sigma, (physics), colors: Forest green and Ivory Sigma Xi, ΣΞ (research in science and engineering), colors: Blue and Gold Sigma Zeta, (natural sciences/mathematics/computer science) Agriculture Alpha Mu, (agricultural systems management) Delta Tau Alpha, (Honor Society of Agriculture) Gamma Sigma Delta, (Honor Society of Agriculture), colors: Sand and Forest green Pi Alpha Xi, (horticulture), colors: Nile green and Cerulean blue Xi Sigma Pi, (forestry), colors: Green and Gray Architecture Sigma Lambda Alpha, (landscape architecture), colors: Gold and Green Tau Sigma Delta, (architecture), colors: White and Gold Engineering Within the larger group of STEM disciplines, these societies serve engineering disciplines. Alpha Epsilon, (agricultural/food/biological engineering) Alpha Eta Mu Beta, (biomedical engineering) Alpha Nu Sigma, (nuclear engineering) Alpha Pi Mu, (industrial engineering) Alpha Sigma Mu, (metallurgy/materials engineering) Chi Epsilon, (civil engineering), colors: Purple and White Eta Kappa Nu, (electrical engineering, computer engineering), colors: Navy blue and Scarlet Omega Chi Epsilon, (chemical engineering), colors: Maroon and White Phi Alpha Epsilon, (architectural engineering) Pi Epsilon Tau, (petroleum engineering and related fields), colors: Gold and Black Pi Tau Sigma, (mechanical engineering), colors: Teal and Maroon Rho Beta Epsilon, (robotics), colors: Crimson, Gold, and Black Sigma Gamma Tau, (aerospace engineering) Tau Alpha Pi, (engineering technology) Tau Beta Pi, (engineering, all types), colors: Brown and White Upsilon Pi Epsilon, (computer science/computer engineering) Health sciences This section includes all health care related fields, including veterinary science.
Alpha Epsilon Delta, (pre-medical), colors: Red and Violet Alpha Omega Alpha, (medical students and physicians), colors: Forest green, Gold and White Beta Sigma Kappa, (optometry) Delta Omega, (public health) Iota Tau Alpha, (athletic training) Nu Rho Psi, (neuroscience) Phi Zeta, (veterinary medicine) Pi Delta, (podiatry) Pi Theta Epsilon, (occupational therapy) Rho Chi, (pharmacy), colors: Purple and White Sigma Theta Tau, (nursing), colors: Orchid and White Sigma Phi Alpha, (dental hygiene) Sigma Phi Omega, (gerontology) Sigma Sigma Phi, (osteopathic medicine) Upsilon Phi Delta, (health administration) Information technology Beta Phi Mu, (library science/information science/information technology) Epsilon Pi Tau, (technology) Gamma Nu Eta, (information technology) Order of the Sword & Shield, (homeland security, intelligence, emergency management, and all protective studies) Upsilon Pi Epsilon, (computer information systems, computer science) Mathematics Kappa Mu Epsilon, (mathematics) Mu Alpha Theta, (mathematics, high school and two-year colleges) Mu Sigma Rho, (statistics) Pi Mu Epsilon, (mathematics) Local honor societies Some universities have their own independent, open honor societies, which are not affiliated with any national or international organization. Such organizations typically recognize students who have succeeded academically, irrespective of their field of study. These include: Activities Honorary Society at the University of Illinois at Chicago Aquinas Honor Society at the University of St. Thomas Bisonhead at the University at Buffalo, The State University of New York Burning Spear Society at Florida State University Cap and Skull at Rutgers University FHC Society at The College of William & Mary Florida Blue Key at the University of Florida Friar Society at the University of Texas at Austin Iron Arrow Honor Society at the University of Miami Lion's Paw at the Pennsylvania State University Matteo Ricci Society at Fordham University Owl and Key at the University of Utah Phalanx Honor Society at Rensselaer Polytechnic Institute and at Clarkson University Plumb Bob at the University of Minnesota Quill and Dagger at Cornell University Raven Society at the University of Virginia Society of Innocents at the University of Nebraska–Lincoln Skull and Bones at the Pennsylvania State University Skull and Dagger at the University of Southern California Sphinx Head at Cornell University Dean William Tate Society at the University of Georgia Texnikoi Engineering Honorary at Ohio State University Tiger Brotherhood at Clemson University White Key Society at Rensselaer Polytechnic Institute Certificate, vocational, technical, and workforce education Alpha Beta Kappa National Technical Honor Society Two-year colleges and community colleges Alpha Beta Gamma, (business at two-year colleges) Alpha Gamma Sigma, (California community colleges) Beta Chi, (criminal justice at two-year colleges) Kappa Beta Delta, (business at community colleges) Phi Rho Pi, (forensics at two-year colleges) Phi Theta Kappa, (all academic fields at community and junior colleges) Psi Beta, (psychology at two-year colleges) Sigma Kappa Delta, (English at community and junior colleges) Sigma Zeta, (natural sciences/mathematics/computer science; associate membership available for community and junior colleges) Secondary school societies Commonly referred to as high school societies.
California Scholarship Federation Cum Laude Society (general) German National Honor Society – Delta Epsilon Phi (Deutsche Ehrenverbindung) (German) International Thespian Society (theatre), colors: Blue and Gold Key Club Mu Alpha Theta, (mathematics) National Art Honor Society (visual arts) National Beta Club National Forensic League (public speaking), colors: Red and Silver National Honor Society (high school general) National Honorary Beta Club (high school general) National Honorary Junior Beta Club (middle school general) National Junior Honor Society (middle school general) Quill and Scroll (journalism) Science National Honor Society (science) Société Honoraire de Français (French) Spanish National Honor Society (Sociedad Honoraria Hispánica) (Spanish) Technology Student Association (STEM), colors: Red, blue and white Tri-M Music Honor Society (music), colors: Pink Boy Scouts Order of the Arrow, National BSA Honor Society Tribe of Mic-O-Say, Heart of America Council and Pony Express Council Firecrafter, Crossroads of America Council See also Professional fraternities and sororities Association of College Honor Societies (ACHS) References External links Educational organizations based in the United States
29492999
https://en.wikipedia.org/wiki/IPXE
IPXE
iPXE is an open-source implementation of the Preboot eXecution Environment (PXE) client software and bootloader, created in 2010 as a fork of gPXE. It can be used to enable computers without built-in PXE support to boot from the network, or to provide additional features beyond what built-in PXE provides. While standard PXE clients use only TFTP to load parameters and programs from the server, iPXE client software can use additional protocols, including HTTP, iSCSI, ATA over Ethernet (AoE), and Fibre Channel over Ethernet (FCoE). Also, on certain hardware, iPXE client software can use a Wi-Fi link, as opposed to the wired connection required by the PXE standard. The iPXE client is a superset of, and can replace or supplement, prior PXE implementations. iPXE is the official replacement for gPXE: it has every feature of gPXE, and users can seamlessly upgrade from gPXE to iPXE. Before 2008, gPXE was known as Etherboot. PXE implementation iPXE can be booted by a computer either by replacing (re-flashing) the existing standard PXE ROM on a supported network interface card (NIC), or by booting the NIC's standard PXE ROM and then chainloading into the iPXE binary, thus obtaining its features without the need to re-flash the NIC. iPXE firmware embeds its configuration script into the firmware image, so any change to the configuration requires the NIC to be re-flashed. iPXE implements its own PXE stack, either by using the network card driver provided by iPXE or the standard PXE UNDI driver if iPXE is chainloaded from a standard PXE ROM. Implementing an independent PXE stack allows clients without the standard PXE ROM on their NICs to use an alternative iPXE stack by loading it from an alternative medium. Boot manager Although its basic role is to implement a PXE stack, iPXE can also be used as a network boot manager with limited capabilities for menu-based interaction with end users. iPXE can fetch boot files using multiple network protocols, such as TFTP, NFS, HTTP or FTP. iPXE can act as a boot loader for the Linux kernel, with support for multiboot. For other operating systems, for example Windows CE, iPXE chain-loads the corresponding Microsoft boot loader. Additionally, iPXE is scriptable and can load COMBOOT and COM32 SYSLINUX extensions, which, for example, allows SYSLINUX-based graphical menu capabilities to be available for network booting. See also PXE PXELINUX gPXE References External links Etherboot/gPXE wiki Introduction to Network Booting and Etherboot Network booting Free boot loaders Free network-related software
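As an illustration of the scripting capability described above, the following is a minimal iPXE boot script of the kind that can be embedded in the iPXE image or chainloaded from a server; the HTTP server, file paths, and kernel arguments are hypothetical placeholders.

#!ipxe
# Bring up the network interface and obtain an address via DHCP.
dhcp
# Fetch a Linux kernel and initrd over HTTP (hypothetical server and paths).
kernel http://boot.example.com/vmlinuz console=tty0
initrd http://boot.example.com/initrd.img
# Boot the loaded kernel; 'chain http://.../menu.ipxe' could instead be used
# to load another script or boot loader.
boot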
657071
https://en.wikipedia.org/wiki/FLTK
FLTK
Fast Light Toolkit (FLTK, pronounced fulltick) is a cross-platform widget (graphical control element) library for graphical user interfaces (GUIs), developed by Bill Spitzak and others. Made to accommodate 3D graphics programming, it has an interface to OpenGL, but it is also suitable for general GUI programming. Using its own widget, drawing and event systems abstracted from the underlying system-dependent code, it allows programs to be written that look the same on all supported operating systems. FLTK is free and open-source software, licensed under the GNU Lesser General Public License (LGPL) with an added clause permitting static linking from applications with incompatible licenses. In contrast to user interface libraries like GTK, Qt, and wxWidgets, FLTK uses a more lightweight design and restricts itself to GUI functionality. Because of this, the library is very small (the FLTK "Hello World" program is around 100 KiB) and is usually statically linked. It also avoids complex macros, separate code preprocessors, and the use of some advanced C++ features: templates, exceptions, run-time type information (RTTI) and, in FLTK 1.x, namespaces. Combined with the modest size of the package, this makes it relatively easy for new users to learn. These advantages come with corresponding disadvantages: FLTK offers fewer widgets than most GUI toolkits and, because of its use of non-native widgets, does not have a native look and feel on any platform. Meaning of the name FLTK was originally designed to be compatible with the Forms Library written for Silicon Graphics (SGI) machines (a derivative of this library called XForms is still used quite often). In that library, all functions and structures start with fl_. This naming was extended to all new methods and widgets in the C++ library, and the prefix FL was taken as the name of the library. After FL was released as open source, it was discovered that searching for "FL" on the Internet was a problem, because it is also the abbreviation for Florida. After much debate and searching for a new name for the toolkit, which was by then already in use by several people, Bill Spitzak came up with Fast Light Tool Kit (FLTK). Architecture FLTK is an object-oriented widget toolkit written in the programming language C++. Whereas GTK is mainly for the X Window System, FLTK also works on other platforms, including Microsoft Windows (interfaced with the Windows API) and OS X (interfaced with Quartz). A Wayland back-end is being discussed. FLTK2 gained experimental support for optionally using the Cairo graphics library. Language bindings A library written in one programming language may be used in another language if language bindings are written. FLTK has a range of bindings for various languages. FLTK was mainly designed for, and is written in, the programming language C++. However, bindings exist for other languages, for example Lua, Perl, Python, Ruby, Rust and Tcl. For FLTK 1.x, this example creates a window with an Okay button:

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Button.H>

int main(int argc, char *argv[]) {
  // Create a 330x190 pixel top-level window.
  Fl_Window* w = new Fl_Window(330, 190);
  // Add a 100x35 pixel button at position (110, 130) inside the window.
  new Fl_Button(110, 130, 100, 35, "Okay");
  w->end();            // Stop adding child widgets to the window.
  w->show(argc, argv); // Show the window, honoring FLTK's standard command-line switches.
  return Fl::run();    // Run the event loop until the last window is closed.
}

GUI designers FLTK includes the Fast Light User Interface Designer (FLUID), a graphical GUI designer that generates C++ source and header files.
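The language bindings mentioned above generally mirror the C++ API. As a rough illustration, the same window could be written in Python with the third-party pyFLTK binding; this is a sketch that assumes pyFLTK is installed, and the exact module layout may vary between versions.

# Sketch using the pyFLTK binding (assumes the 'fltk' Python package is installed).
from fltk import Fl, Fl_Window, Fl_Button

window = Fl_Window(330, 190)
button = Fl_Button(110, 130, 100, 35, "Okay")
window.end()   # Stop adding child widgets.
window.show()  # Display the window.
Fl.run()       # Enter the event loop.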
Use Many programs and projects use FLTK, including: Nanolinux, a 14 MB Linux distribution XFDOS, a FreeDOS-based distribution with a GUI, porting Nano-X and FLTK Agenda VR3, a Linux-based personal digital assistant whose bundled and third-party software is based on FLTK Amnesia: The Dark Descent, by Frictional Games, which uses FLTK as its launcher application MwendanoWD, a logic puzzle for personal computers by YPH Audio: Fldigi, amateur radio software that allows data transmission and text chat via digital modes such as PSK31 Giada, open-source looper, micro-sequencer and sample player software Prodatum, a synthesizer preset editor with a lifelike interface design ZynAddSubFX, an open-source software synthesizer DiSTI GL Studio, a human-machine interface development tool Engineering: ForcePAD, an intuitive tool to visualise the behavior of structures subject to loading and boundary conditions Gmsh, an open-source finite element mesh generator RoboCIM, software to simulate and control operation of a servo robot system and external devices Equinox Desktop Environment (EDE) FlBurn, optical disc burning software for Linux Graphics: Avimator, a Biovision Hierarchy (BVH) editor CinePaint, open-source deep-paint software, migrating from GTK to FLTK ITK-SNAP, an open-source software application for medical image segmentation Nuke, a digital compositing program, until version 5; later versions use Qt Open Movie Editor OpenVSP, NASA parametric aircraft sketching software, recently open-sourced PosteRazor, open-source poster printing software for Windows, OS X and Linux Tilemap Studio, an open-source tilemap editor for Game Boy, Color, Advance, DS, and SNES projects SmallBASIC, Windows port Web browsers: Dillo; Dillo-2 was based on FLTK-2, and the abandonment of that FLTK branch without an official release was a major cause of Dillo-3 being started, using FLTK 1.3 Fifth, which replicates the functioning of early Opera NetRider Brain Visualizer, an open-source interactive visualizer for large-scale 3D brain models, part of the Brain Organization Simulation System (BOSS) developed at Stony Brook University X window managers: FLWM miwm Versions This version history is an example of the sometimes tumultuous nature of open-source development. 1.0.x A prior stable version, now unmaintained. 1.1.x A prior stable version, now unmaintained. 2.0 branch A development branch, long thought to be the next step in FLTK's evolution, with many new features and a cleaner programming style. It never achieved stability, development has largely ceased, and the branch is now inactive. 1.2.x An attempt to take some of the best features of 2.0 and merge them back into the more popular 1.1 branch. It is no longer developed. 1.3.x The current stable release. Provides UTF-8 support. 1.4.x The current development branch. Adds more features to 1.3. 3.0 branch Mostly a conceptual model for future work. Now inactive. See also GTK gtkmm (C++ binding of GTK) FOX toolkit IUP (software), a multi-platform toolkit to build native graphical user interfaces Juce Qt (software) Visual Component Framework (VCF) Widget toolkit wxWidgets, a cross-platform open-source C++ widget toolkit developed by the community Ultimate++ List of widget toolkits References External links Cross-platform free software Free computer libraries Free software programmed in C++ Software that uses Cairo (graphics) Software using the LGPL license Widget toolkits X-based libraries
61335100
https://en.wikipedia.org/wiki/Amazon%20Rekognition
Amazon Rekognition
Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform that was launched in 2016. It has been sold to and used by a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and the Orlando, Florida police, as well as private entities. Capabilities Rekognition provides a number of computer vision capabilities, which can be divided into two categories: algorithms that are pre-trained on data collected by Amazon or its partners, and algorithms that a user can train on a custom dataset. As of July 2019, Rekognition provides the following computer vision capabilities. Pre-trained algorithms Celebrity recognition in images Facial attribute detection in images, including gender, age range, emotions (e.g. happy, calm, disgusted), whether the face has a beard or mustache, whether the face has eyeglasses or sunglasses, whether the eyes are open, whether the mouth is open, whether the person is smiling, and the location of several markers such as the pupils and jaw line People Pathing, which enables tracking of people through a video; an advertised use case of this capability is to track sports players for post-game analysis Text detection and classification in images Unsafe visual content detection Algorithms that a user can train on a custom dataset SearchFaces enables users to import a database of images with pre-labeled faces, to train a machine learning model on this database, and to expose the model as a cloud service with an API. The user can then post new images to the API and receive information about the faces in the image. The API can be used to expose a number of capabilities, including identifying faces of known people, comparing faces, and finding similar faces in a database. Face-based user verification History and use 2017 In late 2017, the Washington County, Oregon Sheriff's Office began using Rekognition to identify suspects' faces. Rekognition was marketed as a general-purpose computer vision tool, and an engineer working for Washington County decided to use the tool for facial analysis of suspects. Rekognition was offered to the department for free, and Washington County became the first US law enforcement agency known to use Rekognition. In 2018, the agency logged over 1,000 facial searches. According to the Washington Post, by 2019 the county was paying about $7 a month for all of its searches. The relationship was unknown to the public until May 2018. In 2018, Rekognition was also used to help identify celebrities during a royal wedding telecast. 2018 In April 2018, it was reported that FamilySearch was using Rekognition to enable their users to "see which of their ancestors they most resemble based on family photographs". In early 2018, the FBI also began using it as a pilot program for analyzing video surveillance. In May 2018, it was reported by the ACLU that Orlando, Florida was running a pilot using Rekognition for facial analysis in law enforcement, with that pilot ending in July 2019. After the report, on June 22, 2018, Gizmodo reported that Amazon workers had written a letter to CEO Jeff Bezos requesting that he cease selling Rekognition to US law enforcement, particularly ICE and Homeland Security. A letter was also sent to Bezos by the ACLU. On June 26, 2018, it was reported that the Orlando police force had ceased using Rekognition after their trial contract expired, reserving the right to use it in the future; the Orlando Police Department said that they had "never gotten to the point to test images" due to old infrastructure and low bandwidth.
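The facial analysis involved in deployments like these is invoked through the AWS SDKs. A minimal sketch using the AWS SDK for Python (boto3) follows; the S3 bucket and image name are hypothetical, and configured AWS credentials are assumed.

import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Detect faces in an image stored in S3; Attributes=["ALL"] requests the
# full set of facial attributes rather than the default subset.
response = client.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo.jpg"}},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    print(f"Age {age['Low']}-{age['High']}, smiling: {face['Smile']['Value']}")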
In July 2018, the ACLU released a test showing that Rekognition had falsely matched 28 members of Congress with mugshot photos, disproportionately Congresspeople of color. 25 House members afterwards sent a letter to Bezos expressing concern about Rekognition. Amazon responded that the ACLU's test had been run at the default 80 percent confidence threshold, while it recommended that law enforcement only use matches rated at 99 percent confidence. The Washington Post reported that Oregon instead has officers pick a "best of five" result rather than adhering to the recommendation. In September 2018, it was reported that Mapillary was using Rekognition to read the text on parking signs (e.g. no stopping, no parking, or specific parking hours) in cities. In October 2018, it was reported that Amazon had earlier that year pitched Rekognition to the U.S. Immigration and Customs Enforcement agency. Amazon defended government use of Rekognition. On December 1, 2018, it was reported that 8 Democratic lawmakers had said in a letter that Amazon had "failed to provide sufficient answers" about Rekognition, writing that they had "serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans' willingness to exercise their First Amendment rights in public." 2019 In January 2019, MIT researchers published a peer-reviewed study asserting that Rekognition had more difficulty identifying dark-skinned females than competing products from IBM and Microsoft. In the study, Rekognition misidentified darker-skinned women as men 31% of the time, but made no mistakes for light-skinned men. Amazon said the report misinterpreted the research results and relied on an improper "default confidence threshold." In January 2019, Amazon's shareholders "urged Amazon to stop selling Rekognition software to law enforcement agencies." Amazon in response defended its use of Rekognition, but supported new federal oversight and guidelines to "make sure facial recognition technology cannot be used to discriminate." In February 2019, it was reported that Amazon was collaborating with the National Institute of Standards and Technology (NIST) on developing standardized tests to improve accuracy and remove bias from facial recognition. In March 2019, an open letter regarding Rekognition, with around 50 signatures, was sent to Amazon by a group of prominent AI researchers, criticizing its sale to law enforcement. In April 2019, Amazon was told by the Securities and Exchange Commission that it had to allow votes on two shareholder proposals seeking to limit Rekognition. Amazon had argued that the proposals were an "insignificant public policy issue for the Company" not related to Amazon's ordinary business, but its appeal was denied, and the vote was set for May. Neither proposal passed: on May 24, 2019, 2.4% of shareholders voted to stop selling Rekognition to government agencies, while a second proposal calling for a study into Rekognition and civil rights received 27.5% support. In August 2019, the ACLU again used Rekognition on members of government, with 26 of 120 lawmakers in California flagged as matches to mugshots. Amazon stated that the ACLU was "misusing" the software in the tests by not dismissing results that did not meet Amazon's recommended accuracy threshold of 99%.
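The confidence threshold at issue in these disputes is an explicit API parameter. A minimal boto3 sketch of face comparison at Amazon's recommended 99 percent threshold (the bucket and file names are hypothetical; the service's default threshold is 80):

import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# SimilarityThreshold filters out face matches scored below the given value.
response = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "gallery.jpg"}},
    SimilarityThreshold=99,
)

for match in response["FaceMatches"]:
    print(f"Match with similarity {match['Similarity']:.1f}%")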
By August 2019, there had been protests against ICE's use of Rekognition to surveil immigrants. In March 2019, Amazon announced a Rekognition update that would improve emotion detection, and in August 2019, "fear" was added to the emotions that Rekognition can detect. 2020 In June 2020, Amazon announced it was implementing a one-year moratorium on police use of Rekognition, in response to the George Floyd protests. Controversy regarding facial analysis Racial and gender bias In 2018, MIT researchers Joy Buolamwini and Timnit Gebru published a study called Gender Shades. In this study, a set of images was collected, and the faces in the images were labeled with face position, gender, and skin tone information. The images were run through SaaS facial recognition platforms from Face++, IBM, and Microsoft. On all three of these platforms, the classifiers performed best on male faces (with error rates on female faces being 8.1% to 20.6% higher than error rates on male faces), and they performed worst on dark-skinned female faces (with error rates ranging from 20.8% to 30.4%). The authors hypothesized that this discrepancy is due principally to Megvii (the developer of Face++), IBM, and Microsoft having more light-skinned male faces than dark-skinned female faces in their training data, i.e. dataset bias. In January 2019, researchers Inioluwa Deborah Raji and Joy Buolamwini published a follow-up paper that ran the experiment again a year later, on the latest versions of the same three SaaS facial recognition platforms, plus two additional platforms: Kairos and Amazon Rekognition. While the systems' overall error rates improved over the previous year, all five of the systems again performed better on male faces than on dark-skinned female faces. See also Amazon Lex Amazon Mechanical Turk Amazon Polly Amazon SageMaker Amazon Web Services Facial recognition system Timeline of Amazon Web Services References 2016 software Rekognition Cloud infrastructure Computer vision software Data mining and machine learning software Facial recognition software Object recognition and categorization
47910741
https://en.wikipedia.org/wiki/Nektar%2B%2B
Nektar++
Nektar++ is a spectral/hp element framework designed to support the construction of efficient, high-performance, scalable solvers for a wide range of partial differential equations (PDEs). The code is released as open source under the MIT license. Although primarily driven by application-based research, it has been designed as a platform to support the development of novel numerical techniques in the area of high-order finite element methods. Nektar++ is a modern object-oriented code written in C++ and is actively developed by members of the SherwinLab at Imperial College London (UK) and Kirby's group at the University of Utah (US). Capabilities Nektar++ includes the following capabilities: One-, two- and three-dimensional problems; Multiple and mixed element types, i.e. triangles, quadrilaterals, tetrahedra, prisms and hexahedra; Both hierarchical and nodal expansion bases with variable and heterogeneous polynomial order between elements; Continuous Galerkin, discontinuous Galerkin, hybridizable discontinuous Galerkin and flux reconstruction operators; Multiple implementations of finite element operators for efficient execution on a wide range of CPU architectures; A comprehensive range of explicit, implicit and implicit-explicit (IMEX) time-integration schemes; Preconditioners tailored to high-order finite element methods; Numerical stabilization techniques such as dealiasing and spectral vanishing viscosity; Parallel execution, scalable to thousands of processor cores; Pre-processing tools to generate meshes, or to manipulate and convert meshes generated with third-party software into a Nektar++-readable format; Extensive post-processing capabilities for manipulating output data; Cross-platform support for Linux, Mac OS X and Windows; Support for running jobs on cloud computing platforms via the prototype Nekkloud interface from the libhpc project; A wide user community, support and an annual workshop. Stable versions of the software are released monthly, and it is supported by an extensive testing framework which ensures correctness across a range of platforms and architectures. Other capabilities currently under active development include p-adaption, r-adaption and support for accelerators (GPGPU, Intel Xeon Phi). Application domains The development of the Nektar++ framework is driven by a number of aerodynamics and biomedical engineering applications, and consequently the software package includes a number of pre-written solvers for these areas. Incompressible flow This solver time-integrates the incompressible Navier-Stokes equations for performing large-scale direct numerical simulation (DNS) in complex geometries. It also supports the linearised and adjoint forms of the Navier-Stokes equations for evaluating the hydrodynamic stability of flows. Compressible flow External aerodynamics simulations of high-speed compressible flows are supported through solution of the compressible Euler or Navier-Stokes equations. Cardiac electrophysiology This solver supports the solution of the monodomain and bidomain models of action potential propagation through the myocardium. Other application areas include: shallow water equations; reaction-diffusion-advection problems; a pulse wave propagation solver for modelling arterial networks; acoustic perturbation equations; and linear elasticity equations. License Nektar++ is free and open-source software, released under the MIT license.
Alternative software Free and open-source software Nek5000 (BSD) Advanced Simulation Library (AGPL) Code Saturne (GPL) FEATool Multiphysics Gerris Flow Solver (GPL) OpenFOAM (GPL) SU2 code (LGPL) PyFR Proprietary software ADINA CFD ANSYS CFX ANSYS Fluent COMSOL Multiphysics Pumplinx Simcenter STAR-CCM+ KIVA (software) RELAP5-3D References External links Official resources Nektar++ home page Nektar++ Gitlab repository Computational fluid dynamics Free science software Free computer-aided design software Scientific simulation software
44504153
https://en.wikipedia.org/wiki/List%20of%20Jupiter%20trojans%20%28Trojan%20camp%29%20%281%E2%80%93100000%29
List of Jupiter trojans (Trojan camp) (1–100000)
This is a partial list of Jupiter's trojans (60° behind Jupiter) with numbers 1–100000. If available, an object's mean diameter is taken from the NEOWISE data release, which the Small-Body Database has also adopted. Mean diameters are rounded to two significant figures if smaller than 100 kilometers. Estimates are shown in italics and are calculated from a magnitude-to-diameter conversion, using an assumed albedo of 0.057. 1–100000 This list contains 376 objects sorted in numerical order. References Trojan_0 Jupiter Trojans (Trojan Camp)
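For reference, the magnitude-to-diameter conversion mentioned above is commonly written with the standard 1329 km constant, where H is the absolute magnitude and p_V the assumed geometric albedo (here 0.057):

D = \frac{1329\,\text{km}}{\sqrt{p_V}} \cdot 10^{-H/5}

As a worked example, with p_V = 0.057 an object of absolute magnitude H = 10 gives roughly D ≈ (1329/0.239) × 10^{-2} ≈ 56 km.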
660850
https://en.wikipedia.org/wiki/Visualization%20%28graphics%29
Visualization (graphics)
Visualization or visualisation (see spelling differences) is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, and other fields. A typical application of visualization is the field of computer graphics. The invention of computer graphics (and 3D computer graphics) may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization. Overview The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia (2nd century AD), a map of China (1137 AD), and Minard's map (1861) of Napoleon's invasion of Russia. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles. Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics. Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic and to special areas in the field, for example volume visualization. Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer-drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time. Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. Abstract visualizations show completely conceptual constructs in 2D or 3D; the generated shapes are completely arbitrary. Model-based visualizations either place overlays of data on real or digitally constructed images of reality, or make a digital construction of a real object directly from the scientific data. Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below.
Some of these specialized programs have been released as open-source software, very often having their origins in universities, within an academic environment where sharing software tools and giving access to source code is common. There are also many proprietary software packages of scientific visualization tools. Models and frameworks for building visualizations include the data-flow models popularized by systems such as AVS, IRIS Explorer, and the VTK toolkit, and data-state models in spreadsheet systems such as the Spreadsheet for Visualization and Spreadsheet for Images. Applications Scientific visualization As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning. Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data. Scientific visualization focuses on and emphasizes the representation of higher-order data using primarily graphics and animation techniques. It is a very important part of visualization and perhaps the oldest, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the more common. Data visualization Data visualization is a related subcategory of visualization dealing with statistical graphics and geospatial data (as in thematic cartography) that is abstracted in schematic form. Information visualization Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC, which included Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are the dynamics of visual representation and the interactivity. Strong techniques enable the user to modify the visualization in real time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question. Educational visualization Educational visualization uses a simulation to create an image of something so that it can be taught about. This is very useful when teaching about a topic that is difficult to otherwise see, for example atomic structure, because atoms are far too small to be studied easily without expensive and difficult-to-use scientific equipment. Knowledge visualization The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer-based and non-computer-based visualization methods complementarily. Thus, properly designed visualization is an important part not only of data analysis but of the knowledge transfer process as well. Knowledge transfer may be significantly improved using hybrid designs, as they enhance information density but may decrease clarity as well.
For example, visualization of a 3D scalar field may be implemented using isosurfaces for the field distribution and textures for the gradient of the field. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and predictions by using various complementary visualizations. See also: picture dictionary, visual dictionary Product visualization Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawings and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally, technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD drawings and models have several advantages over hand-made drawings, such as the possibility of 3D modeling, rapid prototyping, and simulation. 3D product visualization promises more interactive experiences for online shoppers, but also challenges retailers to overcome hurdles in the production of 3D content, as large-scale 3D content production can be extremely costly and time-consuming. Visual communication Visual communication is the communication of ideas through the visual display of information. Primarily associated with two-dimensional images, it includes alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability. Visual analytics Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface". Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces. Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security. Interactivity Interactive visualization or interactive visualisation is a branch of graphic visualization in computer science that involves studying how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient.
For a visualization to be considered interactive, it must satisfy two criteria: Human input: control of some aspect of the visual representation of information, or of the information being represented, must be available to a human; and Response time: changes made by the human must be incorporated into the visualization in a timely manner. In general, interactive visualization is considered a soft real-time task. One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were present (where instead it is remote), sized appropriately (where instead it is on a much smaller or larger scale than humans can sense directly), or had shape (where instead it might be completely abstract). Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (i.e., telephone), video (i.e., a video conference), or text (i.e., IRC) messages. Human control of visualization The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide. People can: Pick some part of an existing visual representation; Locate a point of interest (which may not have an existing representation); Stroke a path; Choose an option from a list of options; Valuate by inputting a number; and Write by inputting text. All of these actions require a physical device. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills. These input actions can be used to control both the information being represented and the way that the information is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering. More frequently, the representation of the information is changed rather than the information itself. Rapid response to human input Experiments have shown that a delay of more than 20 ms between when input is provided and when the visual representation is updated is noticeable to most people. Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology.
Thus the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system. A framerate of 50 frames per second (frame/s) is considered good while 0.1 frame/s would be considered poor. The use of framerates to characterize interactivity is slightly misleading however, since framerate is a measure of bandwidth while humans are more sensitive to latency. Specifically, it is possible to achieve a good framerate of 50 frame/s but if the images generated refer to changes to the visualization that a person made more than 1 second ago, it will not feel interactive to a person. The rapid response time required for interactive visualization is a difficult constraint to meet and there are several approaches that have been explored to provide people with rapid visual feedback based on their input. Some include Parallel rendering – where more than one computer or video card is used simultaneously to render an image. Multiple frames can be rendered at the same time by different computers and the results transferred over the network for display on a single monitor. This requires each computer to hold a copy of all the information to be rendered and increases bandwidth, but also increases latency. Also, each computer can render a different region of a single frame and send the results over a network for display. This again requires each computer to hold all of the data and can lead to a load imbalance when one computer is responsible for rendering a region of the screen with more information than other computers. Finally, each computer can render an entire frame containing a subset of the information. The resulting images plus the associated depth buffer can then be sent across the network and merged with the images from other computers. The result is a single frame containing all the information to be rendered, even though no single computer's memory held all of the information. This is called parallel depth compositing and is used when large amounts of information must be rendered interactively. Progressive rendering – where a framerate is guaranteed by rendering some subset of the information to be presented and providing incremental (progressive) improvements to the rendering once the visualization is no longer changing. Level-of-detail (LOD) rendering – where simplified representations of information are rendered to achieve a desired framerate while a person is providing input and then the full representation is used to generate a still image once the person is through manipulating the visualization. One common variant of LOD rendering is subsampling. When the information being represented is stored in a topologically rectangular array (as is common with digital photos, MRI scans, and finite difference simulations), a lower resolution version can easily be generated by skipping n points for each 1 point rendered. Subsampling can also be used to accelerate rendering techniques such as volume visualization that require more than twice the computations for an image twice the size. By rendering a smaller image and then scaling the image to fill the requested screen space, much less time is required to render the same data. 
Frameless rendering – where the visualization is no longer presented as a time series of images, but as a single image where different regions are updated over time.

See also
Graphical perception
Spatial visualization ability

References

Further reading
Bederson, Benjamin B., and Ben Shneiderman. The Craft of Information Visualization: Readings and Reflections, Morgan Kaufmann, 2003.
Cleveland, William S. (1993). Visualizing Data.
Cleveland, William S. (1994). The Elements of Graphing Data.
Hansen, Charles D., and Chris Johnson. The Visualization Handbook, Academic Press, June 2004.
Kravetz, Stephen A., and David Womble, eds. Introduction to Bioinformatics. Totowa, N.J.: Humana Press, 2003.
Schroeder, Will, Ken Martin, and Bill Lorensen. The Visualization Toolkit, August 2004.
Spence, Robert. Information Visualization: Design for Interaction (2nd edition), Prentice Hall, 2007.
Tufte, Edward R. (1992). The Visual Display of Quantitative Information.
Tufte, Edward R. (1990). Envisioning Information.
Tufte, Edward R. (1997). Visual Explanations: Images and Quantities, Evidence and Narrative.
Ward, Matthew, Georges Grinstein, and Daniel Keim. Interactive Data Visualization: Foundations, Techniques, and Applications, May 2010.
Wilkinson, Leland. The Grammar of Graphics, Springer.

External links
National Institute of Standards and Technology
Scientific Visualization Tutorials, Georgia Tech
Scientific Visualization Studio (NASA)
Visual-literacy.org (e.g. Periodic Table of Visualization Methods)

Conferences
Many conferences occur where interactive visualization academic papers are presented and published.
Amer. Soc. of Information Science and Technology (ASIS&T SIGVIS) Special Interest Group in Visualization Information and Sound
ACM SIGCHI
ACM SIGGRAPH
ACM VRST
Eurographics
IEEE Visualization
ACM Transactions on Graphics
IEEE Transactions on Visualization and Computer Graphics

Infographics
Computational science
Computer graphics
Data modeling
26567300
https://en.wikipedia.org/wiki/Principles%20of%20Information%20Security
Principles of Information Security
Principles of Information Security is a textbook written by Michael Whitman and Herbert Mattord and published by Course Technology. It is in widespread use in higher education in the United States as well as in many English-speaking countries.

Editions
First edition: published in 2002.
Second edition: published in 2004.
Third edition: published in 2008. The bound text contained 550 pages.
Fourth edition: published January 1, 2011; authors: Michael E. Whitman, Herbert J. Mattord.
Fifth edition: published November 18, 2014; authors: Michael E. Whitman, Herbert J. Mattord.
Sixth edition: published January 2018; authors: Michael E. Whitman, Herbert J. Mattord.
Seventh edition: published July 2021; authors: Michael E. Whitman, Herbert J. Mattord.

Authors
Michael E. Whitman, Ph.D., CISM, CISSP. Cengage biography can be found at .
Herbert J. Mattord, CISM, CISSP. Cengage biography can be found at .

Other book projects
Whitman, M. E. & Mattord, H. J., Hands-On Information Security Lab Manual, 3rd ed. © 2009 Course Technology, Boston, MA.
Whitman, M. E. & Mattord, H. J., Principles of Incident Response and Disaster Recovery, © 2006 Course Technology, Boston, MA.
Whitman, M. E. & Mattord, H., Management of Information Security, 3rd ed. © 2010 Course Technology, Boston, MA. Note that this text has been adopted at over 100 institutions globally and is recommended by ASIS as a means to prepare for the CPP certification examination.
Whitman, M. E. & Mattord, H., Management of Information Security, 2nd ed. © 2007 Course Technology, Boston, MA.
Whitman, M. E. & Mattord, H. J., Management of Information Security, © 2004 Course Technology, Boston, MA.
Whitman, M. E. & Mattord, H. J., Guide to Firewalls and VPNs, © 2011 Course Technology, Boston, MA, contract pending.
Whitman, M. E. & Mattord, H. J., Readings and Cases in the Management of Information Security, © 2005 Course Technology, Boston, MA.
Whitman, M. E. & Mattord, H. J., Readings and Cases in the Management of Information Security: Law & Ethics, © 2009 Course Technology, Boston, MA.
Dr. Whitman and Professor Mattord, working with others, have collaborated on the following projects:
Whitman, M. E., Shackleford, D. & Mattord, H. J., Hands-On Information Security Lab Manual, 2nd ed. © 2005 Course Technology, Boston, MA.
Whitman, M. E., Mattord, H. J., & Austin, R. D., Guide to Firewalls and Network Security: Intrusion Detection and VPNs, © 2009 Course Technology, Boston, MA.

External links
http://www.cengage.com/cengage/instructor.do?disciplinenumber=412&product_isbn=9781423901778
http://www.amazon.com/Principles-Information-Security-Michael-Whitman/dp/1423901770/ref=sr_1_2?ie=UTF8&s=books&qid=1268755675&sr=8-2

References

Cengage books
Information Security, Principles of
26021993
https://en.wikipedia.org/wiki/LeapFrog%20Didj
LeapFrog Didj
The LeapFrog Didj is a handheld console made by LeapFrog Enterprises. The Didj was priced at $89.99 when it debuted on August 22, 2008. Its library mostly consists of educational software aimed at children based on licensed properties such as those from Disney, Nickelodeon, and Marvel. The Didj runs on a customized Linux distribution with OpenGL, plus homebrew applications and demos.

Games
Didji Racing: Tiki Tropics
Foster's Home for Imaginary Friends
Hannah Montana
High School Musical
Indiana Jones
Jetpack Heroes
Nancy Drew: Mystery in the Hollywood Hills
Neopets: Quizara's Curse
Nicktoons: Android Invasion
Sonic the Hedgehog
SpongeBob SquarePants: Fists of Foam
Star Wars: Jedi Trials
Star Wars: The Clone Wars
Super Chicks
Tinker Bell and the Lost Treasure
Wolverine and the X-Men

References

External links

Children's educational video games
Educational toys
ARM-based video game consoles
Handheld game consoles
Linux-based devices
Embedded Linux
Products introduced in 2008
Video games developed in the United States
4339301
https://en.wikipedia.org/wiki/Attack%20tree
Attack tree
Attack trees are conceptual diagrams showing how an asset, or target, might be attacked. Attack trees have been used in a variety of applications. In the field of information technology, they have been used to describe threats on computer systems and possible attacks to realize those threats. However, their use is not restricted to the analysis of conventional information systems. They are widely used in the fields of defense and aerospace for the analysis of threats against tamper-resistant electronics systems (e.g., avionics on military aircraft). Attack trees are increasingly being applied to computer control systems (especially relating to the electric power grid). Attack trees have also been used to understand threats to physical systems. Some of the earliest descriptions of attack trees are found in papers and articles by Bruce Schneier, when he was CTO of Counterpane Internet Security. Schneier was clearly involved in the development of attack tree concepts and was instrumental in publicizing them. However, the attributions in some of the early publicly available papers on attack trees also suggest the involvement of the National Security Agency in the initial development. Attack trees are very similar, if not identical, to threat trees. Threat trees were discussed in 1994 by Edward Amoroso.

Basic
Attack trees are multi-leveled diagrams consisting of one root, leaves, and children. From the bottom up, child nodes are conditions which must be satisfied to make the direct parent node true; when the root is satisfied, the attack is complete. Each node may be satisfied only by its direct child nodes. A node may be the child of another node; in such a case, it becomes logical that multiple steps must be taken to carry out an attack. For example, consider classroom computers which are secured to the desks. To steal one, the securing cable must be cut or the lock unlocked. The lock may be unlocked by picking or by obtaining the key. The key may be obtained by threatening a key holder, bribing a keyholder, or taking it from where it is stored (e.g. under a mousemat). Thus a four-level attack tree can be drawn, of which one path is (Bribe Keyholder, Obtain Key, Unlock Lock, Steal Computer). An attack described in a node may require one or more of many attacks described in child nodes to be satisfied. Our above condition shows only OR conditions; however, an AND condition can be created, for example, by assuming an electronic alarm which must be disabled if, and only if, the cable will be cut. Rather than making this task a child node of cutting the lock, both tasks can simply reach a summing junction. Thus the path ((Disable Alarm, Cut Cable), Steal Computer) is created. Attack trees are related to the established fault tree formalism. Fault tree methodology employs boolean expressions to gate conditions when parent nodes are satisfied by leaf nodes. By including a priori probabilities with each node, it is possible to calculate the probabilities of higher nodes using Bayes' rule. However, in reality accurate probability estimates are either unavailable or too expensive to gather. With respect to computer security with active participants (i.e., attackers), the probability distributions of events are probably neither independent nor uniformly distributed, hence naive Bayesian analysis is unsuitable.
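The AND/OR semantics above lend themselves to a simple recursive evaluation. The sketch below is illustrative only: it models the classroom-computer example with invented leaf costs and computes the cheapest combination of leaf attacks that satisfies the root, taking the minimum over OR children and the sum over AND children. This is exactly the kind of resource comparison discussed next.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"             # "leaf", "or", or "and"
    cost: float = 0.0              # invented resource cost for leaf attacks
    children: List["Node"] = field(default_factory=list)

def min_cost(node: Node) -> float:
    """Cheapest way to satisfy a node: min over OR children, sum over AND."""
    if node.kind == "leaf":
        return node.cost
    costs = [min_cost(c) for c in node.children]
    return min(costs) if node.kind == "or" else sum(costs)

# The classroom-computer tree from the text; all costs are made up.
obtain_key = Node("obtain key", "or", children=[
    Node("threaten keyholder", cost=60),
    Node("bribe keyholder", cost=100),
    Node("take key from under mousemat", cost=5),
])
unlock_lock = Node("unlock lock", "or", children=[
    Node("pick lock", cost=40),
    obtain_key,
])
cut_cable = Node("cut cable", "and", children=[       # the summing junction
    Node("cut securing cable", cost=10),
    Node("disable alarm", cost=50),
])
steal_computer = Node("steal computer", "or", children=[unlock_lock, cut_cable])

print(min_cost(steal_computer))    # 5: via the key under the mousemat
```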
Since the Bayesian analytic techniques used in fault tree analysis cannot legitimately be applied to attack trees, analysts instead use other techniques to determine which attacks will be preferred by a particular attacker. These may involve comparing the attacker's capabilities (time, money, skill, equipment) with the resource requirements of the specified attack. Attacks which are near or beyond the attacker's ability to perform are less preferred than attacks that are perceived as cheap and easy. The degree to which an attack satisfies the adversary's objectives also affects the attacker's choices. Attacks that are both within the adversary's capabilities and satisfy their goals are more likely than those that do not.

Examination
Attack trees can become large and complex, especially when dealing with specific attacks. A full attack tree may contain hundreds or thousands of different paths all leading to completion of the attack. Even so, these trees are very useful for determining what threats exist and how to deal with them. Attack trees can lend themselves to defining an information assurance strategy. It is important to consider, however, that implementing policy to execute this strategy changes the attack tree. For example, computer viruses may be protected against by refusing the system administrator access to directly modify existing programs and program folders, instead requiring a package manager be used. This adds to the attack tree the possibility of design flaws or exploits in the package manager. One could observe that the most effective way to mitigate a threat on the attack tree is to mitigate it as close to the root as possible. Although this is theoretically sound, it is not usually possible to simply mitigate a threat without other implications to the continued operation of the system. For example, the threat of viruses infecting a Windows system may be largely reduced by using a standard (non-administrator) account and NTFS instead of the FAT file system so that normal users are unable to modify the operating system. Implementing this negates any way, foreseen or unforeseen, that a normal user may come to infect the operating system with a virus; however, it also requires that users switch to an administrative account to carry out administrative tasks, thus creating a different set of threats on the tree and more operational overhead. Also, users are still able to infect files to which they have write permissions, which may include files and documents. Systems using cooperative agents that dynamically examine and identify vulnerability chains, creating attack trees, have been built since 2000.

Attack tree modeling software
Several commercial packages and open source products are available.

Open source
ADTool from University of Luxembourg
Ent
SeaMonster

Commercial
AttackTree+ from Isograph
SecurITree from Amenaza Technologies
RiskTree from 2T Security

See also
Computer insecurity
Computer security
Computer virus
Fault tree analysis
IT risk
Threat (computer)
Vulnerability (computing)

References

Computer network security
5933562
https://en.wikipedia.org/wiki/International%20Software%20Testing%20Qualifications%20Board
International Software Testing Qualifications Board
The International Software Testing Qualifications Board (ISTQB) is a software testing certification board that operates internationally. Founded in Edinburgh in November 2002, the ISTQB is a non-profit association legally registered in Belgium. ISTQB Certified Tester is a standardized qualification for software testers, and the certification is offered by the ISTQB. The qualifications are based on a syllabus, and there is a hierarchy of qualifications and guidelines for accreditation and examination. More than 1 million ISTQB exams have been delivered and over 721,000 certifications issued; the ISTQB consists of 66 member boards worldwide representing more than 100 countries as of May 2020.

Product portfolio
The current ISTQB product portfolio follows a matrix approach characterized by:
Levels, which identify progressively increasing learning objectives: Foundation, Advanced, Expert.
Streams, which identify clusters of certification modules: Core, Agile, Specialist.
ISTQB streams focus on:
Core – these modules correspond to the "historical" ISTQB certifications, and so they cover software testing topics in a breadth-first, broad, horizontal way; are valid for any technology, methodology, or application domain; and allow for a common understanding.
Agile – these modules address testing practices specifically for the Agile SDLC.
Specialist – these modules are new in the ISTQB product portfolio and address specific topics in a vertical way: they can address specific quality characteristics (e.g. usability, security, performance), technologies that involve specific test approaches (e.g. model-based testing, mobile testing), or specific test activities (e.g. test automation, test metrics management).

Pre-conditions
Pre-conditions relate to certification exams and provide a natural progression through the ISTQB scheme, which helps people pick the right certificate and informs them about what they need to know. The ISTQB Core Foundation is a pre-condition for any other certification. Additional rules for ISTQB pre-conditions are summarized in the following:
Foundation Core shall be required for Advanced Level Core.
Foundation Core is the default pre-requisite for Foundation Level Specialist certifications unless stated otherwise in the specific module; to date, all Foundation Level Specialist certifications require Foundation Core as a pre-requisite.
Any Advanced Level Specialist or Expert Level Specialist module which is linked to a lower-level Specialist module shall require certification at the lower level.
Expert Level modules shall require certification at the corresponding Advanced Level.
Any Advanced Level Specialist module which is not linked to a lower-level Specialist module shall require the Foundation Core as a pre-condition.
Such rules are depicted graphically in the ISTQB Product Portfolio map. ISTQB provides a list of referenced books from some previous syllabi online.

Exams
The Foundation and Advanced exams consist of multiple-choice tests. Certification is valid for life (Foundation Level and Advanced Level), and there is no requirement for recertification. ISTQB member boards are responsible for the quality and the auditing of the examination. Worldwide there are testing boards in 66 countries (date: May 2020). Authorized exam providers are also able to offer exams, including e-exams (e.g. at Pearson VUE).

Content
The current ISTQB Foundation Level certification is based on the 2018 syllabus.
The Foundation Level qualification is suitable for anyone who needs to demonstrate practical knowledge of the fundamental concepts of software testing, including people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. It is also appropriate for individuals who need a basic understanding of software testing, including project managers, quality managers, software development managers, business analysts, IT directors and management consultants. The different Advanced Level exams are more practical and require deeper knowledge in special areas. Test Manager deals with planning and control of the test process. Test Analyst covers, among other things, reviews and black-box testing methods. Technical Test Analyst includes component tests (also called unit tests), requiring knowledge of white-box testing and non-functional testing methods – this section also includes test tools.

See also
Software testing
Software verification and validation
Sri Lanka Software Testing Board

References

External links

Information technology organizations
Software testing
2002 establishments in Scotland
Organizations established in 2002
Organisations based in Belgium
3003
https://en.wikipedia.org/wiki/Adrian%20Lamo
Adrian Lamo
Adrián Alfonso Lamo Atwood (February 20, 1981 – March 14, 2018) was an American threat analyst and hacker. Lamo first gained media attention for breaking into several high-profile computer networks, including those of The New York Times, Yahoo!, and Microsoft, culminating in his 2003 arrest. Lamo was best known for reporting U.S. soldier Chelsea Manning to Army criminal investigators in 2010 for leaking hundreds of thousands of sensitive U.S. government documents to WikiLeaks. Lamo died on March 14, 2018, at the age of 37.

Early life and education
Adrian Lamo was born in Malden, Massachusetts, near Boston. His father, Mario Ricardo Lamo, was Colombian. Adrian Lamo attended high schools in Bogotá and San Francisco, from which he did not graduate, but received a GED and was court-ordered to take courses at American River College, a community college in Sacramento County, California. Lamo began his hacking efforts by hacking games on the Commodore 64 and through phone phreaking.

Activities and legal issues
Lamo first became known for operating the AOL watchdog site Inside-AOL.com.

Security compromise
Lamo was a grey hat hacker who viewed the rise of the World Wide Web with a mixture of excitement and alarm. He felt that others failed to see the importance of internet security in the early days of the World Wide Web. Lamo would break into corporate computer systems, but he never caused damage to the systems involved. Instead, he would offer to fix the security flaws free of charge, and if the flaw was not fixed, he would alert the media. Lamo hoped to be hired by a corporation to attempt to break into systems and test their security, a practice that came to be known as red teaming. However, by the time this practice was common, his felony conviction prevented him from being hired. In December 2001, Lamo was praised by Worldcom for helping to fortify their corporate security. In February 2002, he broke into the internal computer network of The New York Times, added his name to the internal database of expert sources, and used the paper's LexisNexis account to conduct research on high-profile subjects. The New York Times filed a complaint, and a warrant for Lamo's arrest was issued in August 2003 following a 15-month investigation by federal prosecutors in New York. At 10:15 a.m. on September 9, after spending a few days in hiding, he surrendered to the US Marshals in Sacramento, California. He re-surrendered to the FBI in New York City on September 11, and pleaded guilty to one felony count of computer crimes against Microsoft, LexisNexis, and The New York Times on January 8, 2004. In July 2004, Lamo was sentenced to two years' probation, with six months to be served in home detention, and ordered to pay $65,000 in restitution. He was convicted of compromising security at The New York Times, Microsoft, Yahoo!, and WorldCom. When challenged for a response to allegations that he was glamorizing crime for the sake of publicity, his response was: "Anything I could say about my person or my actions would only cheapen what they have to say for themselves". When approached for comment during his criminal case, Lamo frustrated reporters with non-sequiturs, such as "Faith manages" and "It's a beautiful day." At his sentencing, Lamo expressed remorse for harm he had caused by his intrusions. The court record quotes him as adding: "I want to answer for what I have done and do better with my life."
He subsequently declared on the question-and-answer site Quora: "We all own our actions in fullness, not just the pleasant aspects of them." Lamo accepted that he had committed mistakes.

DNA controversy
On May 9, 2006, while 18 months into a two-year probation sentence, Lamo refused to give the United States government a blood sample, which they had demanded in order to record his DNA in their CODIS system. According to his attorney at the time, Lamo had a religious objection to giving blood but was willing to give his DNA in another form. On June 15, 2007, lawyers for Lamo filed a motion citing the Book of Genesis as one basis for Lamo's religious opposition to the giving of blood. On June 20, 2007, Lamo's legal counsel reached a settlement agreement with the U.S. Department of Justice whereby Lamo would submit a cheek swab in place of the blood sample.

WikiLeaks and Chelsea Manning
In February 2009, a partial list of the anonymous donors to the WikiLeaks website was leaked and published on the WikiLeaks website. Some media sources indicated at the time that Lamo was among the donors on the list. Lamo commented on his Twitter page, "Thanks WikiLeaks, for leaking your donor list ... That's dedication." In May 2010, Lamo reported to U.S. Army authorities that Chelsea Manning, then Bradley Manning, had claimed to have leaked a large body of classified documents, including 260,000 classified United States diplomatic cables. Lamo stated that Manning also "took credit for leaking" the video footage of the July 12, 2007, Baghdad airstrike, which has since come to be known as the "Collateral Murder" video. Lamo stated, in an article written by Kevin Poulsen in Wired magazine, that he would not have turned Manning in "if lives weren't in danger". He characterized her as "in a war zone and basically trying to vacuum up as much classified information as [she] could, and just throwing it up into the air." WikiLeaks responded by denouncing Lamo and Poulsen as "notorious felons, informers & manipulators", and said: "journalists should take care." According to Andy Greenberg of Forbes, Lamo was a volunteer "adversary characterization" analyst for Project Vigilant, a Florida-based semi-secret government contractor, which encouraged him to inform the government about the alleged WikiLeaks source. The head of Project Vigilant, Chet Uber, claimed, "I'm the one who called the U.S. government ... All the people who say that Adrian is a narc, he did a patriotic thing. He sees all kinds of hacks, and he was seriously worried about people dying." Lamo was criticized by fellow hackers, such as those at the Hackers on Planet Earth conference in 2010, who labeled him a "snitch". Another commented to Lamo, following his speech during a panel discussion, saying: "From my perspective, I see what you have done as treason." In April 2011, WikiLeaks founder Julian Assange called Lamo "a very disreputable character", and said it was not right to call him a financial contributor to WikiLeaks, since Lamo's monetary support amounted to only US$20 on one occasion. Assange said it was "mischievous to suggest the individual has anything to do with WikiLeaks." Lamo characterized his decision to work with the government as morally ambiguous, but objectively necessary, writing in The Guardian: "There were no right choices that day, only less wrong ones.
It was cold, it was needful, and it was no one's to make except mine," adding to The Guardian's Ed Pilkington: "There were hundreds of thousands of documents—let's drop the number to 250,000 to be conservative—and doing nothing meant gambling that each and every one would do no harm if no warning was given." The Taliban insurgency later announced its intention to execute Afghan nationals named in the leaks as having cooperated with the U.S.-led coalition in Afghanistan. By that time, the United States had received months of advance warning that their names were among the leaks. Manning was arrested and incarcerated in the U.S. military justice system and later sentenced to 35 years in confinement, a sentence that President Barack Obama, at the end of his presidential term, commuted to a total of seven years, including time served. Lamo responded to the commutation with a post on Medium and an interview with U.S. News & World Report.

Greenwald, Lamo, and Wired magazine
Lamo's role in the Manning case drew criticism from Glenn Greenwald, who suggested that Lamo lied to Manning by turning Manning in, and then lied after the fact to cover up the circumstances of Manning's confessions. This drew a response from Wired: "At his most reasonable, Greenwald impugns our motives, attacks the character of our staff and carefully selects his facts and sources to misrepresent the truth and generate outrage in his readership." In an article about the Manning case, Greenwald mentioned Wired reporter Kevin Poulsen's 1994 felony conviction for computer hacking, suggesting that "over the years, Poulsen has served more or less as Lamo's personal media voice." Greenwald was skeptical of an earlier story by Poulsen about Lamo's institutionalization on psychiatric grounds, writing: "Lamo claimed he was diagnosed with Asperger's syndrome, a somewhat fashionable autism diagnosis which many stars in the computer world have also claimed." In an article entitled "The Worsening Journalistic Disgrace at Wired", Greenwald wrote that Wired was "actively conceal[ing] from the public, for months on end, the key evidence [the full Lamo–Manning chat logs] in a political story that has generated headlines around the world." On July 13, 2011, Wired published the Lamo–Manning chat logs in full, stating: "The most significant of the unpublished details have now been publicly established with sufficient authority that we no longer believe any purpose is served by withholding the logs." Greenwald wrote of the newly released logs that in his opinion they validated his claim that Wired had concealed important evidence.

Criticism of Anonymous
Lamo had been critical of media coverage of the hacker collective Anonymous, saying that media outlets have over-hyped and mythologized the group. He also said that Anonymous is not the "invulnerable" group it is claimed to be, and he could see "no rational point in what they're doing."

Film and television
On August 22, 2002, Lamo was removed from a segment of NBC Nightly News when, after being asked to demonstrate his skills for the camera, he gained access to NBC's internal network. NBC was concerned that they broke the law by taping Lamo while he (possibly) broke the law. Lamo was a guest on The Screen Savers five times beginning in 2002. Hackers Wanted, a documentary film focusing on Lamo's life as a hacker, was produced by Trigger Street Productions, and narrated by Kevin Spacey. Focusing on the 2003 hacking scene, the film features interviews with Kevin Rose and Steve Wozniak.
The film has not been conventionally released. In May 2009, a video purporting to be a trailer for Hackers Wanted was allegedly leaked to or by the Internet film site Eye Crave Network. In May 2010, an earlier cut of the film was leaked via BitTorrent. According to an insider, what was leaked on the Internet was a very different film from the newer version, which includes additional footage. On June 12, 2010, a director's cut version of the film was also leaked onto torrent sites. Lamo also appeared on Good Morning America, Fox News, Democracy Now!, Frontline, and repeatedly on KCRA-TV News as an expert on netcentric crime and incidents. He was interviewed for the documentaries We Steal Secrets: The Story of WikiLeaks and True Stories: WikiLeaks – Secrets and Lies. Lamo reconnected with Leo Laporte in 2015 as a result of a Quora article on the "dark web" for an episode of The New Screen Savers. Lamo wrote the book Ask Adrian, a collection of his best Q&A drawn from over 500 pages of Quora answers, which have so far received nearly 30,000,000 views.

Personal life and death
Lamo was known as the "Homeless Hacker" for his reportedly transient lifestyle, claiming that he spent much of his travels couch-surfing, squatting in abandoned buildings, and traveling to Internet cafés, libraries, and universities to investigate networks, sometimes exploiting security holes. He usually preferred sleeping on couches, and when he did sleep on beds, he did not sleep under the covers. He would also often wander through homes and offices in the middle of the night, by the light of a flashlight. Lamo was bisexual and volunteered for the gay and lesbian media firm PlanetOut Inc. in the mid-1990s. In 1998, Lamo was appointed to the Lesbian, Gay, Bisexual, Transgender, Queer and Questioning Youth Task Force by the San Francisco Board of Supervisors. Lamo used a wide variety of supplements and drugs throughout his life. His wife, Lauren Fisher, called his drug use "body hacking". One of Lamo's preferred supplements was kratom, which he used as a less-dangerous alternative to opioids. In 2001, he overdosed on prescription amphetamines. After he turned in Manning, his drug use escalated, but he later claimed that he was in recovery. In a 2004 interview with Wired, an ex-girlfriend of Lamo's described him as "very controlling", alleging "he carried a stun gun, which he used on me". The same article claimed a court had issued a restraining order against Lamo; he disputed the claim, writing: "I have never been subject to a restraining order in my life". Lamo said in a Wired article that, in May 2010, after he reported the theft of his backpack, an investigating officer noted unusual behavior and placed him under a 72-hour involuntary psychiatric hold, which was extended to a nine-day hold. Lamo said he was diagnosed with Asperger syndrome at the psychiatric ward. For a period of time in March 2011, Lamo was allegedly "in hiding", claiming that his "life was under threat" after turning in Manning. Lamo died on March 14, 2018, in Wichita, Kansas, at the age of 37. Nearly three months later, the Sedgwick County Regional Forensic Science Center reported that "Despite a complete autopsy and supplemental testing, no definitive cause of death was identified." However, many bottles of pills were found in his home. Several of the pills found there were known to cause severe health problems when combined with kratom. As a result, evidence points to an accidental death due to drug abuse.
See also
List of unsolved deaths

References

External links

1981 births
2018 deaths
American computer criminals
American people of Colombian descent
American River College alumni
Bisexual men
LGBT Hispanic and Latino American people
LGBT people from Massachusetts
Microsoft people
The New York Times people
Hackers
People from Boston
People with Asperger syndrome
Squatters
Unsolved deaths
WikiLeaks
Yahoo! people
21st-century LGBT people
10826
https://en.wikipedia.org/wiki/Fax
Fax
Fax (short for facsimile), sometimes called telecopying or telefax (the latter short for telefacsimile), is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines modulate the transmitted audio frequencies using a digital representation of the page which is compressed to quickly transmit areas which are all-white or all-black.

History

Wire transmission
Scottish inventor Alexander Bain worked on chemical-mechanical fax-type devices and in 1846 was able to reproduce graphic signs in laboratory experiments. He received British patent 9745 on May 27, 1843 for his "Electric Printing Telegraph". Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. The Pantelegraph was invented by the Italian physicist Giovanni Caselli. He introduced the first commercial telefax service between Paris and Lyon in 1865, some 11 years before the invention of the telephone. In 1880, English inventor Shelford Bidwell constructed the scanning phototelegraph that was the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. An account of Henry Sutton's "telephane" was published in 1896. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, widespread in continental Europe especially following a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinographe by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission. The 1888 invention of the telautograph by Elisha Gray marked a further development in fax technology, allowing users to send signatures over long distances, thus allowing the verification of identification or ownership over long distances. On May 19, 1924, scientists of the AT&T Corporation "by a new process of transmitting pictures by electricity" sent 15 photographs by telephone from Cleveland to New York City, such photos being suitable for newspaper reproduction. Previously, photographs had been sent over the radio using this process. The Western Union "Deskfax" fax machine, announced in 1948, was a compact machine that fit comfortably on a desktop, using special spark printer paper.

Wireless transmission
As a designer for the Radio Corporation of America (RCA), in 1924, Richard H. Ranger invented the wireless photoradiogram, or transoceanic radio facsimile, the forerunner of today's "fax" machines. A photograph of President Calvin Coolidge sent from New York to London on November 29, 1924, became the first photo picture reproduced by transoceanic radio facsimile. Commercial use of Ranger's product began two years later. Also in 1924, Herbert E.
Ives of AT&T transmitted and reconstructed the first color facsimile, a natural-color photograph of silent film star Rudolph Valentino in period costume, using red, green and blue color separations. Beginning in the late 1930s, the Finch Facsimile system was used to transmit a "radio newspaper" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any "hot" news, still showed only a passing curiosity about the new medium. By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's "Telecar" telegram delivery vehicles. In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite. Radio fax is still in limited use today for transmitting weather charts and information to ships at sea.

Telephone transmission
In 1964, Xerox Corporation introduced (and patented) what many consider to be the first commercialized version of the modern fax machine, under the name LDX (Long Distance Xerography). This model was superseded two years later with a unit that would truly set the standard for fax machines for years to come. Up until this point facsimile machines were very expensive and hard to operate. In 1966, Xerox released the Magnafax Telecopier, a smaller facsimile machine. This unit was far easier to operate and could be connected to any standard telephone line. This machine was capable of transmitting a letter-sized document in about six minutes. The first sub-minute digital fax machine was developed by Dacom, which built on digital data compression technology originally developed at Lockheed for satellite communication. By the late 1970s, many companies around the world (especially Japanese firms) had entered the fax market. Very shortly after this, a new wave of more compact, faster and more efficient fax machines would hit the market. Xerox continued to refine the fax machine for years after their ground-breaking first machine. In later years it would be combined with copier equipment to create the hybrid machines we have today that copy, scan and fax. Some of the lesser-known capabilities of the Xerox fax technologies included their Ethernet-enabled Fax Services on their 8000 workstations in the early 1980s. Prior to the introduction of the ubiquitous fax machine, one of the first being the Exxon Qwip in the mid-1970s, facsimile machines worked by optical scanning of a document or drawing spinning on a drum. The reflected light, varying in intensity according to the light and dark areas of the document, was focused on a photocell so that the current in a circuit varied with the amount of light. This current was used to control a tone generator (a modulator), the current determining the frequency of the tone produced.
This audio tone was then transmitted using an acoustic coupler (a speaker, in this case) attached to the microphone of a common telephone handset. At the receiving end, a handset's speaker was attached to an acoustic coupler (a microphone), and a demodulator converted the varying tone into a variable current that controlled the mechanical movement of a pen or pencil to reproduce the image on a blank sheet of paper on an identical drum rotating at the same rate.

Computer facsimile interface
In 1985, Hank Magnuski, founder of GammaLink, produced the first computer fax board, called GammaFax. Such boards could provide voice telephony via Analog Expansion Bus.

In the 21st century
Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. In some countries, because electronic signatures on contracts are not yet recognized by law, while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business. In Japan, faxes are still used extensively as of September 2020 for cultural and graphemic reasons. They are available for sending to both domestic and international recipients from over 81% of all convenience stores nationwide. Convenience-store fax machines commonly print the slightly re-sized content of the sent fax in the electronic confirmation slip, in A4 paper size. Use of fax machines for reporting cases during the COVID-19 pandemic has been criticised in Japan for introducing data errors and delays in reporting, slowing response efforts to contain the spread of infections and hindering the transition to working from home. In many corporate environments, freestanding fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office. The once-ubiquitous fax machine has also begun to disappear from the small office and home office environments. Remotely hosted fax-server services are widely available from VoIP and e-mail providers, allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analog modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services. In July 2017 the United Kingdom's National Health Service was said to be the world's largest purchaser of fax machines because the digital revolution had largely bypassed it. In June 2018 the Labour Party said that the NHS had at least 11,620 fax machines in operation, and in December the Department of Health and Social Care said that no more fax machines could be bought from 2019 and that the existing ones must be replaced by secure email by March 31, 2020. Leeds Teaching Hospitals NHS Trust, generally viewed as digitally advanced in the NHS, was engaged in a process of removing its fax machines in early 2019. This involved quite a lot of e-fax solutions because of the need to communicate with pharmacies and nursing homes which may not have access to the NHS email system and may need something in their paper records.
In 2018 two-thirds of Canadian doctors reported that they primarily used fax machines to communicate with other doctors. Faxes are still seen as safer and more secure, and electronic systems are often unable to communicate with each other.

Capabilities
There are several indicators of fax capabilities: group, class, data transmission rate, and conformance with ITU-T (formerly CCITT) recommendations. Since the 1968 Carterphone decision, most fax machines have been designed to connect to standard PSTN lines and telephone numbers.

Group

Analog
Group 1 and 2 faxes are sent in the same manner as a frame of analog television, with each scanned line transmitted as a continuous analog signal. Horizontal resolution depended upon the quality of the scanner, transmission line, and the printer. Analog fax machines are obsolete and no longer manufactured. ITU-T Recommendations T.2 and T.3 were withdrawn as obsolete in July 1996.
Group 1 faxes conform to the ITU-T Recommendation T.2. Group 1 faxes take six minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 1 fax machines are obsolete and no longer manufactured.
Group 2 faxes conform to the ITU-T Recommendations T.3 and T.30. Group 2 faxes take three minutes to transmit a single page, with a vertical resolution of 96 scan lines per inch. Group 2 fax machines are almost obsolete, and are no longer manufactured. Group 2 fax machines can interoperate with Group 3 fax machines.

Digital
A major breakthrough in the development of the modern facsimile system was the result of digital technology, where the analog signal from scanners was digitized and then compressed, resulting in the ability to transmit high rates of data across standard phone lines. The first digital fax machine was the Dacom Rapidfax, first sold in the late 1960s, which incorporated digital data compression technology developed by Lockheed for transmission of images from satellites. Group 3 and 4 faxes are digital formats and take advantage of digital compression methods to greatly reduce transmission times.
Group 3 faxes conform to the ITU-T Recommendations T.30 and T.4. Group 3 faxes take between 6 and 15 seconds to transmit a single page (not including the initial time for the fax machines to handshake and synchronize). The horizontal and vertical resolutions are allowed by the T.4 standard to vary among a set of fixed resolutions:
Horizontal 100 scan lines per inch, vertical 100 scan lines per inch ("Basic")
Horizontal 200 or 204 scan lines per inch, vertical 100 or 98 scan lines per inch ("Standard"), vertical 200 or 196 scan lines per inch ("Fine"), or vertical 400 or 391 (note: not 392) scan lines per inch ("Superfine")
Horizontal 300 scan lines per inch, vertical 300 scan lines per inch
Horizontal 400 or 408 scan lines per inch, vertical 400 or 391 scan lines per inch ("Ultrafine")
Group 4 faxes conform to the ITU-T Recommendations T.563, T.503, T.521, T.6, T.62, T.70, T.411 to T.417. They are designed to operate over 64 kbit/s digital ISDN circuits. The allowed resolutions, a superset of those in the T.4 recommendation, are specified in the T.6 recommendation.
Fax Over IP (FoIP) can transmit and receive pre-digitized documents at near-realtime speeds using ITU-T recommendation T.38 to send digitised images over an IP network using JPEG compression. T.38 is designed to work with VoIP services and is often supported by analog telephone adapters used by legacy fax machines that need to connect through a VoIP service.
Scanned documents are limited to the amount of time the user takes to load the document in a scanner and for the device to process a digital file. The resolution can vary from as little as 150 DPI to 9600 DPI or more. This type of faxing is not related to the e-mail–to–fax service that still uses fax modems at least one way.

Class
Computer modems are often designated by a particular fax class, which indicates how much processing is offloaded from the computer's CPU to the fax modem.
Class 1 (also known as Class 1.0) fax devices perform fax data transfer, while the T.4/T.6 data compression and T.30 session management are performed by software on a controlling computer. This is described in ITU-T recommendation T.31.
What is commonly known as "Class 2" is an unofficial class of fax devices that perform T.30 session management themselves, but the T.4/T.6 data compression is performed by software on a controlling computer. Implementations of this "class" are based on draft versions of the standard that eventually evolved significantly to become Class 2.0. All implementations of "Class 2" are manufacturer-specific.
Class 2.0 is the official ITU-T version of Class 2 and is commonly known as Class 2.0 to differentiate it from many manufacturer-specific implementations of what is commonly known as "Class 2". It uses a different but standardized command set than the various manufacturer-specific implementations of "Class 2". The relevant ITU-T recommendation is T.32.
Class 2.1 is an improvement of Class 2.0 that implements faxing over V.34 (33.6 kbit/s), which boosts faxing speed over fax classes "2" and 2.0, which are limited to 14.4 kbit/s. The relevant ITU-T recommendation is T.32 Amendment 1. Class 2.1 fax devices are referred to as "super G3".

Data transmission rate
Several different telephone-line modulation techniques are used by fax machines. They are negotiated during the fax-modem handshake, and the fax devices will use the highest data rate that both fax devices support, usually a minimum of 14.4 kbit/s for Group 3 fax.

ITU standard | Released | Data rates (bit/s)       | Modulation method
V.27         | 1988     | 4800, 2400               | PSK
V.29         | 1988     | 9600, 7200, 4800         | QAM
V.17         | 1991     | 14400, 12000, 9600, 7200 | TCM
V.34         | 1994     | 28800                    | QAM
V.34bis      | 1998     | 33600                    | QAM
ISDN         | 1986     | 64000                    | digital

Note that "Super Group 3" faxes use V.34bis modulation that allows a data rate of up to 33.6 kbit/s.

Compression
As well as specifying the resolution (and allowable physical size) of the image being faxed, the ITU-T T.4 recommendation specifies two compression methods for decreasing the amount of data that needs to be transmitted between the fax machines to transfer the image. The two methods defined in T.4 are:
Modified Huffman (MH).
Modified READ (MR) (Relative Element Address Designate), optional.
An additional method is specified in T.6: Modified Modified READ (MMR). Later, other compression techniques were added as options to ITU-T recommendation T.30, such as the more efficient JBIG (T.82, T.85) for bi-level content, and JPEG (T.81), T.43, MRC (T.44), and T.45 for grayscale, palette, and colour content. Fax machines can negotiate at the start of the T.30 session to use the best technique implemented on both sides.

Modified Huffman
Modified Huffman (MH), specified in T.4 as the one-dimensional coding scheme, is a codebook-based run-length encoding scheme optimised to efficiently compress whitespace. As most faxes consist mostly of white space, this minimises the transmission time of most faxes.
Each line scanned is compressed independently of its predecessor and successor.

Modified READ
Modified READ, specified as an optional two-dimensional coding scheme in T.4, encodes the first scanned line using MH. The next line is compared to the first, the differences determined, and then the differences are encoded and transmitted. This is effective, as most lines differ little from their predecessor. This is not continued to the end of the fax transmission, but only for a limited number of lines until the process is reset, and a new "first line" encoded with MH is produced. This limited number of lines is to prevent errors propagating throughout the whole fax, as the standard does not provide for error correction. This is an optional facility, and some fax machines do not use MR in order to minimise the amount of computation required by the machine. The limited number of lines is 2 for "Standard"-resolution faxes, and 4 for "Fine"-resolution faxes.

Modified Modified READ
The ITU-T T.6 recommendation adds a further compression type of Modified Modified READ (MMR), which simply allows a greater number of lines to be coded by MR than in T.4. This is because T.6 makes the assumption that the transmission is over a circuit with a low number of line errors, such as digital ISDN. In this case, the number of lines for which the differences are encoded is not limited.

JBIG
In 1999, ITU-T recommendation T.30 added JBIG (ITU-T T.82) as another lossless bi-level compression algorithm, or more precisely a "fax profile" subset of JBIG (ITU-T T.85). JBIG-compressed pages result in 20% to 50% faster transmission than MMR-compressed pages, and up to 30 times faster transmission if the page includes halftone images. JBIG performs adaptive compression, that is, both the encoder and decoder collect statistical information about the transmitted image from the pixels transmitted so far, in order to predict the probability of each next pixel being either black or white. For each new pixel, JBIG looks at ten nearby, previously transmitted pixels. It counts how often in the past the next pixel has been black or white in the same neighborhood, and estimates from that the probability distribution of the next pixel. This is fed into an arithmetic coder, which adds only a small fraction of a bit to the output sequence if the more probable pixel is then encountered. The ITU-T T.85 "fax profile" constrains some optional features of the full JBIG standard, such that codecs do not have to keep data about more than the last three pixel rows of an image in memory at any time. This allows the streaming of "endless" images, where the height of the image may not be known until the last row is transmitted. ITU-T T.30 allows fax machines to negotiate one of two options of the T.85 "fax profile": in "basic mode", the JBIG encoder must split the image into horizontal stripes of 128 lines (parameter L0 = 128) and restart the arithmetic encoder for each stripe; in "option mode", there is no such constraint.

Matsushita Whiteline Skip
A proprietary compression scheme employed on Panasonic fax machines is Matsushita Whiteline Skip (MWS). It can be overlaid on the other compression schemes, but is operative only when two Panasonic machines are communicating with one another. This system detects the blank scanned areas between lines of text, and then compresses several blank scan lines into the data space of a single character. (JBIG implements a similar technique called "typical prediction", if header flag TPBON is set to 1.)
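The run-length idea at the heart of MH can be sketched in a few lines of Python. This toy function is illustrative only: it extracts the alternating white/black run lengths of a single scan line, which a real T.4 encoder would then map to variable-length codewords from the standard's codebook. A mostly blank line collapses to a handful of small numbers, which is why pages dominated by white space compress so well.

```python
from itertools import groupby

def run_lengths(line):
    """Alternating white/black run lengths of one scan line (0=white, 1=black).

    T.4 codes each line as starting with a white run, so a zero-length
    white run is emitted when the first pixel is black.
    """
    runs = [len(list(g)) for _, g in groupby(line)]
    if line and line[0] == 1:
        runs.insert(0, 0)
    return runs

# A mostly white scan line: 10 white, 3 black, 12 white pixels.
print(run_lengths([0] * 10 + [1] * 3 + [0] * 12))  # [10, 3, 12]
```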
Typical characteristics
Group 3 fax machines transfer one or a few printed or handwritten pages per minute in black-and-white (bitonal) at a resolution of 204×98 (normal) or 204×196 (fine) dots per inch. The transfer rate is 14.4 kbit/s or higher for modems and some fax machines, but fax machines support speeds beginning with 2400 bit/s and typically operate at 9600 bit/s. The transferred image formats are called ITU-T (formerly CCITT) fax group 3 or 4. Group 3 faxes have the suffix .g3 and the MIME type image/g3fax. The most basic fax mode transfers in black and white only. The original page is scanned at a resolution of 1728 pixels/line and 1145 lines/page (for A4). The resulting raw data is compressed using a modified Huffman code optimized for written text, achieving average compression factors of around 20. Typically a page needs 10 s for transmission, instead of about 3 minutes for the same uncompressed raw data of 1728×1145 bits at a speed of 9600 bit/s (1728 × 1145 ≈ 1.98 million bits, which takes about 206 s, roughly 3.4 minutes, at 9600 bit/s; dividing by the compression factor of 20 gives roughly 10 s). The compression method uses a Huffman codebook for run lengths of black and white runs in a single scanned line, and it can also use the fact that two adjacent scanlines are usually quite similar, saving bandwidth by encoding only the differences. Fax classes denote the way fax programs interact with fax hardware. Available classes include Class 1, Class 2, Class 2.0 and 2.1, and Intel CAS. Many modems support at least class 1 and often either Class 2 or Class 2.0. Which is preferable to use depends on factors such as hardware, software, modem firmware, and expected use.

Printing process
Fax machines from the 1970s to the 1990s often used direct thermal printers with rolls of thermal paper as their printing technology, but since the mid-1990s there has been a transition towards plain-paper faxes: thermal transfer printers, inkjet printers and laser printers. One of the advantages of inkjet printing is that inkjets can affordably print in color; therefore, many of the inkjet-based fax machines claim to have color fax capability. There is a standard called ITU-T30e (formally ITU-T Recommendation T.30 Annex E) for faxing in color; however, it is not widely supported, so many of the color fax machines can only fax in color to machines from the same manufacturer.

Stroke speed
Stroke speed in facsimile systems is the rate at which a fixed line perpendicular to the direction of scanning is crossed in one direction by a scanning or recording spot. Stroke speed is usually expressed as a number of strokes per minute. When the fax system scans in both directions, the stroke speed is twice this number. In most conventional 20th century mechanical systems, the stroke speed is equivalent to drum speed.

Fax paper
As a precaution, thermal fax paper is typically not accepted in archives or as documentary evidence in some courts of law unless photocopied. This is because the image-forming coating is eradicable and brittle, and it tends to detach from the medium after a long time in storage.

Internet fax
One popular alternative is to subscribe to an Internet fax service, allowing users to send and receive faxes from their personal computers using an existing email account. No software, fax server or fax machine is needed. Faxes are received as attached TIFF or PDF files, or in proprietary formats that require the use of the service provider's software. Faxes can be sent or retrieved from anywhere at any time that a user can get Internet access.
Some services offer secure faxing to comply with stringent HIPAA and Gramm–Leach–Bliley Act requirements to keep medical information and financial information private and secure. Utilizing a fax service provider does not require paper, a dedicated fax line, or consumable resources. Another alternative to a physical fax machine is to make use of computer software which allows people to send and receive faxes using their own computers, utilizing fax servers and unified messaging. A virtual (email) fax can be printed out and then signed and scanned back into the computer before being emailed. Alternatively, the sender can attach a digital signature to the document file. With the surging popularity of mobile phones, virtual fax machines can now be downloaded as applications for Android and iOS. These applications make use of the phone's internal camera to scan fax documents for upload, or they can import documents from various cloud services. Related standards T.4 is the umbrella specification for fax. It specifies the standard image sizes, two forms of image-data compression (encoding), the image-data format, and references T.30 and the various modem standards. T.6 specifies a compression scheme that reduces the time required to transmit an image by roughly 50 percent. T.30 specifies the procedures that a sending and receiving terminal use to set up a fax call, determine the image size, encoding, and transfer speed, the demarcation between pages, and the termination of the call. T.30 also references the various modem standards. V.21, V.27ter, V.29, V.17, V.34: ITU modem standards used in facsimile. The first three were ratified prior to 1980, and were specified in the original T.4 and T.30 standards. V.34 was published for fax in 1994. T.37 The ITU standard for sending a fax-image file via e-mail to the intended recipient of a fax. T.38 The ITU standard for sending Fax over IP (FoIP); the associated MIME type is image/t38. G.711 pass through - this is where the T.30 fax call is carried in a VoIP call encoded as audio. This is sensitive to network packet loss, jitter and clock synchronization. When high-compression voice encodings such as G.729 are used, some fax tonal signals may not be correctly transported across the packet network. SSL Fax An emerging standard that allows a telephone based fax session to negotiate a fax transfer over the internet, but only if both sides support the standard. The standard is partially based on T.30 and is being developed by Hylafax+ developers. See also Black fax Called subscriber identification (CSID) Error correction mode (ECM) Fax art Fax demodulator Fax modem Fax server Faxlore Fultograph Image scanner Internet fax Junk fax Radiofax—image transmission over HF radio Slow-scan television T.38 Fax-over-IP Telautograph Telex Teletex Transmitting Subscriber Identification (TSID) Wirephoto References Further reading Coopersmith, Jonathan, Faxed: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015) 308 pp. "Transmitting Photographs by Telegraph", Scientific American article, 12 May 1877, p. 297 External links Group 3 Facsimile Communication, a 1997 essay with technical details on compression and error codes, and call establishment and release. ITU T.30 Recommendation 1843 introductions American inventions Computer peripherals English inventions German inventions Italian inventions ITU-T recommendations Japanese inventions Office equipment Scottish inventions Telecommunications equipment
67188713
https://en.wikipedia.org/wiki/UK%20Cyber%20Security%20Council
UK Cyber Security Council
The UK Cyber Security Council is the self-regulatory body for the UK cyber security profession, tasked by the UK Government with "the development of a framework that speaks across the different specialisms, setting out a comprehensive alignment of career pathways, including the certifications and qualifications required within certain levels. The Council will lay the structural foundations of the cyber security profession that will enable it to respond to the evolving needs of industry and the wider economy." History In November 2016, the UK Government's National Cyber Security Strategy 2016-2021 policy paper set out "the UK Government’s plan to make Britain secure and resilient in cyberspace". It included ambitions to develop and accredit the cyber security profession by "reinforcing the recognised body of cyber security excellence within the industry and providing a focal point which can advise, shape and inform national policy." In December 2018, the Government's Initial National Cyber Security Skills Strategy policy paper described an ambition for a new, independent body, named as the UK Cyber Security Council. In August 2019, the Department for Digital, Culture, Media and Sport (DCMS) appointed the Institution of Engineering and Technology (IET) as the lead organisation in charge of designing and delivering the new UK Cyber Security Council, alongside 15 other cyber security professional organisations collectively known as the Cyber Security Alliance. In February 2021, the Department for Digital, Culture, Media and Sport confirmed in a statement that the launch of the council was scheduled for the end of March 2021. On March 31, 2021, a press release announced that the Government-mandated Council had officially become an independent entity. See also National Cyber Security Centre (United Kingdom) References External links Official website Information technology organisations based in the United Kingdom
3300050
https://en.wikipedia.org/wiki/Collection%20of%20Computer%20Science%20Bibliographies
Collection of Computer Science Bibliographies
The Collection of Computer Science Bibliographies (founded 1993) is one of the oldest (if not the oldest) bibliography collections freely accessible on the Internet. It is a collection of bibliographies of scientific literature in computer science and (computational) mathematics from various sources, covering most aspects of computer science. The bibliographies are updated weekly from their original locations. As of 2009 the collection contains more than 2.8 million unique references (mostly to journal articles, conference papers and technical reports), clustered in about 1700 bibliographies, and consists of more than 4.4 GB (950 MB gzipped) of BibTeX entries. More than 600,000 references contain cross-references to citing or cited publications. More than 1 million references contain URLs to online versions of the papers. Abstracts are available for more than 1 million entries. There are more than 2,000 links to other sites carrying bibliographic information. Duplicates and links As the Collection of Computer Science Bibliographies consists of many subcollections, there is a substantial overlap (roughly 1/3). At the end of 2008 there were more than 4.2 million records, which represent about 2.8 million unique (in terms of normalized title and authors' last names) bibliographic entries. The number of duplicates may be seen as an advantage, because there is a greater chance of finding a freely available full-text PDF of a searched publication. Publications are clustered by title and last names of authors, so it is possible to find an extended version (e.g. a technical report or thesis) of an article. There are also generated links to Google Scholar and IEEE Xplore in cases where no full-text link is available directly. Almost every bibliographic query may be served in RSS format. Major subcollections arXiv Bibliography Network Project CiteSeerX DBLP LEABib Networked Computer Science Technical Reference Library History The collection was started in 1993 by Alf-Christian Achilles with a simple email-based interface and a limited number of entries. One year later, the first web interface was made available. From then on, the collection was maintained by Achilles in his spare time. At the end of 2002, maintenance was handed over to Paul Ortyl. References External links Official site LEABib NCSTRL Bibliographic databases in computer science TeX BibTeX
3461266
https://en.wikipedia.org/wiki/The%20Computer%20Museum%2C%20Boston
The Computer Museum, Boston
The Computer Museum was a Boston, Massachusetts, museum that opened in 1979 and operated in three locations until 1999. It was once referred to as TCM and is sometimes called the Boston Computer Museum. When the Museum closed in 2000, much of its collection was sent to the Computer History Museum in California. History The Digital Computer History Museum The Digital Equipment Corporation (DEC) Museum Project began in 1975 with a display of circuit and memory hardware in a converted lobby closet of DEC's Main (Mill) Building 12 in Maynard, Massachusetts. In September 1979, with the assistance of Digital Equipment Corporation, Gordon and Gwen Bell founded the Digital Computer Museum in a former RCA building in Marlboro, Massachusetts. Though entirely funded by DEC and housed within a corporate facility, from its inception the Museum's activities were ecumenical, with an industry-wide, international preservation mission. In spring 1982, the Museum received non-profit charitable foundation status from the Internal Revenue Service. In fall 1983, The Computer Museum, which had dropped "Digital" from its title, decided to relocate to Museum Wharf in downtown Boston, sharing a renovated wool warehouse with Boston Children's Museum. Oliver Strimpel, recruited from the Science Museum in London, was appointed to develop a major exhibit on computer graphics and image processing; he was later appointed Executive Director in 1990. On November 13, 1984, the Museum officially re-opened to the public at its new 53,000-square-foot location. The initial set of exhibits featured the pioneering Whirlwind computer, the SAGE computer room, an evolutionary series of computers built by Seymour Cray, and a 20-year timeline of computing developments that included many artifacts collected by Gordon Bell. Also among the opening exhibits was a permanent gallery devoted to the history, technology, and applications of digital imaging entitled The Computer and the Image. Prior to all of this, DEC's Ken Olsen and Mitre Corporation's Robert Everett had, in 1973, "saved Whirlwind from the scrap heap" and "arranged to exhibit it at the Smithsonian." Olsen began warehousing other old computers, even as the Bells, independently, "were thinking about a computer museum" and collecting artifacts. Computer History Museum (California) While the majority of the Museum's energies and funding were focused on the growing exhibitions and educational programs, the resources available for the historical collections remained flat. Though active collection of artifacts continued, there was a lack of suitable collections storage and study space. Furthermore, with the inexorable shift of the U.S. computer industry from Boston to the West Coast, the Museum's Boston location became a handicap from the point of view of collecting as well as industry support. In 1996, a group of Computer Museum Board members established a division of the Museum in Silicon Valley exclusively devoted to collecting and preserving the history of computing. First called The Computer Museum History Center, it was housed in a storage building near Hangar One at Moffett Field, California. In 2001, it changed its name to the Computer History Museum and acquired its own building in Mountain View, California, in 2002. In 1999, the Computer Museum merged with the Museum of Science, Boston. When the Museum closed as an independent entity in 2000, a few artifacts were moved to the Museum of Science for eventual exhibits.
The historical artifact collection was sent to the Computer History Museum, forming the base of that museum's collection. An extensive archive of Computer Museum documents and videos of the history of the Museum, formative memos at Digital Equipment Corporation, and other materials was compiled by Gordon Bell and is now maintained by the Computer History Museum. Archive sections include: exhibits, with layouts and design documents; Pioneer Lecture Series videos; posters; The Computer Bowl; Museum reports and annual reports; and marketing material, such as brochures, guides, leaflets, press releases, and store catalogs. A Files section contains general documents of the founding and operation of the museum from the Internet Archive, the Computer History Museum, Gordon Bell and Gwen Bell, and Gardner Hendrie. An illustrated timeline weaves the sections together to provide an overview. Governance The Computer Museum was governed by a Board of Directors, which appointed the Executive Director and various Board committees to oversee operations and other areas such as collections, exhibits, education, and development. The following served as Chairman of the Board: Kenneth H. Olsen (1982–1984), John William Poduska, Sr. (1984–1988), Gardner C. Hendrie (1988–1993), Charles A. Zraket (1993–1997), and Lawrence Weber (1997–2000). Collections The Museum's collections were jump-started with the collections of Gordon and Gwen Bell, who had been actively collecting since the 1970s. To bring structure and discipline to collecting efforts, an acquisitions policy was developed in which computing materials were classified into Processor, Memory, and Switch categories, known as the PMS classification. The Transducer category was also added to cover input/output devices. The Museum actively collected artifacts throughout its history, though acquisition criteria became more selective over time owing to increasingly strict adherence to collecting criteria and severely limited storage space. Acquired artifacts ranged in size from a single chip to the multiple components of a single mainframe computer. In addition to artifacts, the Museum collected images, film, and video. Noteworthy early acquisitions included parts of Whirlwind I, UNIVAC I, the TX-0, a CPU from the Burroughs ILLIAC IV, the IBM 7030 "Stretch", a NASA Apollo Guidance Computer prototype, a CDC 6600, a CRAY-1, a PDP-1, a PDP-8, an EDSAC storage tube, a Colossus pulley, and components of the Ferranti Atlas and the Manchester Mark I. In June 1984, the collection of artifacts and films numbered 900 cataloged items. Examples of acquisitions of computers in the preceding year included an Apple I, a Burroughs B-500, a Digital Equipment Corp. PDP-1, a Franklin Ace 100, and IBM SAGE AN/FSQ-7 components. Several types of memory were acquired, including core memory, plasma cell memory, rope memory, a selectron tube, magnetic cards, a mercury delay line, and a fixed-head drum. In the following years noteworthy acquisitions of computers included: an Amdahl 470V/6, an Apollo Domain DN100 workstation, a Control Data Little Character, a Data General Eclipse, an Evans and Sutherland Line Drawing System 2, an Osborne 1, a SCELBI 8H minicomputer, and a Sinclair ZX-80. To the nascent historical software collection, the first BASIC written for the Altair and VisiCalc beta test version 0.1 were added. Between fall 1995 and spring 1996, the Museum sponsored the Early Model Personal Computer Contest. A call for the earliest personal computers netted 137 additions to the collections.
The judges, Steve Wozniak, David Bunnell, and Oliver Strimpel, awarded prizes for the earliest machines to John V. Blankenbaker for the Kenbak-1 (1972), Robert Pond for the Altair 8800, Lee Felsenstein for the prototype VDM-1, Don Lancaster for the prototype TVT-1, and Thi T. Truong for the Micral. In 1986-7, the Museum acquired 27 computers, including a CDC 1604, an MIT AI Lab CADR, an MIT Lincoln Lab LINC, a Prime Computer Model 300, a Research Machines 380Z, and a Xerox Alto II. As part of the development of the Smart Machines gallery, robot collecting was especially active, with robots such as Carnegie Mellon University Robotics Institute's Direct Drive Arm I and Pluto Rover, GM Consight-I Project materials, Johns Hopkins University Adaptive Machines Group's Beast, Naval Systems International's Sea Rover, and the Rehabilitation Institute of Pittsburgh's Page Turning Robot. The collections of subassemblies and components, memories, calculating devices and transducers continued to expand as well. Spurred by the difficulty of preserving a fast-evolving technology built by future-oriented engineers and entrepreneurs, the Museum signed a joint collecting agreement with the Smithsonian Institution's National Museum of American History to collectively ensure that important computing artifacts would be preserved. Under this 1987 agreement, a common catalog and database of both museums' collections would be created. Permanent exhibitions The Computer and the Image (1984) In addition to exhibits principally directed to the history of computing, the Museum re-opened in 1984 with a 4,000-square-foot gallery on digital image processing and computer graphics, entitled The Computer and the Image. The exhibits addressed the history of the field, the basic principles of digital image processing and image synthesis, and applications of the technologies. The exhibition featured historical artifacts, explanatory text and images, interactive exhibits, and a computer animation theater. Many of the exhibits were developed with the help of university and corporate research labs. The exhibition was developed under the direction of Oliver Strimpel with Geoff Dutton. Digital image processing The gallery included the history, technology and applications of digital image processing. Possibly the first-ever digital image was acquired from the Jet Propulsion Laboratory, consisting of hand-assembled colored strips of line-printer output from the Mariner 4 Mars probe (1965). Computer graphics Static exhibits included a display of early computer graphic input and output devices, examples of digital typography, the holographic animation American Graph Fleeting (a depiction of U.S. demographic evolution), and A Visualizer's Bestiary, a tableau of real-world objects that have vexed programmers' attempts to render them realistically. Dynamic exhibits included: A Window Full of Polygons, which rendered the view of downtown Boston that visitors see from the gallery on a large pen-plotter, drawing the buildings' silhouettes with changing colors and patterns; an interactive Koch snowflake fractal generator; and the pioneering computer game SPACEWAR! running on a PDP-1 and (more reliably) on a PC. Realistic image synthesis Synthetic lighting and shading algorithms for models of three-dimensional objects have classically been tested by rendering a teapot.
In the early 1970s, Martin Newell, working at the University of Utah, decided to use his teapot as an object with which to test various modeling, lighting and shading techniques. In the summer of 1982, at the ACM SIGGRAPH conference, Newell donated his original teapot to Oliver Strimpel, wryly noting the symbolism of one Englishman giving another Englishman a teapot to be preserved and displayed a stone's throw from the site of the Boston Tea Party protest of 1773. The exhibit displayed Newell's original ceramic teapot alongside an Adage frame buffer display of a Bézier model of it, both responding interactively to changes in lighting selected by museum visitors with switches. Computer animation An animation theater performed a program of pioneering computer-animated shorts, including several from Pixar, such as Luxo Jr. Smart Machines (1987) A permanent gallery devoted to the history and technology of artificial intelligence and robotics opened in 1987. Knowledge-based systems Interactive exhibits focused on expert systems. Examples included a medical diagnosis system, a simple rule-based simulated bargaining store-keeper with whom visitors haggled over the price of a crate of strawberries, a computer composition system, and a system that played tic-tac-toe according to a visitor-selected strategy. Natural language understanding Visitors could sit at computers and ask questions of ELIZA, the automated psychotherapist, noteworthy because users became deeply engaged with it despite its basic rule-based behavior. In an interactive video disk system, visitors were invited to analyze the computer HAL's natural language capability in an excerpt of the Stanley Kubrick film 2001: A Space Odyssey. Robot sensing Museum visitors could interact with four robot sensing modalities: vision, hearing, touch, and sonar. Vision: after arranging a set of simple shapes on a board, a vision system attempted to recognize them using edge detection. Hearing: this was exemplified by a speech recognition system. Touch: visitors touched a pressure-sensitive pad that output the distribution of pressure under their fingers onto a display. Sonar: a ceiling-mounted sensor measured a visitor's height by bouncing a signal off the top of the head. Robot Theater A collection of robots was arrayed inside a theater, each of which, when highlighted in the theater's video program, lit up and, in several cases, performed movements. Mobile robots included: Shakey, a prototype Mars rover, the Stanford Cart, the quadruped Titan III from the Tokyo Institute of Technology, and a Denning mobile robot; robot arms included Unimate I, the Rancho and Stanford arms and Orm from Stanford, the Direct Drive Arm-1 from Carnegie Mellon University, and the Tentacle Arm from MIT. The Walk-Through Computer (1990, 1995) A two-story-high model of a personal computer, simulated to be working interactively. The purpose of the exhibit was to show the anatomy of a computer and to explain how the various parts work and communicate with each other. Before entering the computer's chassis, visitors could roll a giant trackball to play "World Traveller" on a giant screen. Wall-sized graphics by David Macaulay and interactive exhibits explained how all kinds of information, from text, graphics, video and music to computer programs themselves, can be represented as 1s and 0s.
Inside the giant chassis, visitors walked between a wall-sized graphics card and memory card to the microprocessor, upon which projected electron microscope imagery of a CPU's circuits in operation appeared. Further on, a set of RAM modules plugged into the motherboard included reveals showing electron microscope imagery of memory circuits. Peering into a minivan-sized hard drive, visitors could see read/write heads position themselves on either side of rotating platters. Richard Fowler was recruited from the Science Museum, London/Bradford, as exhibit designer. The exhibit garnered international publicity and more than doubled visitor traffic to the museum. People and Computers: Milestones of a Revolution (1991) Through a series of nine milestones portrayed with vignettes and interactive exhibits, this permanent exhibit portrayed computing from the punched card machines of the 1930s through the ubiquitous embedded microprocessors of the 1990s. The birth-of-the-electronic-computer milestone featured a piece of the 1951 Whirlwind I computer with an interactive exhibit explaining core memory. Machines for big business were exemplified by a UNIVAC I installation and an IBM System 360. The emergence of computer programming languages was featured in a milestone showing how, for the first time, different computers were programmed to accept a common language, COBOL. A 1970s vignette portrayed a PDP-8 minicomputer being used backstage to control theater lighting, and applications to scientific computing were shown with a CRAY-1 at the European Centre for Medium-Range Weather Forecasts. A student publishing her school newspaper using a Macintosh showed the beginning of personal computing. Tools & Toys: The Amazing Personal Computer (1992) The exhibition demonstrated eight application areas using some 40 computer stations. The first area, "Making Pictures", featured a Virtual Reality Chair among other interactive stations focusing on graphics. The other areas addressed writing, making sound, calculating, playing games, exploring information, and sharing ideas. The Networked Planet (1994) Against a backdrop of the explosive growth of the Internet, this 4,000-square-foot exhibit addressed the history, technology, and applications of the growing computer network infrastructure. Exhibits included an interactive live air traffic control display, a real-time view into stock exchange transactions, and several internet stations (not commonly found in public spaces at that time) with constantly changing selections of sample web sites to reveal the diversity of Internet applications. The Virtual FishTank (1998) In this 2,200-square-foot virtual undersea world, visitors used interactive stations located in front of a giant projection display to design their own virtual fish, and then release it into the simulated fishtank. Once in the tank, the fish behaved according to the behavioral rules chosen during its design, with surprising results. Together with a set of interactive stations, the exhibit, created in conjunction with the MIT Media Lab and Nearlife, Inc., aimed to reveal how simple behavioral rules lead to distinctive emergent behavior in complex systems such as traffic flows and city demographic distributions. Temporary and travelling exhibits The museum developed temporary exhibits, some of which traveled to other museums. BYTE Magazine Covers 1985 – The original Robert Tinney illustrations. Colors of Chaos 1986 – Brightly colored computer graphic renditions of Julia set and Mandelbrot set fractals.
On One Hand...Pocket Calculators Then and Now 1987 – From the abacus to the pocket calculator, the exhibit showed portable mechanical, electromechanical, and electronic devices. The exhibit traveled to a number of museums under the auspices of the Smithsonian Institution Traveling Exhibition Service. Terra Firma in Focus: The Art and Science of Digital Satellite Imagery 1988-9 – High-resolution false-colored digital images from the SPOT satellite. Computer Art in Context: SIGGRAPH '89 Art Show – International juried selection of art involving the use of computers. The Computer in the Studio 1994 – Contemporary computer art developed in conjunction with the DeCordova Museum and Sculpture Park. The Robotic Artist: AARON in Living Color 1994 – Harold Cohen's artificial-intelligence-based program autonomously painting color pictures representing rocks, plants, and people on a customized large-format flat-bed plotter. Wizards and Their Wonders 1998 – Photographic portraits of inventors of the Computer Age by Louis Fabian Bachrach III. Computer Clubhouse In collaboration with the MIT Media Lab, The Computer Museum launched The Computer Clubhouse in 1993 to give children from under-served inner-city communities access to computers and the chance to learn how to use and program them. Guided by adult mentors, children engaged in projects such as developing simulations, building and programming robots, and creating computer games. Spurred by a major grant from Intel Corp., a national and then international network of computer clubhouses was established. After the Museum closed in 1999, the Clubhouse moved to the Museum of Science, Boston, which also served as the headquarters of The Computer Clubhouse Network. Computer Bowl In 1988, the first annual Computer Bowl was held as a fund-raising event for The Computer Museum. The concept played upon rivalries between East Coast (especially Route 128 around Boston) and West Coast (mainly Silicon Valley) technology industries. It took the form of a live and televised (usually on Stewart Cheifet's PBS series Computer Chronicles) computer trivia contest between East and West Coast teams of industry and academic leaders, modeled somewhat on the College Bowl format. Between 1988 and the last Bowl held in 1998, team members included Marc Andreessen, John Doerr, Esther Dyson, Bill Gates, William "Bill" Joy, Mitchell Kapor, John Markoff, Patrick McGovern, Walt Mossberg, Nathan Myhrvold, Nicholas Negroponte, and John William Poduska. Special events The museum hosted a variety of special events, mostly relating to recreational computing. Examples included computer chess tournaments, partial Turing tests, the World Micromouse Contest, Core War contests, the Computer Animation Festival, the First Internet Auction, and the 25th Anniversary of Computer Games. See also List of computer museums References External links Computer History Museum The Computer Museum Archive 1979 establishments in Massachusetts 1999 disestablishments in Massachusetts 20th century in Boston Museums established in 1979 Museums disestablished in 1999 Computer museums in the United States Defunct museums in Boston History of Boston
18092766
https://en.wikipedia.org/wiki/O3Spaces
O3Spaces
O3Spaces is a document management system developed by O3Spaces B.V., a company of software engineers based in the Netherlands, and is aimed at organizations that use OpenOffice.org, StarOffice, and other ODF-centric applications as enterprise office and collaboration solutions. The product is written in Java, and based on the Tomcat server with a PostgreSQL backend (other databases are also supported). O3Spaces works by providing users a single web-based team environment, with built-in search capabilities and an optional Desktop Assistant. Its search functionality is said to work across PDF, ODF, and Microsoft Office document formats. Currently Firefox, Internet Explorer and Safari are supported. History The first preview release was presented to the public at the 2006 CeBIT tradeshow in Hannover, Germany. The first official release, 2.0, was released in December 2006. Version 2.2.0 was released in December 2007. On June 25, 2008, version 2.3.0 beta was released, adding support for the Mac OS X platform. On September 19, 2008, O3Spaces Workplace 2.3.0 was released to the public, incorporating Mac OS X support (server & client) and the Safari web browser. On January 6, 2009, O3Spaces Workplace 2.4.0 was released, incorporating email integration together with further additions. The current version is O3Spaces Workplace 4.1, which incorporates online document preview and document solutions such as scanning, contract management, template management and e-mail archiving. Features Much like SharePoint and the free Windows SharePoint Services (WSS), O3Spaces contains the concept of workspaces. These are working areas with document repositories created for a particular task or project. In addition to document storage, a workspace contains a set of standard collaboration tools. These include team calendaring and a discussion forum for communications and dispute resolution. O3Spaces Workplace provides three main entry points: a Web 2.0 AJAX browser-based environment; a desktop client with a Workplace repository file browser, the Workplace Assistant; and office suite and e-mail client plug-ins for OpenOffice.org/StarOffice, Microsoft Office, Microsoft Outlook and Mozilla Thunderbird. In addition, O3Spaces Workplace delivers template management; repository access based on open standards (WebDAV, OpenSearch, CMIS); document security (role-based access control, secure connections, backup, restore and archiving); LDAP integration; and integration into Zimbra and Zarafa. The repository can be accessed in several ways. External applications can access the repository by using the WebDAV or the CMIS protocol. End users can control the repository using a web browser or using the desktop client. Browser based The browser-based repository access is a full AJAX application (built on the Echo 3 framework). The browser environment is split into a so-called Studio environment and a Spaces environment. The Studio environment is the entry point for repository administrators; the Spaces web application is the end-user entry point. Desktop Client The desktop client (called the Workplace Assistant) can be used to access the repository without a web browser. This desktop client can install the appropriate plugins into supported office suites. Cross platform O3Spaces Workplace is available for several platforms, including Linux, Solaris, Windows and Mac OS X. O3Spaces Workplace has several partners supporting its technology, including Mandriva, Sun Microsystems Inc., Xandros and Translucent Technologies.
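As mentioned above, the repository is reachable over standard WebDAV, so any generic WebDAV client can list and fetch documents. The following Python sketch illustrates the idea using the requests library; the server URL, workspace name, credentials, and document path are hypothetical placeholders, and the actual WebDAV root of an O3Spaces installation may differ.

```python
import requests
from xml.etree import ElementTree

BASE = "https://workplace.example.com/webdav/ProjectSpace"  # hypothetical server and workspace
AUTH = ("alice", "secret")                                   # hypothetical credentials

# A WebDAV PROPFIND with Depth: 1 lists the immediate children of a collection.
resp = requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "1"})
resp.raise_for_status()

# The 207 Multi-Status reply contains one href element per resource.
tree = ElementTree.fromstring(resp.content)
for node in tree.iter("{DAV:}href"):
    print(node.text)

# Fetching a document body is a plain GET on its href.
doc = requests.get(BASE + "/report.odt", auth=AUTH)
doc.raise_for_status()
with open("report.odt", "wb") as f:
    f.write(doc.content)
```

Because WebDAV is also what most operating systems' network-folder clients speak, such a repository can usually be mounted as a network drive without any product-specific software.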
Configurations The package comes in three different configurations: On Demand, Enterprise, and Workgroup. The On Demand configuration is a Software as a Service (SaaS) model, can be accessed from anywhere, and includes feature upgrades through the life of the contract. The Workgroup and Enterprise editions are deployed in the company network and can also be used with secure Internet access. Notes See also Document management system References http://www.eweek.com/c/a/Linux-and-Open-Source/OpenOffice-Extension-Rivals-SharePoint/ http://www.cmswire.com/cms/document-management/o3spaces-challenges-moss-for-team-collaboration-000932.php https://web.archive.org/web/20080518053730/http://w3.linux-magazine.com/issue/80/Extending_OOo_with_O3Spaces.pdf http://www.linux.com/feature/119873 https://web.archive.org/web/20070530044106/http://www.linuxworld.com.au/index.php?id=2000096005 https://web.archive.org/web/20100228005857/http://www.xandros.com/news/press_releases/Xandros_Server_to_Provide_Enterprise-Grade_O3Spaces_OpenDocument_Collaboration.html Document management systems Collaborative software Java platform software Privately held companies of the Netherlands
45444629
https://en.wikipedia.org/wiki/Apple%20electric%20car%20project
Apple electric car project
The Apple electric car project (codenamed "Titan") is an electric car project undergoing research and development by Apple Inc. Apple has yet to openly discuss any of its self-driving research, but around 5,000 employees were reported to be working on the project. In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform. In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars. In 2020, Apple was believed to still be working on self-driving-related hardware, software and services as a potential product, rather than an actual Apple-branded car. In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024, but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later. History 2014–2015 The project was rumored to have been approved by Apple CEO Tim Cook in late 2014 and assigned to Vice President Steve Zadesky, a former Ford engineer, as project lead. For the project, Apple was rumored to have hired Johann Jungwirth, the former president and chief executive of Mercedes-Benz Research and Development North America, as well as at least one transmission engineer. By February 2015, it was rumored that a substantial number of Apple employees were working on an electric car project, with Apple hiring new employees for the project as well. Reports in February 2015 indicated that the company had been offering incentives to Tesla employees to join Apple. In February 2015, The Wall Street Journal reported that the product would resemble more of a minivan than a car, and The Sydney Morning Herald said at that time that production could start as soon as 2020. In February 2015, Apple board member Mickey Drexler stated that Apple co-founder and CEO Steve Jobs had plans to design and build a car, and that discussions about the concept surfaced around the time that Tesla Motors debuted its first car in 2008. In November 2015, former Apple iPod senior VP Tony Fadell confirmed that Steve Jobs was interested in an Apple car back in 2008, shortly after the original iPhone was introduced. In May 2015, Apple investor Carl Icahn stated that he believed rumors that Apple would enter the automobile market in 2020, and that logically Apple would view this car as "the ultimate mobile device". In August 2015, The Guardian reported that Apple was meeting with officials from GoMentum Station, a testing ground for connected and autonomous vehicles at the former Concord Naval Weapons Station in Concord, California. In September 2015, there were reports that Apple was meeting with self-driving car experts from the California Department of Motor Vehicles. According to The Wall Street Journal in September 2015, the vehicle would be a battery electric vehicle, initially lacking full autonomous driving capability, with a possible unveiling around 2019. In October 2015, Tim Cook stated about the car industry that: "It would seem like there will be massive change in that industry, massive change. You may not agree with that. That's what I think... We'll see what we do in the future. I do think that the industry is at an inflection point for massive change."
Cook enumerated ways that the modern descendants of the Ford Model T would be shaken to the very chassis—the growing importance of software in the car of the future, the rise of autonomous vehicles, and the shift from an internal combustion engine to electrification. In November 2015, various websites reported that suspected Apple front SixtyEight Research had attended an auto body conference in Europe. Also in November 2015, after the little-known EV startup Faraday Future announced a $1 billion U.S. factory project, some speculated that it might actually be a front for Apple's secret car project. In late 2015, Apple contracted Torc Robotics to retrofit two Lexus SUVs with sensors in a project known internally as Baja. 2016 In 2016, Tesla Motors CEO Elon Musk stated that Apple would probably make a compelling electric car: "It's pretty hard to hide something if you hire over a thousand engineers to do it." In May 2016, there were reports indicating Apple was interested in electric car charging stations. The Wall Street Journal reported on July 25, 2016, that Apple had convinced retired senior hardware engineering executive Bob Mansfield to return and take over the Titan project. A few days later, on July 29, Bloomberg Technology reported that Apple had hired Dan Dodge, the founder and former chief executive officer of QNX, BlackBerry Ltd.'s automotive software division. According to Bloomberg, Dodge's hiring heralded a shift in emphasis at Apple's Project Titan, in which the company would give first priority to creating software for autonomous vehicles. However, the story said that Apple would continue to develop a vehicle of its own. On September 9, The New York Times reported dozens of layoffs in an effort to reboot, presumably from a team still numbering around 1,000. The following week, reports surfaced that Magna International, a contract vehicle manufacturer, had a small team working at Apple's Sunnyvale lab. October 2016 reports claimed the Titan project had a 2017 deadline to determine its fate: prove its practicality and viability, and set a final direction. 2017 After a period of no new reports, car project news flared up again in mid-April 2017, as word spread that Apple was permitted to test autonomous vehicles on California roads. In mid-June, Tim Cook, in an interview with Bloomberg TV, said Apple was "focusing on autonomous systems", though not necessarily leading to an actual Apple car product, leaving speculation about Apple's role in the convergence of three disruptive "vectors of change": autonomous systems, electric vehicles and ride-sharing services. In mid-August, various sources reported that the car project was focusing on autonomous systems, now expected to test its technology in the real world using a company-operated inter-campus shuttle service between the main Infinite Loop campus in Cupertino and various Silicon Valley offices, including the new Apple Park. Then at the end of August, around 17 former Titan team members, braking and suspension engineers with Detroit experience, were hired by autonomous vehicle startup Zoox. In November 2017, Apple employees Yin Zhou and Oncel Tuzel published a paper on VoxelNet, which uses a convolutional neural network to detect three-dimensional objects using lidar.
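The voxelization step that gives VoxelNet its name can be illustrated with a short sketch: the lidar point cloud is partitioned into a regular 3D grid, the points falling into each occupied cell are grouped together, and the network then learns per-voxel features and applies 3D convolutions. The Python sketch below shows only the grouping step; the voxel dimensions are arbitrary illustrative values, not the parameters from the published paper.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=(0.2, 0.2, 0.4)):
    """Group lidar points (an N x 3 array of x, y, z coordinates) by the
    grid cell each point falls into. Returns a dict mapping integer voxel
    indices to the list of points inside that voxel."""
    indices = np.floor(points / np.asarray(voxel_size)).astype(np.int32)
    voxels = defaultdict(list)
    for idx, point in zip(map(tuple, indices), points):
        voxels[idx].append(point)
    return voxels

# A random cloud of 1,000 points in a 20 m cube, just to exercise the code.
cloud = np.random.uniform(0.0, 20.0, size=(1000, 3))
grid = voxelize(cloud)
print(f"{len(grid)} occupied voxels")
```

Only occupied voxels are kept, which is part of what makes this representation efficient for sparse lidar data.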
Transportation/tech website Jalopnik reported in late November that Apple was recruiting automotive test engineering and tech talent for autonomous systems work, and appeared to be surreptitiously leasing, via third parties, a former Fiat Chrysler proving grounds site in Surprise, Arizona (originally Wittman). Also in 2017, The New York Times suggested that Apple had stopped developing its own self-driving car. In response to such reports, Apple CEO Tim Cook acknowledged publicly that year that the company was working on autonomous-car technology. 2018 In January 2018, the company registered 27 self-driving vehicles with California's Department of Motor Vehicles. In May 2018, an article in The New York Times reported major project news. After proposed partnership arrangements with Germany's high-end brands BMW and Mercedes-Benz failed, as did potential alliances with Nissan, BYD Auto, McLaren Automotive, and others, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform. The T6 Transporters would be transformed into autonomous electric versions at VW's Italdesign subsidiary in Turin, Italy, with the frame, wheels, and chassis remaining the same. While Apple does its best to keep its autonomous vehicle plans secret, regulatory filings do provide some factual insight into its activities. In September 2018, Apple was reportedly in third place in the number of California autonomous vehicle permits with 70, behind GM's Cruise (175) and Alphabet's Waymo (88). On July 7, 2018, a former Apple employee was arrested by the FBI for allegedly stealing trade secrets about Apple's self-driving car project. He was charged by federal prosecutors. The criminal complaint against the former employee revealed that at that time, Apple still had yet to openly discuss any of its self-driving research, with around 5,000 employees disclosed as working on the project. In August 2018, Doug Field, formerly senior vice president of engineering at Tesla, became the leader of Apple's Titan team. On August 24, 2018, it was reported that one of Apple's self-driving cars had apparently been involved in a crash, when it was rear-ended during road-testing. The crash occurred while the car was at a stop, waiting to merge into traffic about 3.5 miles from Apple's headquarters in Cupertino, with no reported injuries. At the time, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars. In August 2018, there were reports of an Apple patent for a system that warns riders ahead of time about what an autonomous car will do, purportedly to alleviate the discomfort of surprise. 2019 In January 2019, Apple laid off more than 200 employees from its 'Project Titan' autonomous vehicle team. In June 2019, Apple acquired autonomous vehicle startup Drive.ai. 2020 In early December, Bloomberg reported that Apple artificial intelligence lead John Giannandrea was overseeing Apple Car development, as prior lead Bob Mansfield had retired. A few weeks later, Reuters reported that Apple was working towards a possible launch date of 2024, according to two unnamed insiders. 2021 On January 8, the Korea Economic Daily reported that Hyundai was in early discussions with Apple to jointly develop and produce self-driving electric cars. On January 12, The Verge reported that Apple had held talks with EV startup Canoo in 2020. The two companies discussed options ranging from investment to an acquisition.
Some weeks later, in late January, Apple announced some upper-level engineering changes, leading some Apple-watchers to speculate whether Dan Riccio's "new chapter at Apple" might indicate leadership of the Titan project (or something altogether unrelated, such as an augmented/virtual reality headset or deluxe noise-cancelling headphones). By early February, it appeared that Apple was close to a $3.59B deal with Hyundai to use its Kia Motors West Point, Georgia manufacturing plant for the car, a fully autonomous machine without a driver's seat. However, in February 2021, Hyundai and Kia confirmed that they were not in talks with Apple to develop a car. Adding further credence to Apple's automotive aspirations, Business Insider Deutschland (Germany) reported that Apple had hired Porsche's VP of chassis development, Dr. Manfred Harrer. After Financial Times rumors that Apple was talking to several Japanese car companies about the Apple Car project following the Hyundai-Kia rumor, Nissan told Reuters that it was not in any of these discussions. The next Apple Car speculation was that Apple was shopping around for lidar navigation sensor suppliers for its project. An industry source told The Korea Times that Apple was working in Korea to build up its supply chain. Later in 2021, Apple was reportedly in talks with Toyota as well as Korean partners for production to commence in 2024. Alleged employees and affiliates Jamie Carlson, a former engineer on Tesla's Autopilot self-driving car program. After he left Tesla for Apple, he left Apple to work with Chinese automaker NIO on their NIO Pilot autonomous driving platform. Most recently he has returned to Apple special projects. Megan McClain, a former Volkswagen AG engineer with expertise in automated driving. Vinay Palakkode, a graduate researcher at Carnegie Mellon University, a hub of automated driving research. Xianqiao Tong, an engineer who developed computer vision software for driver assistance systems at microchip maker Nvidia Corp. Paul Furgale, former deputy director of the Autonomous Systems Lab at the Swiss Federal Institute of Technology in Zurich. Sanjai Massey, an engineer with experience in developing connected and automated vehicles at Ford and several suppliers. Stefan Weber, a former Bosch engineer with experience in video-based driver assistance systems. Lech Szumilas, a former Delphi research scientist with expertise in computer vision and object detection. Anup Vader, formerly a Caterpillar autonomous systems thermal engineer, who left Apple in April 2019 to join the autonomous vehicle startup Zoox. Doug Betts, former global quality leader at Fiat Chrysler. Johann Jungwirth, former CEO of Mercedes-Benz Research & Development, North America, Inc. – left for VW in Nov. 2015. Mujeeb Ijaz, a former Ford Motor Co. engineer, who founded A123 Systems' Venture Technologies division, which focused on materials research, electrical battery cell product development and advanced concepts (and who helped recruit four to five staff researchers from A123, a battery technology company). Nancy Sun, formerly vice president of electrical engineering at electric motorcycle company Mission Motors in San Francisco. Mark Sherwood, formerly director of powertrain systems engineering at Mission Motors. Eyal Cohen, formerly vice president of software and electrical engineering at Mission Motors. Jonathan Cohen, former director of Nvidia's deep learning software.
Nvidia uses deep learning in its Nvidia Drive PX platform, which is used in driver assistance systems. Chris Porritt – former Tesla vice president of vehicle engineering and former Aston Martin chief engineer. Alex Hitzinger – a German engineer who, until March 31, 2016, was the technical director of the Porsche LMP1 project. He previously worked as Head of Advanced Technologies for the Red Bull and Toro Rosso Formula One teams. In January 2019 he left to head the technical department of VW's commercial vehicles division. Benjamin Lyon, sensor expert, manager and founding team member, who reported directly to Doug Field; he left Apple for a chief engineer position at "satellite and space startup" Astra in February 2021. See also References Apple Inc. hardware Proposed vehicles
47721887
https://en.wikipedia.org/wiki/Taras%20Kulakov
Taras Kulakov
Taras Vladimirovich Kulakov (born March 11, 1987), better known as CrazyRussianHacker, is a Russian-American YouTuber of Ukrainian descent. He became known for his videos on "life hacks", technology and scientific demonstrations, with the catchphrase "Safety is number one priority". He dropped the catchphrase in 2019 and stopped making life-hack and scientific-experiment videos, concentrating instead on gadget review videos, silver coin videos and junk box videos on both his main and second channels, but he began using the catchphrase again in February 2021. Kulakov's YouTube channel, "CrazyRussianHacker", created in 2012, has over 2.79 billion views and 11.5 million subscribers (as of August 2021) and is one of the platform's top 500 channels. He has a second YouTube channel, "Taras Kul", with over 3.7 million subscribers (as of August 2021). His third YouTube channel, "Kul Farm", has 370,000 subscribers (as of August 2021). Personal life Kulakov was born in the Ukrainian SSR (now Ukraine) to a Ukrainian father and a Russian mother. He has been a competitive swimmer since 1996. In 2006, he moved to Asheville, North Carolina with his family, where he worked at Walmart until 2012 while developing his early YouTube channels. In a Q&A video, he clarified that the last place he lived before moving to the United States was the city of Donetsk in eastern Ukraine, where Russian is the majority language. As he grew up speaking Russian and not knowing much Ukrainian, he considers himself Russian. He has two brothers and three sisters, as stated in his September 2016 Q&A video, and also claims to have a half-brother and a half-sister. He lives with his wife Katherine and daughter Alice. Videos Kulakov is a very active YouTuber, releasing a video every one to three days. As of June 2021, he has over 11 million subscribers for his CrazyRussianHacker channel. His videos range from life hacks to chemiluminescence reactions. He is known for his sense of humour and Russian accent. He created his first channel, "origami768", on October 14, 2009, for origami tutorial videos. Later, he renamed this channel to "Taras Kul", which he uses as a second channel. His second attempt at YouTube fame came with "SlomoLaboratory", later renamed "Slow Mo Lab", alongside his brother Dima. He achieved his greatest popularity with his third channel, CrazyRussianHacker, in 2012. In 2014 he started another YouTube channel, where he posted gun videos. With the new YouTube community guidelines starting mid-2017, he deleted the gun content and renamed this channel to "Kul Farm"; he has regularly been posting videos about his pets for his 371 thousand subscribers on this channel. Videos from his channel were broadcast in a television program called The Laboratory with CRH on TBD; the show first aired on July 18, 2018. Many of Kulakov's videos feature product demonstrations presented as scientific experiments. References External links Living people American YouTubers Russian YouTubers Russian emigrants to the United States Russian people of Ukrainian descent People from Asheville, North Carolina Online edutainment 1987 births DIY YouTubers Educational and science YouTubers
31423710
https://en.wikipedia.org/wiki/NCH%20Software
NCH Software
NCH Software is an Australian software company founded in 1993 in Canberra, Australia. The Colorado office was started in April 2008 due to the large U.S. customer base. The firm primarily sells to individuals via its website. Software products NCH Software provides software programs for audio, video, business, dictation and transcription, graphics, telephony and other utilities. On September 26, 2014, cnet.com showed that its most frequently downloaded program from NCH Software was WavePad Sound Editor Masters Edition. VideoPad is the firm's video editing application for the home and professional market. It is part of a suite that integrates with other software created by the company. This other software includes WavePad, a sound-editing program; MixPad, a sound-mixing program; PhotoPad, a photo and image editor; Prism, a video format converter; Express Burn, disc burning software; Switch, an audio format converter; Express Scribe, transcription software; and Debut, a screen recorder and video capture program. Controversy During 2013, some computer security companies categorized NCH software as bloatware because it bundled the Google Toolbar. In July 2015, NCH Software announced it was no longer bundling the Google toolbar. As of November 30, 2015, NCH Software was marked clean by all major antivirus products. References External links NCH Software Official Site NCH Software Audio Site Software companies of Australia Software companies established in 1993 Privately held companies of Australia Companies based in Canberra Australian companies established in 1993
474702
https://en.wikipedia.org/wiki/Skipjack%20%28cipher%29
Skipjack (cipher)
In cryptography, Skipjack is a block cipher—an algorithm for encryption—developed by the U.S. National Security Agency (NSA). Initially classified, it was originally intended for use in the controversial Clipper chip. Subsequently, the algorithm was declassified. History of Skipjack Skipjack was proposed as the encryption algorithm in a US government-sponsored scheme of key escrow, and the cipher was provided for use in the Clipper chip, implemented in tamperproof hardware. Skipjack is used only for encryption; the key escrow is achieved through the use of a separate mechanism known as the Law Enforcement Access Field (LEAF). The algorithm was initially secret, and was regarded with considerable suspicion by many for that reason. It was declassified on 24 June 1998, shortly after its basic design principle had been discovered independently by the public cryptography community. To ensure public confidence in the algorithm, several academic researchers from outside the government were called in to evaluate the algorithm (Brickell et al., 1993). The researchers found no problems with either the algorithm itself or the evaluation process. Moreover, their report gave some insight into the (classified) history and development of Skipjack: [Skipjack] is representative of a family of encryption algorithms developed in 1980 as part of the NSA suite of "Type I" algorithms... Skipjack was designed using building blocks and techniques that date back more than forty years. Many of the techniques are related to work that was evaluated by some of the world's most accomplished and famous experts in combinatorics and abstract algebra. Skipjack's more immediate heritage dates to around 1980, and its initial design to 1987...The specific structures included in Skipjack have a long evaluation history, and the cryptographic properties of those structures had many prior years of intense study before the formal process began in 1987. In March 2016, NIST published a draft of its cryptographic standard that no longer certifies Skipjack for US government applications. Description Skipjack uses an 80-bit key to encrypt or decrypt 64-bit data blocks. It is an unbalanced Feistel network with 32 rounds. It was designed to be used in secured phones. Cryptanalysis Eli Biham and Adi Shamir discovered an attack against 16 of the 32 rounds within one day of declassification, and (with Alex Biryukov) extended this to 31 of the 32 rounds (but with an attack only slightly faster than exhaustive search) within months using impossible differential cryptanalysis. A truncated differential attack was also published against 28 rounds of the Skipjack cipher. A claimed attack against the full cipher was published in 2002, but a later paper with the attack's designer as a co-author clarified in 2009 that no attack on the full 32-round cipher was then known. In pop culture An algorithm named Skipjack forms part of the back-story to Dan Brown's 1998 novel Digital Fortress. In Brown's novel, Skipjack is proposed as the new public-key encryption standard, along with a back door secretly inserted by the NSA ("a few lines of cunning programming") which would have allowed them to decrypt Skipjack using a secret password and thereby "read the world's email". When details of the cipher are publicly released, programmer Greg Hale discovers and announces details of the backdoor.
In real life there is evidence to suggest that the NSA has added back doors to at least one algorithm; the Dual_EC_DRBG random number algorithm may contain a backdoor accessible only to the NSA. Additionally, in the Half-Life 2 modification Dystopia, the "encryption" program used in cyberspace apparently uses both the Skipjack and Blowfish algorithms. References Further reading External links SCAN's entry for the cipher FIPS 185 Escrowed Encryption Standard (EES) Type 2 encryption algorithms National Security Agency cryptography
48144
https://en.wikipedia.org/wiki/Microcomputer
Microcomputer
A microcomputer is a small, relatively inexpensive computer having a central processing unit (CPU) made out of a microprocessor. The computer also includes memory and input/output (I/O) circuitry together mounted on a printed circuit board (PCB). Microcomputers became popular in the 1970s and 1980s with the advent of increasingly powerful microprocessors. The predecessors to these computers, mainframes and minicomputers, were comparatively much larger and more expensive (though indeed present-day mainframes such as the IBM System z machines use one or more custom microprocessors as their CPUs). Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers (in the generic sense). An early use of the term personal computer in 1962 predates microprocessor-based designs. (See "Personal Computer: Computers at Companies" reference below.) A microcomputer used as an embedded control system may have no human-readable input and output devices. "Personal computer" may be used generically or may denote an IBM PC compatible machine. The abbreviation micro was common during the 1970s and 1980s, but has now fallen out of common usage.

Origins
The term microcomputer came into popular use after the introduction of the minicomputer, although Isaac Asimov used the term in his short story "The Dying Night" as early as 1956 (published in The Magazine of Fantasy and Science Fiction in July that year). Most notably, the microcomputer replaced the many separate components that made up the minicomputer's CPU with one integrated microprocessor chip.

In 1973, the French Institut National de la Recherche Agronomique (INRA) was looking for a computer able to measure agricultural hygrometry. To answer this request, a team of French engineers at the computer technology company R2E, led by its Head of Development, François Gernelle, created the first commercially available microprocessor-based microcomputer, the Micral N. The same year the company filed its patents with the term "Micro-ordinateur", a literal equivalent of "Microcomputer", to designate a solid-state machine designed with a microprocessor.

In the US the earliest models, such as the Altair 8800, were often sold as kits to be assembled by the user, and came with as little as 256 bytes of RAM and no input/output devices other than indicator lights and switches, useful as a proof of concept to demonstrate what such a simple device could do. As microprocessors and semiconductor memory became less expensive, microcomputers grew cheaper and easier to use. Increasingly inexpensive logic chips such as the 7400 series allowed cheap dedicated circuitry for improved user interfaces such as keyboard input, instead of simply a row of switches to toggle bits one at a time. Use of audio cassettes for inexpensive data storage replaced manual re-entry of a program every time the device was powered on. Large cheap arrays of silicon logic gates in the form of read-only memory and EPROMs allowed utility programs and self-booting kernels to be stored within microcomputers. These stored programs could automatically load further, more complex software from external storage devices without user intervention, to form an inexpensive turnkey system that does not require a computer expert to understand or to use the device.
Random-access memory became cheap enough to afford dedicating approximately 1–2 kilobytes of memory to a video display controller frame buffer, for a 40×25 or 80×25 text display (40 × 25 = 1,000 character cells, so roughly 1 KB at one byte per character; 80 × 25 = 2,000 cells, roughly 2 KB) or blocky color graphics on a common household television. This replaced the slow, complex, and expensive teletypewriter that was previously common as an interface to minicomputers and mainframes.

All these improvements in cost and usability resulted in an explosion in their popularity during the late 1970s and early 1980s. A large number of computer makers packaged microcomputers for use in small business applications. By 1979, many companies such as Cromemco, Processor Technology, IMSAI, North Star Computers, Southwest Technical Products Corporation, Ohio Scientific, Altos Computer Systems, Morrow Designs and others produced systems designed for resourceful end users or consulting firms to deliver business systems such as accounting, database management and word processing to small businesses. This allowed businesses unable to afford the leasing of a minicomputer or time-sharing service the opportunity to automate business functions, without (usually) hiring a full-time staff to operate the computers. A representative system of this era would have used an S100 bus, an 8-bit processor such as an Intel 8080 or Zilog Z80, and either the CP/M or MP/M operating system.

The increasing availability and power of desktop computers for personal use attracted the attention of more software developers. As the industry matured, the market for personal computers standardized around IBM PC compatibles running DOS, and later Windows.

Modern desktop computers, video game consoles, laptops, tablet PCs, and many types of handheld devices, including mobile phones, pocket calculators, and industrial embedded systems, may all be considered examples of microcomputers according to the definition given above.

Colloquial use of the term
By the early 2000s, everyday use of the expression "microcomputer" (and in particular "micro") had declined significantly from its peak in the mid-1980s. The term is most commonly associated with the most popular all-in-one 8-bit home computers (such as the Apple II, ZX Spectrum, Commodore 64, BBC Micro, and TRS-80) and small-business CP/M-based microcomputers. Because an increasingly diverse range of devices based on modern microprocessors lacks the most common characteristic of "microcomputers" (an 8-bit data bus), such devices are not referred to as microcomputers in everyday speech.

In colloquial usage, "microcomputer" has been largely supplanted by the term "personal computer" or "PC", which specifies a computer that has been designed to be used by one individual at a time, a term first coined in 1959. IBM first promoted the term "personal computer" to differentiate the IBM PC from CP/M-based microcomputers likewise targeted at the small-business market, and also from IBM's own mainframes and minicomputers. However, following its release, the IBM PC itself was widely imitated, as was the term. The component parts were commonly available to producers and the BIOS was reverse-engineered through cleanroom design techniques. IBM PC compatible "clones" became commonplace, and the terms "personal computer", and especially "PC", stuck with the general public, often specifically for a computer compatible with DOS (or nowadays Windows).

Description
Monitors, keyboards and other devices for input and output may be integrated or separate.
Computer memory in the form of RAM and at least one other, less volatile, memory storage device are usually combined with the CPU on a system bus in one unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices). Microcomputers are designed to serve only one user at a time, although they can often be modified with software or hardware to concurrently serve more than one user. Microcomputers fit well on or under desks or tables, so that they are within easy access of users. Bigger computers like minicomputers, mainframes, and supercomputers take up large cabinets or even dedicated rooms.

A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) perform tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases as an external unit). Later, secondary storage (particularly in the form of floppy disk and hard disk drives) was built into the microcomputer case.

History

TTL precursors
Although they did not contain any microprocessors, being built instead around transistor-transistor logic (TTL), Hewlett-Packard calculators as far back as 1968 had various levels of programmability comparable to microcomputers. The HP 9100B (1968) had rudimentary conditional (if) statements, statement line numbers, jump statements (go to), registers that could be used as variables, and primitive subroutines. The programming language resembled assembly language in many ways. Later models incrementally added more features, including the BASIC programming language (HP 9830A in 1971). Some models had tape storage and small printers. However, displays were limited to one line at a time. The HP 9100A was referred to as a personal computer in an advertisement in a 1968 Science magazine, but that advertisement was quickly dropped. HP was reluctant to sell them as "computers" because the perception at that time was that a computer had to be big to be powerful, and thus decided to market them as calculators. Additionally, at that time, people were more likely to buy calculators than computers, and purchasing agents also preferred the term "calculator" because purchasing a "computer" required additional layers of purchasing-authority approvals.

The Datapoint 2200, made by CTC in 1970, was also comparable to microcomputers. While it contains no microprocessor, the instruction set of its custom TTL processor was the basis of the instruction set for the Intel 8008, and for practical purposes the system behaves approximately as if it contains an 8008. This is because Intel was the contractor in charge of developing the Datapoint's CPU, but ultimately CTC rejected the 8008 design because it needed 20 support chips.

Another early system, the Kenbak-1, was released in 1971. Like the Datapoint 2200, it used small-scale integrated transistor–transistor logic instead of a microprocessor. It was marketed as an educational and hobbyist tool, but it was not a commercial success; production ceased shortly after introduction.
Early microcomputers
In late 1972, a French team headed by François Gernelle within a small company, Réalisations & Etudes Electroniques (R2E), developed and patented a computer based on a microprocessor, the 8-bit Intel 8008. This Micral-N was marketed in early 1973 as a "Micro-ordinateur" or microcomputer, mainly for scientific and process-control applications. About a hundred Micral-N units were installed in the next two years, followed by a new version based on the Intel 8080. Meanwhile, another French team developed the Alvan, a small computer for office automation which found clients in banks and other sectors. The first version was based on LSI chips with an Intel 8008 as peripheral controller (keyboard, monitor and printer), before adopting the Zilog Z80 as its main processor.

In late 1972, a Sacramento State University team led by Bill Pentz built the Sac State 8008 computer, able to handle thousands of patients' medical records. The Sac State 8008 was designed with the Intel 8008. It had a full set of hardware and software components: a disk operating system included in a series of programmable read-only memory chips (PROMs); 8 kilobytes of RAM; IBM's Basic Assembly Language (BAL); a hard drive; a color display; a printer output; a 150 bit/s serial interface for connecting to a mainframe; and even the world's first microcomputer front panel.

In early 1973, Sord Computer Corporation (now Toshiba Personal Computer System Corporation) completed the SMP80/08, which used the Intel 8008 microprocessor. The SMP80/08, however, did not have a commercial release. After the first general-purpose microprocessor, the Intel 8080, was announced in April 1974, Sord announced the SMP80/x, the first microcomputer to use the 8080, in May 1974.

Virtually all early microcomputers were essentially boxes with lights and switches; one had to read and understand binary numbers and machine language to program and use them (the Datapoint 2200 was a striking exception, bearing a modern design based on a monitor, keyboard, and tape and disk drives). Of the early "box of switches"-type microcomputers, the MITS Altair 8800 (1975) was arguably the most famous. Most of these simple, early microcomputers were sold as electronic kits: bags full of loose components which the buyer had to solder together before the system could be used.

The period from about 1971 to 1976 is sometimes called the first generation of microcomputers. Many companies, such as DEC, National Semiconductor, and Texas Instruments, offered their microcomputers for use in terminal control, peripheral device interface control and industrial machine control. There were also machines for engineering development and hobbyist personal use. In 1975, the Processor Technology SOL-20 was designed; it consisted of a single board which included all the parts of the computer system. The SOL-20 had built-in EPROM software which eliminated the need for rows of switches and lights. The MITS Altair, just mentioned, played an instrumental role in sparking significant hobbyist interest, which itself eventually led to the founding and success of many well-known personal computer hardware and software companies, such as Microsoft and Apple Computer. Although the Altair itself was only a mild commercial success, it helped spark a huge industry.
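The "boxes with lights and switches" point above can be made concrete. The bytes below are genuine Intel 8080 opcode encodings for a trivial add-two-numbers program, the kind of sequence an operator would have deposited into memory one address at a time from a front panel; the accompanying interpreter is a deliberately minimal sketch that understands only these three instructions, not an 8080 emulator.

```python
# What "programming with switches" meant in practice: raw opcode bytes
# were deposited into memory one address at a time. These are genuine
# Intel 8080 encodings; the interpreter below is a minimal sketch.

program = [
    0x3E, 0x05,   # MVI A, 5   -- load 5 into the accumulator
    0x06, 0x0A,   # MVI B, 10  -- load 10 into register B
    0x80,         # ADD B      -- A := A + B
    0x76,         # HLT        -- halt
]

def run(mem):
    a = b = pc = 0
    while True:
        op = mem[pc]
        if op == 0x3E:   a = mem[pc + 1]; pc += 2
        elif op == 0x06: b = mem[pc + 1]; pc += 2
        elif op == 0x80: a = (a + b) & 0xFF; pc += 1
        elif op == 0x76: return a
        else: raise ValueError(f"opcode {op:#04x} not in this sketch")

print(run(program))   # -> 15
```

Entering even these six bytes meant setting eight address/data switches and pressing a deposit button for each one, which is why keyboards, ROM-resident monitors, and cassette storage were such significant usability improvements.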
Home computers
By 1977, the introduction of the second generation, known as home computers, made microcomputers considerably easier to use than their predecessors, whose operation often demanded thorough familiarity with practical electronics. The ability to connect to a monitor (screen) or TV set allowed visual manipulation of text and numbers. The BASIC language, which was easier to learn and use than raw machine language, became a standard feature. These features were already common in minicomputers, with which many hobbyists and early producers were familiar.

In 1979, the launch of the VisiCalc spreadsheet (initially for the Apple II) first turned the microcomputer from a hobby for computer enthusiasts into a business tool. After the 1981 release by IBM of its IBM PC, the term personal computer became generally used for microcomputers compatible with the IBM PC architecture (PC compatible).

See also
History of computing hardware (1960s–present)
Lists of microcomputers
Mainframe computer
Market share of personal computer vendors
Minicomputer
Personal computer
SFF computer
Supercomputer

Notes and references

Microcomputer Computers
5644999
https://en.wikipedia.org/wiki/IBM%20AN/FSQ-31%20SAC%20Data%20Processing%20System
IBM AN/FSQ-31 SAC Data Processing System
The IBM AN/FSQ-31 SAC Data Processing System (FSQ-31, Q-31, colloq.) was a USAF command, control, and coordination system for the Cold War Strategic Air Command (SAC). IBM's Federal Systems Division was the prime contractor for the AN/FSQ-31s, which were part of the TBD 465L SAC Automated Command and Control System (SACCS), a "Big L" system of systems (cf. 416L SAGE & 474L BMEWS) which had numerous sites throughout the Continental United States: "all SAC command posts and missile LCC's" (e.g., The Notch), a communication network, etc.; and the several FSQ-31 sites, including Offutt AFB's "Headquarters SAC Command Center" (DPC 1 & DPC 2 units), March AFB's 15AF Combat Operations Center (DPC 3), and Barksdale AFB by March 1983. The FSQ-31 provided data to a site's Data Display Central (DDC), "a wall display" (e.g., Iconorama), and the FSQ-31 replaced the TBD at Offutt in 1960. On February 20, 1987, "SAC declared initial operational capability for the SAC Digital Network [which] upgraded the SAC Automated Command and Control system".

Description
The FSQ-31 included:
IBM 4020 Military Computer with Programming and Numerical System and "Arithmetic Unit including storage access", liquid-cooled ferrite core storage (65,536 words), High-Speed Input/Output to the drum memory system, and the Low-Speed Input/Output section to interface with several different devices:
Electronic Data Transmission Communications Central (EDTCC) at 4 "zone-of-interior headquarters bases" for EDT with "outlying" Remote Communications Centrals (e.g., routing "to RCC's, computer (DPC's), or the display devices.")
Tape Controllers 1 and 2, connected to 16 IBM 729-V Tape Drives
Disk File Controller, which was a modified Tape Controller, connected to the Bryant PH 2000 Disk File, which had 24 disks that were 39 inches in diameter and 125 hydraulically actuated read/write heads, for a total capacity of 26 MB
IBM 1401, which controlled data transfers from unit-record equipment:
IBM 1402 Card Reader/Punch
IBM 1403 Line Printer
2 IBM 729-V Tape Drives
2 IBM Selectric Typewriters (I/O Typewriters), one of which was used for operational messages and the other for diagnostic messages and maintenance activities
Advanced Display Console
Drum memory system with controller and two vertical drum memory devices. Each drum read and wrote 50 bits at a time in parallel, so transferring data could be done quickly. The drums were organized as 17 fields with 8,192 words per field, for a total capacity of 139,264 words. The motors that rotated the drums required 208 VAC at 45 Hz, so a motor-generator unit was required to change the frequency from 60 Hz. This added to the noise level in the computer room.
Rockwell-Collins modem
Water chilling system for maintaining the liquid coolant temperature in the IBM 4020

SACCS systems outside of the AN/FSQ-31 included the Subnet Communications Processor and the SACCS Software Test (SST) Facility at the Offutt command center (the backup SCP was at Barksdale AFB). SAC's QOR for the National Survivable Communications System (NSCS) was issued September 13, 1958; and in September 1960 the "installation of a SAC display warning system" included 3 consoles in the Offutt command center.

Memory
The Q-31s were equipped with four 16-kiloword memory banks. The memory banks were oil and water cooled.
Also considered part of the memory subsystem, in that they were addressed via fixed reserved memory addresses, were four 48-position switch banks, into which a short program could be entered, and a plugboard similar to the one used in IBM unit record equipment, with a capacity of 32 words; longer bootstrap or diagnostic programs could be installed on plug panels, which could then be inserted into the receptacle and used. This served as a primitive ROM.

References

IBM transistorized computers Strategic Air Command command and control systems
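The switch banks and plugboard sitting at fixed reserved addresses amount to memory-mapped read-only storage, and the idea translates directly into modern terms. The sketch below is a schematic reconstruction only: the four 48-position switch banks and the 32-word plugboard come from the description above, but every concrete address and the word layout are invented for illustration, not taken from IBM documentation.

```python
# Schematic reconstruction of "ROM via fixed addresses": reads of a
# reserved address range return operator-set switch-bank or plugboard
# contents instead of core storage, so a bootstrap can run before any
# program has been loaded. All addresses here are invented placeholders.

CORE_WORDS     = 65536      # ferrite-core store size, per the text
PLUGBOARD_BASE = 0xFFD0     # 32 plugboard words (address assumed)
SWITCH_BASE    = 0xFFF0     # 4 switch-bank words (address assumed)

class Memory:
    def __init__(self, switch_banks, plugboard):
        self.core = [0] * CORE_WORDS
        self.switch_banks = switch_banks        # 4 words set by hand
        self.plugboard = plugboard              # up to 32 wired words

    def read(self, addr):
        if SWITCH_BASE <= addr < SWITCH_BASE + 4:
            return self.switch_banks[addr - SWITCH_BASE]
        if PLUGBOARD_BASE <= addr < PLUGBOARD_BASE + 32:
            return self.plugboard[addr - PLUGBOARD_BASE]
        return self.core[addr]                  # ordinary core read

mem = Memory(switch_banks=[0o1234, 0, 0, 0], plugboard=[0o7700] * 32)
print(mem.read(SWITCH_BASE), mem.read(PLUGBOARD_BASE))
```

The design choice this illustrates is why the text calls the plugboard "a primitive ROM": the processor fetches from those addresses with the same read mechanism it uses for core, but the contents are fixed by wiring rather than by loaded data.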
45590275
https://en.wikipedia.org/wiki/Bob%20Muglia
Bob Muglia
Bob Muglia (born 1959) is an American business executive and research and development specialist. He was formerly the Chief Executive Officer of Snowflake Computing, a data warehousing startup. Muglia is known for managing divisions at Microsoft that supported the Microsoft Office Suite, Windows Server and MSN Network product families. He was one of four presidents who reported directly to Microsoft CEO Steve Ballmer. Muglia held several executive positions at Microsoft before resigning from the company in 2011. He worked briefly for Juniper Networks, then accepted his position as CEO of Snowflake Computing in June 2014.

Early life
Bob Muglia was born in 1959 in Connecticut. His father was an automotive parts salesman. Muglia started working at his first job when he was 15 years old. He moved to Michigan and earned an undergraduate degree from the University of Michigan in 1981. After graduating, he started working for ROLM Corporation.

Career

Microsoft Windows and business software
Bob Muglia started his Microsoft career in 1988. He was the first product manager for SQL Server. Muglia also served as the director of Windows NT Program Management and User Education. He was promoted to vice president of the Windows NT division in October 1995. Muglia later held the position of vice president of the Server Application group, until he was promoted to senior vice president of the Applications and Tools group in February 1998.

Bob Muglia was influential in a corporate restructuring at Microsoft in 1999, which assigned business divisions to customer types, rather than technologies. As part of the restructuring, Muglia became head of the business-productivity group, which oversaw Microsoft Office, Exchange and other business software. According to Computer Reseller News, Muglia pushed developers to visit customers, created customer advisory boards and led other efforts to incorporate customer input into product development at Microsoft.

Muglia testified in the United States v. Microsoft Corp. antitrust lawsuit, and in a case between Microsoft and Sun Microsystems regarding Microsoft's use of Java. According to New York Times reporters Steve Lohr and Joel Brinkley, the judge embarrassed Muglia by rebuking him for his persistent characterization of an email from Bill Gates. Muglia also negotiated aggressively with RealNetworks, regarding an antitrust dispute between the two companies.

In August 2000, Muglia was appointed vice president of a new .NET Services Group. The following year he was reassigned to focus on database technologies as senior vice president of the Enterprise Storage Services Group. He helped develop Microsoft's plan for autonomic computing, which was announced in March 2003. By early 2004, Muglia held the position of senior vice president of the Windows Server Division.

Servers and tools division
Another reorganization at Microsoft in 2005 resulted in Muglia taking the position of Senior Vice President of Servers and Tools before being promoted to president of the division in 2009. This made Muglia one of four presidents at Microsoft. During his tenure, the business group grew its revenues more than ten percent each year for six years. The division accounted for more than 20 percent of Microsoft's revenues by January 2009. In this position, Muglia led Microsoft's ten-year plan for data center and desktop automation products, its Dynamic Systems Initiative and its Dynamic IT strategy.
In October 2010, developers criticized Muglia for suggesting Microsoft would put less emphasis on Silverlight, a statement he later retracted. Muglia announced his resignation from Microsoft in January 2011; he was replaced by Satya Nadella, now Microsoft's CEO. He was the fourth executive reporting directly to Microsoft CEO Steve Ballmer to resign between early 2010 and 2011. According to the Financial Times, Ballmer credited Muglia with growing the servers and tools division, but implied the departure was related to disagreements between the two executives about the company's cloud computing strategy.

Juniper
In July 2011, a few months prior to Muglia's last day at Microsoft, Juniper Networks announced it would hire Muglia as the executive vice president of its software division. He reported to then Juniper CEO Kevin Johnson, who (along with other Juniper staff) is also a former Microsoft executive. Muglia was hired to consolidate Juniper's software groups under a new division called Software Solutions. He also helped develop Juniper's software-defined networking (SDN) strategy. In December 2013, Muglia left Juniper, a month after Shaygan Kheradpir was appointed as the company's new CEO. Several other Juniper executives also left around this time.

Snowflake Computing
Bob Muglia was Chief Executive Officer of Snowflake Computing, a cloud-based data-warehousing startup, until April 2019. He joined the company in June 2014, a couple of years after it was founded in 2012. Snowflake Computing came out of stealth mode that October.

Further reading

References

External links
Official bio

American technology chief executives 1959 births Living people Microsoft employees Businesspeople from Connecticut University of Michigan alumni
51225949
https://en.wikipedia.org/wiki/MainView
MainView
MainView, currently advertised as BMC MainView, is systems management software produced by BMC Software. It was created in 1990 by Boole & Babbage and became part of BMC Software's product line after BMC bought Boole & Babbage in a stock swap.

History
MainView was created in 1990 by Boole & Babbage as office automation software, designed specifically to work on IBM hardware. The product was designed so that companies would be able to automate their data management systems as well as control what is automated within each enterprise. In 1993, it was updated to include support for parallel processors. The system gained popularity, with users pleased by its real-time data, though some expressed dissatisfaction with its use of historical data to make decisions. In 1998, following BMC Software's purchase of Boole & Babbage, BMC announced that it would continue to operate MainView by integrating it directly with its products for IBM hardware rather than continuing to sell it as a separate software product. BMC Software continued to upgrade MainView to be compatible with new technologies. In 2016, MainView was upgraded to be compatible with Java environments.

Advertising
In 1993, Boole & Babbage signed a deal with Paramount Pictures to license Star Trek for use in their advertising, first applying the license to MainView. They produced a short advertising film titled "The Vision", in which Star Trek: The Next Generation's Commander William Riker (played by Jonathan Frakes, with whom B&B also signed a spokesman's deal) uses MainView on the bridge of the USS Enterprise. Boole & Babbage also used Frakes to promote MainView in person at the Computer Measurement Group conference, as well as to announce that MainView would become available for individual desktop computers later that year.

See also
OS/2

References

External links
"The Vision" advert for MainView including Commander Riker

Automation software IBM software Star Trek: The Next Generation
1597400
https://en.wikipedia.org/wiki/Bud%20Tribble
Bud Tribble
Guy L. "Bud" Tribble is Vice President of Software Technology at Apple Inc.

Work
Tribble was a member of the original Apple Macintosh design team. He served as manager of the software development team, and helped to design the classic Mac OS and its user interface. He was among the founders of NeXT, Inc., serving as NeXT's vice president of software development. Tribble is one of the industry's top experts in software design and object-oriented programming.

Tribble's career includes time at Sun Microsystems and Eazel. At Eazel, he was vice president of Engineering, leading development of next-generation user interface software and Internet services for Linux computers. Tribble was also chief technology officer for the Sun-Netscape Alliance, responsible for guiding Internet and e-commerce software R&D.

Tribble earned a BA degree in physics at the University of California, San Diego, and an MD and PhD in biophysics and physiology at the University of Washington in Seattle.

Tribble is one of three "policy czars" at Apple (along with Jane Horvath and Erik Neuenschwander) who spend a significant amount of time on privacy. Any collection of Apple customer data requires sign-off from a committee of the three privacy czars and a top executive, according to four former employees of Apple who worked on a variety of products that went through privacy vetting.

See also
Outline of Apple Inc. (personnel)
History of Apple Inc.

References

External links
Reality Distortion Field, Feb 1981, at Macintosh folklore.org
Macintosh's Other Designers, Aug 1984, Byte magazine

American computer scientists Living people University of Washington alumni University of California, San Diego alumni Apple Inc. employees Year of birth missing (living people) American chief technology officers
26407407
https://en.wikipedia.org/wiki/Red%20Star%20OS
Red Star OS
Red Star OS is a North Korean Linux distribution, with development first starting in 1998 at the Korea Computer Center (KCC). Prior to its release, computers in North Korea typically used Red Hat Linux and Windows XP. Version 3.0 was released in the summer of 2013, but version 1.0 continues to be more widely used. It is offered only in a Korean-language edition, localized with North Korean terminology and spelling.

Specifications
Red Star OS features a modified Mozilla Firefox browser called Naenara ("My country" in Korean), which is used for browsing the Naenara web portal on North Korea's national intranet, known as Kwangmyong. Naenara comes with two search engines. Other software includes a text editor, an office suite, an e-mail client, audio and video players, a file sharing program, and video games. Version 3, like its predecessors, runs Wine, a piece of software that allows Windows programs to be run under Linux.

The operating system utilizes customized versions of the KDE Software Compilation. Earlier versions had KDE 3-based desktops. Version 3.0 closely resembles Apple's macOS, whereas previous versions more closely resembled Windows XP; current North Korean leader Kim Jong-un was seen with an iMac on his desk in a 2013 photo, indicating a possible connection to the redesign.

Media attention
The Japan-based North Korea-affiliated newspaper Choson Sinbo interviewed two Red Star OS programmers in June 2006. English-language technology blogs, including Engadget and OSnews, as well as South Korean wire services such as Yonhap, went on to repost the content. In late 2013, Will Scott, who was visiting the Pyongyang University of Science and Technology, purchased a copy of version 3 from a KCC retailer in southern Pyongyang and uploaded screenshots to the internet. In 2015, two German researchers speaking at the Chaos Communication Congress described the internal operation of the OS. The North Korean government wants to track the underground market in USB flash drives used to exchange foreign films, music and writing, so the system watermarks all files on portable media attached to computers.

History

Version 1.0
The first version appeared in 2008. It is very reminiscent of the Windows XP operating system. It featured the "Naenara" web browser, based on Mozilla Firefox, and an office suite based on OpenOffice, called "Uri 2.0". Wine is also included. So far, no copies have been leaked online. Screenshots of the operating system were officially published by KCNA and discovered by South Korean news sites.

Version 2.0
The development of version 2.0 began in March 2008, and was completed on 3 June 2009. Like its predecessor, it is based on the appearance of Windows XP, and was priced at 2,000 North Korean won (approx. US$15). The "Naenara" internet browser is also included in this version. The browser was released on 6 August 2009, as part of the operating system, and was priced at 4,000 North Korean won (approx. US$28). The operating system uses a special keyboard layout that differs greatly from the South Korean standard layout.

Version 3.0
Version 3.0 was introduced on 15 April 2012, and appears heavily based on macOS operating systems of various versions. The new version supports both IPv4 and IPv6 addresses. The operating system comes pre-installed with a number of applications that monitor its users. If a user tries to disable security functions, an error message will appear on the computer, or the operating system will crash and reboot.
In addition, a watermarking tool integrated into the system marks all media content with the hard drive's serial number, allowing the North Korean authorities to trace the spread of files. The system also has hidden "anti-virus" software that is capable of removing censored files that are remotely stored by the North Korean secret service. There is a user group called "administrator" in the operating system. Users do not have root access by default, but are able to elevate their privileges to root by running a built-in utility called "rootsetting". However, provisions are made in kernel modules to deny even root users access to certain files, and extensive system integrity checks are done at boot time to ensure these files have not been modified.

Red Star OS 3 comes with a customized version of OpenOffice called Sogwang Office.

Version 4.0
Very little information is available on version 4.0. As of late 2017, a version of Red Star 4.0 was known to exist and to be undergoing field testing. According to The Pyongyang Times, an official version of Red Star OS 4.0 had been developed as of January 2019, with full network support as well as system and service management tools. In June and July 2020, South Korea's NKEconomy (NK경제) obtained Red Star 4.0 and published articles about it.

Vulnerabilities
In 2016, the computer security company Hacker House found a security vulnerability in the integrated web browser Naenara. This vulnerability makes it possible to execute commands on the computer if the user clicks on a crafted link.

References

External links
redstar-tools: a tool used for analyzing the system

Information technology in North Korea KDE Korean-language computing State-sponsored Linux distributions Linux distributions
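The watermarking behaviour described above, in which files on attached portable media are tagged with an identifier derived from the machine's hard-drive serial number, can be illustrated with a small sketch. Everything concrete here (the tag layout, the magic marker, the hashing step) is an assumption made for illustration; it is not the reverse-engineered Red Star format.

```python
# General idea of provenance watermarking: append a tag derived from a
# machine-specific serial number to a file, so the chain of machines
# that touched the file can be read back out. The tag layout and the
# hashing step are assumptions for illustration only.
import hashlib

MAGIC = b"WMRK"   # assumed 4-byte marker, not the real format

def machine_tag(serial: str) -> bytes:
    """Derive a fixed-length tag from a hard-drive serial number."""
    return hashlib.sha256(serial.encode()).digest()[:8]

def watermark(data: bytes, serial: str) -> bytes:
    """Append this machine's tag; repeated copies build up a trail."""
    return data + MAGIC + machine_tag(serial)

def read_tags(data: bytes):
    """Peel tags off the end, returning payload and trail (oldest first)."""
    tags = []
    while len(data) >= 12 and data[-12:-8] == MAGIC:
        tags.append(data[-8:].hex())
        data = data[:-12]
    return data, tags[::-1]

stamped = watermark(watermark(b"film contents", "HD-SN-0001"), "HD-SN-0002")
payload, trail = read_tags(stamped)
print(trail)   # two tags: first machine, then second
```

This also shows why such tagging is effective for tracing USB-borne media: each machine that writes the file extends the trail, so inspecting a single confiscated drive reveals every tagged machine it passed through.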
26834603
https://en.wikipedia.org/wiki/StreamMyGame
StreamMyGame
StreamMyGame is a software-only game streaming solution that enables Microsoft Windows-based games and applications to be played remotely on Windows and Linux devices. It was first released on 26 October 2007 as Windows-only software. On 14 January 2008, StreamMyGame launched a Linux version of its player. This new version made it possible to use a PlayStation 3 running Linux to remotely play Windows games. On 3 June 2008, Sean Maloney, Intel Corporation Executive Vice President of Sales and Marketing, demonstrated StreamMyGame at Computex 08 over a WiMAX connection, using an Intel Mobile Internet Device to view and play the game Crysis. On 17 July 2008, StreamMyGame announced that its Player was compatible with Intel's Atom range of processors and devices, including Asus's EeePC netbook. In addition to streaming games over a local network, StreamMyGame can be used over broadband networks; however, these connections require a minimum upload speed of 2 Mbit/s.

Architecture
Members of the StreamMyGame website can download and install a Server and Player application. The Server has to be installed on the same computer on which the games are installed. The Server automatically searches the user's hard drive for known games and uploads links to these games onto the StreamMyGame website. The Server is compatible with Windows XP, Windows Vista and Windows 7. The Player is installed on the computer or device on which the game is to be played and is compatible with Windows XP, Windows Vista and Windows 7, along with Ubuntu, Fedora, Red Hat, Xandros, Debian and Yellow Dog Linux. Both Server and Player software require continuous internet access. In addition to streaming games, StreamMyGame enables its members to record games to a video file that can be uploaded to sites such as YouTube.

Game streaming service
Members select a game they want to play on the StreamMyGame website. The website sends an encrypted message to the Server, which starts the game and captures its video and audio. The captured video and audio are sent to a Player via the Real Time Streaming Protocol and displayed. The Player captures keyboard and mouse commands and sends these back to the Server, where they are used to control the game.

Community
StreamMyGame enables its members to interact via a bespoke Web 2.0 website that includes messaging, chat, forums and groups. Members can use group permissions to enable other members to share the use of their games. StreamMyGame's forums are predominantly used by its members to publish performance details of StreamMyGame when used with new and existing games.

See also
List of cloud gaming solution providers

References

External links

Cloud gaming Cloud gaming companies
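The round trip described in the Game streaming service section (video and audio streamed to the Player over RTSP, input events returned to the Server) can be sketched as a simple message loop. The wire format, port number, and field names below are invented for illustration; StreamMyGame's actual protocol was proprietary and is not documented here.

```python
# Sketch of the Player-to-Server control channel: the Player forwards
# captured keyboard/mouse events to the Server, which injects them into
# the running game. JSON-over-TCP framing and the port are assumptions.
import json
import socket

def send_event(sock, device, action, data):
    frame = json.dumps({"dev": device, "act": action, "data": data})
    sock.sendall(frame.encode() + b"\n")      # newline-delimited frames

def player_loop(server_host, events, port=9000):
    """Forward a sequence of (device, action, data) tuples to the Server."""
    with socket.create_connection((server_host, port)) as sock:
        for device, action, data in events:
            send_event(sock, device, action, data)

def server_handle(line):
    """Server side: parse one frame and replay it into the game, e.g.
    via the platform's synthetic-input API (details omitted here)."""
    evt = json.loads(line)
    print(f"inject {evt['dev']} {evt['act']} {evt['data']}")

server_handle('{"dev": "kbd", "act": "down", "data": "W"}')
```

The asymmetry of the design is worth noting: input events are tiny and latency-sensitive, so they travel on a lightweight control channel, while the bandwidth-heavy video and audio go the other way over a streaming protocol, which is why a 2 Mbit/s upload floor applies to the Server side.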
690194
https://en.wikipedia.org/wiki/Historia%20Regum%20Britanniae
Historia Regum Britanniae
Historia regum Britanniae (The History of the Kings of Britain), originally called De gestis Britonum (On the Deeds of the Britons), is a pseudohistorical account of British history, written around 1136 by Geoffrey of Monmouth. It chronicles the lives of the kings of the Britons over the course of two thousand years, beginning with the Trojans founding the British nation and continuing until the Anglo-Saxons assumed control of much of Britain around the 7th century. It is one of the central pieces of the Matter of Britain.

Although taken as historical well into the 16th century, it is now considered to have no value as history. When events described, such as Julius Caesar's invasions of Britain, can be corroborated from contemporary histories, Geoffrey's account can be seen to be wildly inaccurate. It remains, however, a valuable piece of medieval literature, which contains the earliest known version of the story of King Lear and his three daughters, and helped popularise the legend of King Arthur.

Contents

Dedication
Geoffrey starts the book with a statement of his purpose in writing the history: "I have not been able to discover anything at all on the kings who lived here before the Incarnation of Christ, or indeed about Arthur and all the others who followed on after the Incarnation. Yet the deeds of these men were such that they deserve to be praised for all time." He claims that he was given a source for this period by Archdeacon Walter of Oxford, who presented him with a "certain very ancient book written in the British language" from which he has translated his history. He also cites Gildas and Bede as sources. Then follows a dedication to Robert, Earl of Gloucester, and Waleran, Count of Meulan, whom he enjoins to use their knowledge and wisdom to improve his tale.

Book One
The Historia itself begins with the Trojan Aeneas, who, according to the Aeneid of Virgil, settled in Italy after the Trojan War. His great-grandson Brutus is banished, and, after a period of wandering, is directed by the goddess Diana to settle on an island in the western ocean. Brutus lands at Totnes and names the island, then called Albion, "Britain" after himself. Brutus defeats the giants who are the only inhabitants of the island, and establishes his capital, Troia Nova ("New Troy"), on the banks of the Thames; later it is known as Trinovantum, and eventually renamed London.

Book Two
When Brutus dies, his three sons, Locrinus, Kamber and Albanactus, divide the country between themselves; the three kingdoms are named Loegria, Kambria (north and west of the Severn to the Humber) and Albany (Scotland). The story then progresses rapidly through the reigns of the descendants of Locrinus, including Bladud, who uses magic and even tries to fly, but dies in the process.

Bladud's son Leir reigns for sixty years. He has no sons, so upon reaching old age he decides to divide his kingdom among his three daughters, Goneril, Regan and Cordelia. To decide who should get the largest share, he asks his daughters how much they love him. Goneril and Regan give extravagant answers, but Cordelia answers simply and sincerely; angered, he gives Cordelia no land. Goneril and Regan are to share half the island with their husbands, the Dukes of Albany and Cornwall. Cordelia marries Aganippus, King of the Franks, and departs for Gaul. Soon Goneril and Regan and their husbands rebel and take the whole kingdom. After Leir has had all his attendants taken from him, he begins to regret his actions towards Cordelia and travels to Gaul.
Cordelia receives him compassionately and restores his royal robes and retinue. Aganippus raises a Gaulish army for Leir, who returns to Britain, defeats his sons-in-law and regains the kingdom. Leir rules for three years and then dies; Cordelia inherits the throne and rules for five years before Marganus and Cunedagius, her sisters' sons, rebel against her. They imprison Cordelia; grief-stricken, she kills herself. Marganus and Cunedagius divide the kingdom between themselves, but soon quarrel and go to war with each other. Cunedagius eventually kills Marganus in Wales and retains the whole kingdom, ruling for thirty-three years. He is succeeded by his son Rivallo.

A later descendant of Cunedagius, King Gorboduc, has two sons called Ferreux and Porrex. They quarrel and both are eventually killed, sparking a civil war. This leads to Britain being ruled by five kings, who keep attacking each other. Dunvallo Molmutius, the son of Cloten, the King of Cornwall, becomes pre-eminent. He eventually defeats the other kings and establishes his rule over the whole island. He is said to have "established the so-called Molmutine Laws which are still famous today among the English".

Book Three
Dunvallo's sons, Belinus and Brennius, fight a civil war before being reconciled by their mother, and proceed to sack Rome. Victorious, Brennius remains in Italy, while Belinus returns to rule Britain. Numerous brief accounts of successive kings follow. These include Lud, who renames Trinovantum "Kaerlud" after himself; this later becomes corrupted to London. Lud is succeeded by his brother, Cassibelanus, as Lud's sons Androgeus and Tenvantius are not yet of age. In recompense, Androgeus is made Duke of Kent and Trinovantum (London), and Tenvantius is made Duke of Cornwall.

Book Four
After his conquest of Gaul, Julius Caesar looks over the sea and resolves to order Britain to swear obedience and pay tribute to Rome. His commands are answered by a letter of refusal from Cassivellaunus. Caesar sails a fleet to Britain, but he is overwhelmed by Cassivellaunus's army and forced to retreat to Gaul. Two years later he makes another attempt, but is again pushed back. Then Cassivellaunus quarrels with one of his dukes, Androgeus, who sends a letter to Caesar asking him to help avenge the duke's honour. Caesar invades once more and besieges Cassivellaunus on a hill. After several days Cassivellaunus offers to make peace with Caesar, and Androgeus, filled with remorse, goes to Caesar to plead with him for mercy. Cassivellaunus pays tribute and makes peace with Caesar, who then returns to Gaul.

Cassivellaunus dies and is succeeded by his nephew Tenvantius, as Androgeus has gone to Rome. Tenvantius is succeeded in turn by his son Kymbelinus, and then Kymbelinus's son Guiderius. Guiderius refuses to pay tribute to the emperor Claudius, who then invades Britain. After Guiderius is killed in battle with the Romans, his brother Arvirargus continues the defence, but eventually agrees to submit to Rome, and is given the hand of Claudius's daughter Genvissa in marriage. Claudius returns to Rome, leaving the province under Arvirargus's governorship. The line of British kings continues under Roman rule, and includes Lucius, Britain's first Christian king, and several Roman figures, including the emperor Constantine I, the usurper Allectus and the military commander Asclepiodotus. When Octavius passes the crown to his son-in-law Maximianus, his nephew Conan Meriadoc is given rule of Brittany to compensate him for not succeeding.
After a long period of Roman rule, the Romans decide they no longer wish to defend the island and depart. The Britons are immediately beset by attacks from Picts, Scots and Danes, especially as their numbers have been depleted by Conan's colonization of Brittany and Maximianus's use of British troops for his campaigns. In desperation the Britons send letters to the general of the Roman forces, asking for help, but receive no reply (this passage borrows heavily from the corresponding section in Gildas' De Excidio et Conquestu Britanniae).

Books Five and Six
After the Romans leave, the Britons ask the King of Brittany (Armorica), Aldroenus, descended from Conan, to rule them. However, Aldroenus instead sends his brother Constantine to rule the Britons. After Constantine's death, Vortigern assists Constantine's eldest son Constans in succeeding, before enabling his murder and coming to power himself. Constantine's remaining sons Aurelius Ambrosius and Uther are too young to rule and are taken to safety in Armorica. Vortigern invites the Saxons under Hengist and Horsa to fight for him as mercenaries, but they rise against him. He loses control of much of his land and encounters Merlin.

Book Seven: The Prophecies of Merlin
At this point Geoffrey abruptly pauses his narrative by inserting a series of prophecies attributed to Merlin. Some of the prophecies act as an epitome of upcoming chapters of the Historia, while others are veiled allusions to historical people and events of the Norman world in the 11th–12th centuries. The remainder are obscure.

Book Eight
After Aurelius Ambrosius defeats and kills Vortigern, becoming king, Britain remains in a state of war under him and his brother Uther. They are both assisted by the wizard Merlin. At one point during the continuous string of battles, Ambrosius falls ill and Uther must lead the army for him. This allows an enemy assassin to pose as a physician and poison Ambrosius. When the king dies, a comet taking the form of a dragon's head (pendragon) appears in the night sky, which Merlin interprets as a sign that Ambrosius is dead and that Uther will be victorious and succeed him. So after defeating his latest enemies, Uther adds "Pendragon" to his name and is crowned king. But another enemy strikes, forcing Uther to make war again. This time he is temporarily defeated, gaining final victory only with the help of Duke Gorlois of Cornwall. While celebrating this victory with Gorlois, he falls in love with the duke's wife, Igerna. This leads to war between Uther Pendragon and Gorlois of Cornwall, during which Uther clandestinely lies with Igerna through the magic of Merlin. Arthur is conceived that night. Gorlois is then killed and Uther marries Igerna. But he must war against the Saxons again. Although Uther ultimately triumphs, he dies after drinking water from a spring the Saxons had poisoned.

Books Nine and Ten
Uther's son Arthur assumes the throne and defeats the Saxons so severely that they cease to be a threat until after his death. In the meantime, Arthur conquers most of northern Europe and ushers in a period of peace and prosperity that lasts until the Romans, led by Lucius Hiberius, demand that Britain once again pay tribute to Rome. Arthur defeats Lucius in Gaul, intending to become Emperor, but in his absence his nephew Mordred seduces and marries Guinevere and seizes the throne.
Books Eleven and Twelve
Arthur returns and kills Mordred at the Battle of Camlann, but, mortally wounded, he is carried off to the isle of Avalon, and hands the kingdom to his cousin Constantine, son of Cador and Duke of Cornwall. The Saxons return after Arthur's death, but do not end the line of British kings until the death of Cadwallader. Cadwallader is forced to flee Britain and requests the aid of King Alan of the Armoricans. However, an angel's voice tells him the Britons will no longer rule and he should go to Rome. Cadwallader does so and dies there, though he leaves his son and nephew to rule the remaining Britons. The remaining Britons are driven into Wales and the Saxon Athelstan becomes King of Loegria.

Sources
Geoffrey claimed to have translated the Historia into Latin from "a very ancient book in the British tongue", given to him by Walter, Archdeacon of Oxford. However, no modern scholars take this claim seriously. Much of the work appears to be derived from Gildas's 6th-century De Excidio et Conquestu Britanniae, Bede's 8th-century Historia ecclesiastica gentis Anglorum, the 9th-century Historia Brittonum ascribed to Nennius, the 10th-century Annales Cambriae, medieval Welsh genealogies (such as the Harleian Genealogies) and king-lists, the poems of Taliesin, the Welsh tale Culhwch and Olwen, and some of the medieval Welsh saints' lives, expanded and turned into a continuous narrative by Geoffrey's own imagination.

Influence
In an exchange of manuscript material for their own histories, Robert of Torigny gave Henry of Huntingdon a copy of the Historia Regum Britanniae, which both Robert and Henry accepted uncritically as authentic history and subsequently used in their own works, by which means some of Geoffrey's fictions became embedded in popular history.

Geoffrey's history forms the basis for much British lore and literature, as well as being a rich source of material for Welsh bards. It became tremendously popular during the High Middle Ages, revolutionising views of British history before and during the Anglo-Saxon period despite the criticism of such writers as William of Newburgh and Gerald of Wales. The prophecies of Merlin in particular were often drawn on in later periods, for instance by both sides in the issue of English influence over Scotland under Edward I and his successors.

The Historia was quickly translated into Norman verse by Wace (the Roman de Brut) in 1155. Wace's version was in turn translated into Middle English verse by Layamon (the Brut) in the early 13th century. In the second quarter of the 13th century, a version in Latin verse, the Gesta Regum Britanniae, was produced by William of Rennes. Material from Geoffrey was incorporated into a large variety of Anglo-Norman and Middle English prose compilations of historical material from the 13th century onward.

Geoffrey was translated into a number of different Welsh prose versions by the end of the 13th century, collectively known as Brut y Brenhinedd. One variant of the Brut y Brenhinedd, the so-called Brut Tysilio, was proposed in 1917 by the archaeologist William Flinders Petrie to be the ancient British book that Geoffrey translated, although the Brut itself claims to have been translated from Latin by Walter of Oxford, based on his own earlier translation from Welsh to Latin. Geoffrey's work is greatly important because it brought Welsh culture into British society and made it acceptable.
It also contains the first record we have of the great figure King Lear, and the beginning of the mythical King Arthur figure. For centuries, the Historia was accepted at face value, and much of its material was incorporated into Holinshed's 16th-century Chronicles. Modern historians have regarded the Historia as a work of fiction with some factual information contained within. John Morris in The Age of Arthur calls it a "deliberate spoof", although this is based on misidentifying Walter, Archdeacon of Oxford, as Walter Map, a satirical writer who lived a century later.

It continues to have an influence on popular culture. For example, Mary Stewart's Merlin Trilogy and the TV miniseries Merlin both contain large elements taken from the Historia.

Manuscript tradition and textual history
Two hundred and fifteen medieval manuscripts of the Historia survive, dozens of them copied before the end of the 12th century. Even among the earliest manuscripts a large number of textual variants, such as the so-called "First Variant", can be discerned. These are reflected in the three possible prefaces to the work and in the presence or absence of certain episodes and phrases. Certain variants may be due to "authorial" additions to different early copies, but most probably reflect early attempts to alter, add to or edit the text. The task of disentangling these variants and establishing Geoffrey's original text is long and complex, and the extent of the difficulties surrounding the text has been established only recently.

The variant title Historia regum Britanniae was introduced in the Middle Ages, and this became the most common form in the modern period. A critical edition of the work published in 2007, however, demonstrated that the most accurate manuscripts refer to the work as De gestis Britonum, and that this was the title Geoffrey himself used to refer to the work.

See also
List of legendary kings of Britain

References

Bibliography
John Jay Parry and Robert Caldwell. "Geoffrey of Monmouth" in Arthurian Literature in the Middle Ages, Roger S. Loomis (ed.). Oxford: Clarendon Press. 1959. 72–93.
Brynley F. Roberts. "Geoffrey of Monmouth and Welsh Historical Tradition," Nottingham Medieval Studies, 20 (1976), 29–40.
J. S. P. Tatlock. The Legendary History of Britain: Geoffrey of Monmouth's Historia Regum Britanniae and Its Early Vernacular Versions. Berkeley: University of California Press, 1950.
Michael A. Faletra, trans. and ed. The History of the Kings of Britain. Geoffrey of Monmouth. Peterborough, Ont.; Plymouth: Broadview Editions, 2008.
N. Wright, ed. The Historia Regum Britannie of Geoffrey of Monmouth. 1, A Single-Manuscript Edition from Bern, Burgerbibliothek, MS. 568. Cambridge: D. S. Brewer, 1984.
N. Wright, ed. The Historia Regum Britannie of Geoffrey of Monmouth. 2, The First Variant Version: A Critical Edition. Cambridge: D. S. Brewer, 1988.
J. C. Crick. The Historia Regum Britannie of Geoffrey of Monmouth. 3, A Summary Catalogue of the Manuscripts. Cambridge: D. S. Brewer, 1989.
J. C. Crick. The Historia Regum Britannie of Geoffrey of Monmouth. 4, Dissemination and Reception in the Later Middle Ages. Cambridge: D. S. Brewer, 1991.
J. Hammer, ed. Historia Regum Britanniae: A Variant Version Edited from Manuscripts. Cambridge, MA: 1951.
A. Griscom, ed., and J. R. Ellis, trans. The Historia Regum Britanniae of Geoffrey of Monmouth with Contributions to the Study of its Place in Early British History. London: Longmans, Green and Co., 1929.
Reeve, "The Transmission of the Historia Regum Britanniae," Journal of Medieval Latin 1 (1991), 73–117. Edmond Faral. La Légende arthurienne. Études et documents, 3 vols. Bibliothèque de l'École des Hautes Études. Paris, 1929. R. W. Leckie. The Passage of Dominion. Geoffrey of Monmouth and the Periodization of Insular History in the Twelfth Century. Toronto: Toronto University Press, 1981. External links Online text at Google Books Online Latin text at Google Books Historia regum Britanniae Second Variant version at Cambridge Digital Library 1130s books 12th century in Great Britain 12th-century Latin books Arthurian literature in Latin British traditional history Historical writing from Norman and Angevin England King lists Medieval Latin histories Medieval Welsh literature Pseudohistory Works by Geoffrey of Monmouth Depictions of Julius Caesar in literature
508666
https://en.wikipedia.org/wiki/Total%20cost%20of%20ownership
Total cost of ownership
Total cost of ownership (TCO) is a financial estimate intended to help buyers and owners determine the direct and indirect costs of a product or service. It is a management accounting concept that can be used in full cost accounting or even ecological economics, where it includes social costs.

In manufacturing, TCO is typically used when comparing domestic production with doing business overseas, and goes beyond the initial manufacturing cycle time and cost to make parts. TCO includes a variety of cost-of-doing-business items, for example, ship and re-ship, and opportunity costs, while it also considers incentives developed for an alternative approach. Incentives and other variables include tax credits, common language, expedited delivery, and customer-oriented supplier visits.

Use of concept
TCO, when incorporated in any financial benefit analysis, provides a cost basis for determining the total economic value of an investment. Examples include: return on investment, internal rate of return, economic value added, return on information technology, and rapid economic justification.

A TCO analysis includes total cost of acquisition and operating costs, as well as costs related to replacement or upgrades at the end of the life cycle. A TCO analysis is used to gauge the viability of any capital investment. An enterprise may use it as a product/process comparison tool. It is also used by credit markets and financing agencies. TCO directly relates to an enterprise's asset and/or related systems' total costs across all projects and processes, thus giving a picture of profitability over time.

Computer and software industries
TCO analysis was popularized by the Gartner Group in 1987. The roots of the concept date at least back to the first quarter of the twentieth century. Many different methodologies and software tools have been developed to analyze TCO in various operational contexts. TCO is applied to the analysis of information technology products, seeking to quantify the financial impact of deploying a product over its life cycle. These technologies include software, hardware, and training.

Technology deployment can include the following as part of TCO:
Computer hardware and programs
Network hardware and software
Server hardware and software
Workstation hardware and software
Installation and integration of hardware and software
Purchasing research
Warranties and licenses
License tracking/compliance
Migration expenses
Risks: susceptibility to vulnerabilities, availability of upgrades, patches and future licensing policies, etc.
Operation expenses
Infrastructure (floor space)
Electricity (for related equipment, cooling, backup power)
Testing costs
Downtime, outage and failure expenses
Diminished performance (i.e. users having to wait, diminished money-making ability)
Security (including breaches, loss of reputation, recovery and prevention)
Backup and recovery process
Technology/user training
Audit (internal and external)
Insurance
Information technology personnel
Corporate management time
Long-term expenses
Replacement
Future upgrade or scalability expenses
Decommissioning

When comparing the TCO of an existing solution against a proposed one, consideration should be given to costs required to maintain the existing solution that may not necessarily be required for the proposed solution. Examples include the cost of manual processing that is only required to support the lack of existing automation, and extended support personnel.
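As a worked illustration of the existing-versus-proposed comparison just described, the sketch below totals acquisition, recurring, and end-of-life costs over a planning horizon. All figures are invented for illustration; a real analysis would itemize the cost categories listed above.

```python
# Minimal sketch of an existing-vs-proposed TCO comparison over a
# fixed planning horizon. Every figure below is invented.

def tco(acquisition, annual_cost, end_of_life, years):
    """Total cost = up-front + recurring over the horizon + disposal."""
    return acquisition + annual_cost * years + end_of_life

existing = tco(
    acquisition=0,                 # already owned
    annual_cost=120_000 + 15_000,  # manual processing + extended support
    end_of_life=0,
    years=5,
)
proposed = tco(
    acquisition=200_000,           # purchase, installation, integration
    annual_cost=40_000,            # operations after automation
    end_of_life=25_000,            # decommissioning
    years=5,
)
print(f"existing: {existing:,}  proposed: {proposed:,}")
# existing: 675,000  proposed: 425,000 -> proposed wins on 5-year TCO
```

Note that the comparison flips if the horizon is short: at one year the proposed solution costs 265,000 against 135,000 for the status quo, which is why TCO analyses are sensitive to the chosen life cycle.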
Facilities and built environment
Total cost of ownership can be applied to the structure and systems of a single building or a campus of buildings. Pioneered by Doug Christensen and the facilities department at Brigham Young University starting in the 1980s, the concept gained more traction in educational facilities in the early 21st century. The application of TCO in facilities goes beyond the predictive cost analysis for a new building's “first cost” (planning, construction and commissioning) to factor in a variety of critical requirements and costs over the life of the building: replacement of energy, utility, and safety systems; continual maintenance of the building exterior and interior and replacement of materials; updates to design and functionality; and recapitalization costs. A key objective of planning, constructing, operating, and managing buildings via TCO principles is for building owners and facility professionals to predict needs and deliver data-driven results. TCO can be applied at any time during the life of a facility asset to manage cost inputs for the life of the structure or system into the future.

Developing standards for TCO in facilities
APPA, an ANSI Accredited Standards Developer, published APPA 1000-1 – Total Cost of Ownership for Facilities Asset Management (TCO) – Part 1: Key Principles as an American National Standard in December 2017. APPA 1000-1 provides financial officers, facility professionals, architects, planners, the construction workforce, and operations and maintenance (O&M) personnel the foundation of a standardized and holistic approach to implementing TCO key principles. Implementation of TCO key principles can improve decision making and maximize financial strategies over the life of an asset, starting at the planning and design stage and extending to the end of the asset's life. APPA 1000-2, slated for publication in 2019, will focus on the implementation and application of key TCO principles in facility management.

Transportation
The TCO concept is easily applicable to the transportation industry and to motor vehicle ownership; for example, the TCO defines the cost of owning an automobile from the time of purchase by the owner, through its operation and maintenance, to the time it leaves the possession of the owner. Comparative TCO studies between various models help consumers choose a car to fit their needs and budget. Some of the key elements incorporated in the cost of ownership for a vehicle include:
- Depreciation costs
- Fuel costs
- Insurance
- Financing
- Repairs
- Fees and taxes
- Maintenance costs
- Opportunity costs
- Downtime costs

See also
- Cost to company (CTC)
- Capital expenditure (CAPEX)
- Operating expense (OPEX)
- Activity-based costing
- Life cycle cost analysis
- Total benefits of ownership
- Total cost
- Total cost of acquisition
- Vendor lock-in
472463
https://en.wikipedia.org/wiki/Maurice%20Wilkes
Maurice Wilkes
Sir Maurice Vincent Wilkes (26 June 1913 – 29 November 2010) was a British computer scientist who designed and helped build the Electronic Delay Storage Automatic Calculator (EDSAC), one of the earliest stored-program computers, and who invented microprogramming, a method for using stored-program logic to operate the control unit of a central processing unit's circuits. At the time of his death, Wilkes was an Emeritus Professor at the University of Cambridge.

Early life, education, and military service
Wilkes was born in Dudley, Worcestershire, England, the only child of Ellen (Helen), née Malone (1885–1968), and Vincent Joseph Wilkes (1887–1971), an accounts clerk at the estate of the Earl of Dudley. He grew up in Stourbridge, West Midlands, and was educated at King Edward VI College, Stourbridge. During his school years he was introduced to amateur radio by his chemistry teacher. He studied the Mathematical Tripos at St John's College, Cambridge, from 1931 to 1934, and in 1936 completed his PhD in physics on the propagation of very long radio waves in the ionosphere. He was appointed to a junior faculty position at the University of Cambridge, through which he was involved in the establishment of a computing laboratory. He was called up for military service during World War II and worked on radar at the Telecommunications Research Establishment (TRE) and in operational research.

Research and career
In 1945, Wilkes was appointed as the second director of the University of Cambridge Mathematical Laboratory (later known as the Computer Laboratory). The Cambridge laboratory initially had many different computing devices, including a differential analyser. One day Leslie Comrie visited Wilkes and lent him a copy of John von Neumann's prepress description of the EDVAC, a successor to the ENIAC under construction by J. Presper Eckert and John Mauchly at the Moore School of Electrical Engineering. He had to read it overnight because he had to return it and no photocopying facilities existed. He decided immediately that the document described the logical design of future computing machines, and that he wanted to be involved in the design and construction of such machines. In August 1946 Wilkes travelled by ship to the United States to enroll in the Moore School Lectures, of which he was only able to attend the final two weeks because of various travel delays. During the five-day return voyage to England, Wilkes sketched out in some detail the logical structure of the machine which would become EDSAC.

EDSAC
Since his laboratory had its own funding, he was immediately able to start work on a small practical machine, EDSAC (Electronic Delay Storage Automatic Calculator), once back at Cambridge. He decided that his mandate was not to invent a better computer, but simply to make one available to the university. Therefore, his approach was relentlessly practical: he used only proven methods for constructing each part of the computer. The resulting computer was slower and smaller than other planned contemporary computers. However, his laboratory's computer was the second practical stored-program computer to be completed and operated successfully from May 1949, well over a year before the much larger and more complex EDVAC. In 1950, along with David Wheeler, Wilkes used EDSAC to solve a differential equation relating to gene frequencies in a paper by Ronald Fisher. This represents the first use of a computer for a problem in the field of biology.
Other computing developments
In 1951, he developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialised computer program held in high-speed ROM. This concept greatly simplified CPU development. Microprogramming was first described at the University of Manchester Computer Inaugural Conference in 1951, then expanded and published in 1955. The concept was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design: interchangeable, replaceable tube assemblies were used for each bit of the processor.

The next computer for his laboratory was the Titan, a joint venture with Ferranti Ltd begun in 1963. It eventually supported the UK's first time-sharing system, which was inspired by CTSS, and provided wider access to computing resources in the university, including time-shared graphics systems for mechanical CAD. A notable design feature of the Titan's operating system was that it provided controlled access based on the identity of the program, as well as, or instead of, the identity of the user. It introduced the password encryption system used later by Unix. Its programming system also had an early version control system.

Wilkes is also credited with the idea of symbolic labels, macros and subroutine libraries. These are fundamental developments that made programming much easier and paved the way for high-level programming languages. Later, Wilkes worked on an early timesharing system (now termed a multi-user operating system) and distributed computing. Toward the end of the 1960s, Wilkes also became interested in capability-based computing, and the laboratory assembled a unique computer, the Cambridge CAP.

In 1974, Wilkes encountered a Swiss data network (at Hasler AG) that used a ring topology to allocate time on the network. The laboratory initially used a prototype ring network, the Cambridge Ring, to share peripherals. Eventually, commercial partnerships were formed, and similar technology became widely available in the UK.

Awards, honours and leadership
Wilkes received a number of distinctions: he was a Knight Bachelor, a Distinguished Fellow of the British Computer Society, a Fellow of the Royal Academy of Engineering and a Fellow of the Royal Society. Wilkes was a founder member of the British Computer Society (BCS) and its first president (1957–1960). He received the Turing Award in 1967, with the following citation: "Professor Wilkes is best known as the builder and designer of the EDSAC, the first computer with an internally stored program. Built in 1949, the EDSAC used a mercury delay-line memory. He is also known as the author, with David Wheeler and Stanley Gill, of a volume on Preparation of Programs for Electronic Digital Computers in 1951, in which program libraries were effectively introduced." In 1968 he received the Harry H. Goode Memorial Award, with the following citation: "For his many original achievements in the computer field, both in engineering and software, and for his contributions to the growth of professional society activities and to international cooperation among computer professionals." In 1972, Wilkes was awarded an honorary Doctor of Science by Newcastle University. In 1980, he retired from his professorships and his post as head of the Computer Laboratory, and joined the central engineering staff of Digital Equipment Corporation in Maynard, Massachusetts, USA.
Wilkes was awarded the Faraday Medal by the Institution of Electrical Engineers in 1981. The Maurice Wilkes Award, awarded annually for an outstanding contribution to computer architecture made by a young computer scientist or engineer, is named after him. In 1986, he returned to England and became a member of Olivetti's Research Strategy Board. In 1987, he was awarded an Honorary Degree (Doctor of Science) by the University of Bath. In 1993, Cambridge University presented Wilkes with an honorary Doctor of Science degree. In 1994 he was inducted as a Fellow of the Association for Computing Machinery. He was awarded the Mountbatten Medal in 1997 and in 2000 presented the inaugural Pinkerton Lecture. He was knighted in the 2000 New Year Honours List. In 2001, he was inducted as a Fellow of the Computer History Museum "for his contributions to computer technology, including early machine design, microprogramming, and the Cambridge Ring network." In 2002, Wilkes moved back to the Computer Laboratory, University of Cambridge, as an emeritus professor.

Publications
- Oscillations of the Earth's Atmosphere (1949), Cambridge University Press
- Preparation of Programs for an Electronic Digital Computer (1951), with D. J. Wheeler and S. Gill, Addison Wesley Press
- Automatic Digital Computers (1956), Methuen Publishing
- A Short Introduction to Numerical Analysis (1966), Cambridge University Press
- Time-sharing Computer Systems (1968), Macdonald
- The Cambridge CAP Computer and its Operating System (1979), with R. M. Needham, Elsevier
- Memoirs of a Computer Pioneer (1985), MIT Press
- Computing Perspectives (1995), Morgan Kaufmann

Personal life
Wilkes married Nina Twyman in 1947; she died in 2008. He died in November 2010 and was survived by his son, Anthony, and two daughters, Margaret and Helen.

References

External links
- Oral history interview with David J. Wheeler, Charles Babbage Institute, University of Minnesota. Wheeler was a research student under Wilkes at the University Mathematical Laboratory at Cambridge from 1948 to 1951. Wheeler discusses the EDSAC project, the influence of EDSAC on the ILLIAC, the ORDVAC, and the IBM 701 computers, as well as visits to Cambridge by Douglas Hartree, Nelson Blackman (of ONR), Peter Naur, Aad van Wijngarden, Arthur van der Poel, Friedrich Bauer, and Louis Couffignal.
- An oral history interview with Maurice Wilkes, recorded in June 2010 for An Oral History of British Science at the British Library.
- An after-dinner talk by Maurice Wilkes at King's College, Cambridge, about Alan Turing, filmed on 1 October 1997 by Ian Pratt (video).
64971
https://en.wikipedia.org/wiki/Go%20%28game%29
Go (game)
Go, also known as Weiqi or Weichi, is an abstract strategy board game for two players in which the aim is to surround more territory than the opponent. The game was invented in China more than 2,500 years ago and is believed to be the oldest board game continuously played to the present day. A 2016 survey of the International Go Federation's 75 member nations found that there are over 46 million people worldwide who know how to play Go and over 20 million current players, the majority of whom live in East Asia.

The playing pieces are called stones. One player uses the white stones and the other, black. The players take turns placing the stones on the vacant intersections (points) of a board. Once placed on the board, stones may not be moved, but stones are removed from the board if the stone (or group of stones) is surrounded by opposing stones on all orthogonally adjacent points, in which case the stone or group is captured. The game proceeds until neither player wishes to make another move. When a game concludes, the winner is determined by counting each player's surrounded territory along with captured stones and komi (points added to the score of the player with the white stones as compensation for playing second). Games may also be terminated by resignation.

The standard Go board has a 19×19 grid of lines, containing 361 points. Beginners often play on smaller 9×9 and 13×13 boards, and archaeological evidence shows that the game was played in earlier centuries on a board with a 17×17 grid. However, boards with a 19×19 grid had become standard by the time the game reached Korea in the 5th century CE and Japan in the 7th century CE. Go was considered one of the four essential arts of the cultured aristocratic Chinese scholars in antiquity. The earliest written reference to the game is generally recognized as the historical annal Zuo Zhuan (c. 4th century BCE).

Despite its relatively simple rules, Go is extremely complex. Compared to chess, Go has both a larger board with more scope for play and longer games and, on average, many more alternatives to consider per move. The number of legal board positions in Go has been calculated to be approximately 2.1 × 10^170, which is vastly greater than the number of atoms in the observable universe, estimated to be of the order of 10^80.

Etymology
The word Go is a short form of the Japanese word igo, which ultimately derives from the Chinese word weiqi. In English, the name Go when used for the game is often capitalized to differentiate it from the common word go. In events sponsored by the Ing Chang-ki Foundation, it is spelled goe. The Korean word baduk derives from the Middle Korean word badok, the origin of which is controversial; the more plausible etymologies include a suffix added to a root meaning 'flat and wide board', or the joining of a word meaning 'field' with one meaning 'stone'. Less plausible etymologies include a derivation from a word referring to the playing pieces of the game, or a derivation from a Chinese term meaning 'to arrange pieces'.

Overview
Go is an adversarial game with the objective of surrounding a larger total area of the board with one's stones than the opponent. As the game progresses, the players position stones on the board to map out formations and potential territories. Contests between opposing formations are often extremely complex and may result in the expansion, reduction, or wholesale capture and loss of formation stones.
A basic principle of Go is that a group of stones must have at least one open point bordering the group, known as a liberty, to remain on the board. One or more liberties enclosed within a group is called an eye, and a group with two or more eyes cannot be captured, even if surrounded. Such groups are said to be unconditionally alive. The general strategy is to expand one's territory, attack the opponent's weak groups (groups that can be killed), and always stay mindful of the life status of one's own groups. The liberties of groups are countable. Situations where mutually opposing groups must capture each other or die are called capturing races, or semeai. In a capturing race, the group with more liberties will ultimately be able to capture the opponent's stones. Capturing races and the elements of life or death are the primary challenges of Go.

Players may pass rather than place a stone if they think there are no further opportunities for profitable play. The game ends when both players pass or when one player resigns. In general, to score the game, each player counts the number of unoccupied points surrounded by their stones and then subtracts the number of stones that were captured by the opponent. The player with the greater score (after adjusting for komi) wins the game.

In the opening stages of the game, players typically establish positions (or bases) in the corners and around the sides of the board. These bases help to quickly develop strong shapes which have many options for life (self-viability for a group of stones that prevents capture) and establish formations for potential territory. Players usually start in the corners because establishing territory is easier with the aid of two edges of the board. Established corner opening sequences are called joseki and are often studied independently.

Dame are points that lie in between the boundary walls of black and white, and as such are considered to be of no value to either side. Seki are mutually alive pairs of white and black groups where neither has two eyes. A ko is a repeated-position shape that may be contested by making forcing moves elsewhere. After the forcing move is played, the ko may be "taken back" and returned to its original position. Some ko fights may be important and decide the life of a large group, while others may be worth just one or two points. Some ko fights are referred to as picnic kos when only one side has a lot to lose. The Japanese call it a hanami (flower-viewing) ko.

Playing with others usually requires a knowledge of each player's strength, indicated by the player's rank (increasing from 30 kyu to 1 kyu, then 1 dan to 7 dan, then 1 dan pro to 9 dan pro). A difference in rank may be compensated by a handicap: Black is allowed to place two or more stones on the board to compensate for White's greater strength. There are different rule-sets (Korean, Japanese, Chinese, AGA, etc.), which are almost entirely equivalent, except for certain special-case positions.

Rules
Aside from the order of play (alternating moves, Black moves first or takes a handicap) and scoring rules, there are essentially only two rules in Go:
Rule 1 (the rule of liberty) states that every stone remaining on the board must have at least one open point (a liberty) directly orthogonally adjacent (up, down, left, or right), or must be part of a connected group that has at least one such open point (liberty) next to it. Stones or groups of stones which lose their last liberty are removed from the board.
Rule 2 (the ko rule) states that the stones on the board must never repeat a previous position of stones. Moves which would do so are forbidden, and thus only moves elsewhere on the board are permitted that turn.

Almost all other information about how the game is played is a heuristic, meaning it is learned information about how the game is played, rather than a rule. Other rules are specialized, as they come about through different rule-sets, but the above two rules cover almost all of any played game. Although there are some minor differences between rule-sets used in different countries, most notably in Chinese and Japanese scoring rules, these differences do not greatly affect the tactics and strategy of the game. Except where noted, the basic rules presented here are valid independent of the scoring rules used. The scoring rules are explained separately. Go terms for which there is no ready English equivalent are commonly called by their Japanese names.

Basic rules
The two players, Black and White, take turns placing stones of their colour on the intersections of the board, one stone at a time. The usual board size is a 19×19 grid, but for beginners, or for playing quick games, the smaller board sizes of 13×13 and 9×9 are also popular. The board is empty to begin with. Black plays first, unless given a handicap of two stones or more (in which case, White plays first). The players may choose any unoccupied intersection to play on, except for those forbidden by the ko and suicide rules (see below). Once played, a stone can never be moved and can be taken off the board only if it is captured. A player may also pass, declining to place a stone, though this is usually only done at the end of the game when both players believe nothing more can be accomplished with further play. When both players pass consecutively, the game ends and is then scored.

Liberties and capture
Vertically and horizontally adjacent stones of the same color form a chain (also called a string or group), forming a discrete unit that cannot then be divided. Only stones connected to one another by the lines on the board create a chain; stones that are diagonally adjacent are not connected. Chains may be expanded by placing additional stones on adjacent intersections, and can be connected together by placing a stone on an intersection that is adjacent to two or more chains of the same color.

A vacant point adjacent to a stone, along one of the grid lines of the board, is called a liberty for that stone. Stones in a chain share their liberties. A chain of stones must have at least one liberty to remain on the board. When a chain is surrounded by opposing stones so that it has no liberties, it is captured and removed from the board.

Ko rule
Players are not allowed to make a move that returns the game to the previous position. This rule, called the ko rule, prevents unending repetition. In the standard example, Black has just played a stone marked 1, capturing a white stone at a neighbouring intersection. If White were allowed to play back on that intersection, that move would capture the black stone marked 1 and recreate the situation before Black made the move marked 1. Allowing this could result in an unending cycle of captures by both players. The ko rule therefore prohibits White from playing at that intersection immediately.
Instead, White must play elsewhere, or pass; Black can then end the ko by filling at the vacated intersection, creating a five-stone black chain. If White wants to continue the ko (that specific repeating position), White tries to find a play elsewhere on the board that Black must answer; if Black answers, then White can retake the ko. A repetition of such exchanges is called a ko fight. While the various rule-sets agree on the ko rule prohibiting a return to the immediately previous position, they deal in different ways with the relatively uncommon situation in which a player might recreate a past position that is further removed (see the superko rule under tournament rules below).

Suicide
A player may not place a stone such that it or its group immediately has no liberties, unless doing so immediately deprives an enemy group of its final liberty. In the latter case, the enemy group is captured, leaving the new stone with at least one liberty. This rule is responsible for the all-important difference between one and two eyes: if a group with only one eye is fully surrounded on the outside, it can be killed with a stone placed in its single eye. The Ing and New Zealand rules do not have this rule, and there a player might destroy one of their own groups (commit suicide). This play would only be useful in a limited set of situations involving a small interior space, for example as a ko threat.

Komi
Because Black has the advantage of playing the first move, the idea of awarding White some compensation came into being during the 20th century. This is called komi, which gives White a 6.5-point compensation under Japanese rules (the number of points varies by rule set). Under handicap play, White receives only a 0.5-point komi, to break a possible tie (jigo).

Scoring rules
Two general types of scoring system are used, and players determine which to use before play. Both systems almost always give the same result. Territory scoring counts the number of empty points a player's stones surround, together with the number of stones the player captured. Area scoring counts the number of points a player's stones occupy and surround. It is associated with contemporary Chinese play and was probably established there during the Ming Dynasty in the 15th or 16th century.

After both players have passed consecutively, the stones that are still on the board but unable to avoid capture, called dead stones, are removed.
- Area scoring (including Chinese): A player's score is the number of stones that the player has on the board, plus the number of empty intersections surrounded by that player's stones.
- Territory scoring (including Japanese and Korean): In the course of the game, each player retains the stones they capture, termed prisoners. Any dead stones removed at the end of the game become prisoners. The score is the number of empty points enclosed by a player's stones, plus the number of prisoners captured by that player.
If there is disagreement about which stones are dead, then under area scoring rules, the players simply resume play to resolve the matter. The score is computed using the position after the next time the players pass consecutively. Under territory scoring, the rules are considerably more complex; however, in practice, players generally play on, and, once the status of each stone has been determined, return to the position at the time the first two consecutive passes occurred and remove the dead stones. For further information, see Rules of Go.
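The mechanics of chains, liberties, and capture described above translate directly into a short program. The following sketch is illustrative only: the board representation (a dictionary of occupied points) and all function names are invented here, and for brevity it omits the suicide and ko checks discussed above.

```python
# A minimal sketch of chains, liberties, and capture on a small Go board.
# The board maps occupied points (x, y) to "B" or "W"; empty points are absent.

def neighbors(p, size=9):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]

def chain_and_liberties(board, p, size=9):
    """Flood-fill the chain containing p; return (chain points, liberty points)."""
    color = board[p]
    chain, liberties, frontier = {p}, set(), [p]
    while frontier:
        q = frontier.pop()
        for n in neighbors(q, size):
            if n not in board:
                liberties.add(n)                 # empty point next to the chain
            elif board[n] == color and n not in chain:
                chain.add(n)
                frontier.append(n)
    return chain, liberties

def play(board, p, color, size=9):
    """Place a stone, then remove any adjacent enemy chain left with no liberties."""
    board = dict(board)                          # moves never mutate the old position
    board[p] = color
    for n in neighbors(p, size):
        if n in board and board[n] != color:
            chain, libs = chain_and_liberties(board, n, size)
            if not libs:                         # last liberty filled: captured
                for q in chain:
                    del board[q]
    return board

# A white stone in the corner has two liberties; Black fills the second one.
position = {(0, 0): "W", (1, 0): "B"}
position = play(position, (0, 1), "B")
print((0, 0) in position)                        # False: the white stone was captured
```

Area scoring can be built on the same flood fill: each empty region counts for a colour only if every stone on its border is of that colour.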
Given that the number of stones a player has on the board is directly related to the number of prisoners their opponent has taken, the resulting net score, that is, the difference between Black's and White's scores, is identical under both rulesets (unless the players have passed different numbers of times during the course of the game). Thus, the net result given by the two scoring systems rarely differs by more than a point.

Life and death
While not actually mentioned in the rules of Go (at least in simpler rule sets, such as those of New Zealand and the U.S.), the concept of a living group of stones is necessary for a practical understanding of the game.

Diagram: examples of eyes (marked). The black groups at the top of the board are alive, as they have at least two eyes. The black groups at the bottom are dead, as they have only one eye. The point marked a is a false eye.

When a group of stones is mostly surrounded and has no options to connect with friendly stones elsewhere, the status of the group is either alive, dead or unsettled. A group of stones is said to be alive if it cannot be captured, even if the opponent is allowed to move first. Conversely, a group of stones is said to be dead if it cannot avoid capture, even if the owner of the group is allowed the first move. Otherwise, the group is said to be unsettled: the defending player can make it alive or the opponent can kill it, depending on who gets to play first.

An eye is an empty point or group of points surrounded by a group of stones. If the eye is surrounded by Black stones, White cannot play there unless such a play would take Black's last liberty and capture the Black stones. (Such a move is forbidden according to the suicide rule in most rule sets, but even if not forbidden, such a move would be a useless suicide of a White stone.) If a Black group has two eyes, White can never capture it because White cannot remove both liberties simultaneously. If Black has only one eye, White can capture the Black group by playing in the single eye, removing Black's last liberty. Such a move is not suicide because the Black stones are removed first.

In the "examples of eyes" diagram, all the circled points are eyes. The two black groups in the upper corners are alive, as both have at least two eyes. The groups in the lower corners are dead, as both have only one eye. The group in the lower left may seem to have two eyes, but the surrounded empty point marked a is not actually an eye. White can play there and take a black stone. Such a point is often called a false eye.

Seki (mutual life)
There is an exception to the requirement that a group must have two eyes to be alive, a situation called seki (or mutual life). Where different colored groups are adjacent and share liberties, the situation may reach a position when neither player wants to move first, because doing so would allow the opponent to capture; in such situations therefore both players' stones remain on the board (in seki). Neither player receives any points for those groups, but at least those groups themselves remain living, as opposed to being captured. Seki can occur in many ways. The simplest are: each player has a group without eyes and they share two liberties, and each player has a group with one eye and they share one more liberty. In the "example of seki (mutual life)" diagram, the circled points are liberties shared by both a black and a white group. Neither player wants to play on a circled point, because doing so would allow the opponent to capture.
All the other groups in this example, both black and white, are alive with at least two eyes. Seki can result from an attempt by one player to invade and kill a nearly settled group of the other player.

Tactics
In Go, tactics deal with immediate fighting between stones, capturing and saving stones, life, death and other issues localized to a specific part of the board. Larger issues, not limited to only part of the board, are referred to as strategy, and are covered in their own section.

Capturing tactics
There are several tactical constructs aimed at capturing stones. These are among the first things a player learns after understanding the rules. Recognizing the possibility that stones can be captured using these techniques is an important step forward.

Diagram: a ladder. Black cannot escape unless the ladder connects to black stones further down the board that will intercept the ladder.

The most basic technique is the ladder. To capture stones in a ladder, a player uses a constant series of capture threats (atari) to force the opponent into a zigzag pattern, as shown in the diagram. Unless the pattern runs into friendly stones along the way, the stones in the ladder cannot avoid capture. Experienced players recognize the futility of continuing the pattern and play elsewhere. The presence of a ladder on the board does give a player the option to play a stone in the path of the ladder, thereby threatening to rescue their stones and forcing a response. Such a move is called a ladder breaker and may be a powerful strategic move. In the diagram, Black has the option of playing a ladder breaker.

Diagram: a net. The chain of three marked black stones cannot escape in any direction.

Another technique to capture stones is the so-called net, also known by its Japanese name, geta. This refers to a move that loosely surrounds some stones, preventing their escape in all directions. An example is given in the diagram. It is generally better to capture stones in a net than in a ladder, because a net does not depend on the condition that there are no opposing stones in the way, nor does it allow the opponent to play a strategic ladder breaker.

Diagram: a snapback. Although Black can capture the white stone by playing at the circled point, the resulting shape for Black has only one liberty (at 1), so White can then capture the three black stones by playing at 1 again (snapback).

A third technique to capture stones is the snapback. In a snapback, one player allows a single stone to be captured, then immediately plays on the point formerly occupied by that stone; by so doing, the player captures a larger group of their opponent's stones, in effect snapping back at those stones. As with the ladder, an experienced player does not play out such a sequence, recognizing the futility of capturing only to be captured back immediately.

Reading ahead
One of the most important skills required for strong tactical play is the ability to read ahead. Reading ahead includes considering available moves to play, the possible responses to each move, and the subsequent possibilities after each of those responses. Some of the strongest players of the game can read up to 40 moves ahead even in complicated positions. As explained in the scoring rules, some stone formations can never be captured and are said to be alive, while other stones may be in the position where they cannot avoid being captured and are said to be dead.
Much of the practice material available to players of the game comes in the form of life and death problems, also known as tsumego. In such problems, players are challenged to find the vital move sequence that kills a group of the opponent or saves a group of their own. Tsumego are considered an excellent way to train a player's ability at reading ahead, and are available for all skill levels, some posing a challenge even to top players.

Ko fighting
In situations when the ko rule applies, a ko fight may occur. If the player who is prohibited from capture believes the capture is important (for instance, because it prevents a large group of stones from being captured), the player may play a ko threat. This is a move elsewhere on the board that threatens to make a large profit if the opponent does not respond. If the opponent does respond to the ko threat, the situation on the board has changed, and the prohibition on capturing the ko no longer applies. Thus the player who made the ko threat may now recapture the ko. Their opponent is then in the same situation and can either play a ko threat as well, or concede the ko by simply playing elsewhere. If a player concedes the ko, either because they do not think it important or because there are no moves left that could function as a ko threat, they have lost the ko, and their opponent may connect the ko.

Instead of responding to a ko threat, a player may also choose to ignore the threat and connect the ko. They thereby win the ko, but at a cost. The choice of when to respond to a threat and when to ignore it is a subtle one, which requires a player to consider many factors, including how much is gained by connecting, how much is lost by not responding, how many possible ko threats both players have remaining, what the optimal order of playing them is, and what the size (points lost or gained) of each of the remaining threats is. Frequently, the winner of the ko fight does not connect the ko but instead captures one of the chains that constituted their opponent's side of the ko. In some cases, this leads to another ko fight at a neighboring location.

Strategy
Strategy deals with global influence, interaction between distant stones, keeping the whole board in mind during local fights, and other issues that involve the overall game. It is therefore possible to allow a tactical loss when it confers a strategic advantage. Novices often start by randomly placing stones on the board, as if it were a game of chance. An understanding of how stones connect for greater power develops, and then a few basic common opening sequences may be understood. Learning the ways of life and death helps in a fundamental way to develop one's strategic understanding of weak groups. A player who both plays aggressively and can handle adversity is said to display kiai, or fighting spirit, in the game.

Basic concepts
Basic strategic aspects include the following:
- Connection: Keeping one's own stones connected means that fewer groups need to make living shape, and one has fewer groups to defend.
- Cut: Keeping opposing stones disconnected means that the opponent needs to defend and make living shape for more groups.
- Stay alive: The simplest way to stay alive is to establish a foothold in the corner or along one of the sides. At a minimum, a group must have two eyes (separate open points) to be alive. An opponent cannot fill in either eye, as any such move is suicidal and prohibited in the rules.
- Mutual life (seki) is better than dying: A situation in which neither player can play on a particular point without then allowing the other player to play at another point to capture. The most common example is that of adjacent groups that share their last few liberties: if either player plays in the shared liberties, they can reduce their own group to a single liberty (putting themselves in atari), allowing their opponent to capture it on the next move.
- Death: A group that lacks living shape is eventually removed from the board as captured.
- Invasion: Setting up a new living group inside an area where the opponent has greater influence means one reduces the opponent's score in proportion to the area one occupies.
- Reduction: Placing a stone far enough into the opponent's area of influence to reduce the amount of territory they eventually get, but not so far in that it can be cut off from friendly stones outside.
- Sente: A play that forces one's opponent to respond (gote). A player who can regularly play sente has the initiative and can control the flow of the game.
- Sacrifice: Allowing a group to die in order to carry out a play, or plan, in a more important area.
The strategy involved can become very abstract and complex. High-level players spend years improving their understanding of strategy, and a novice may play many hundreds of games against opponents before being able to win regularly.

Opening strategy
In the opening of the game, players usually play and gain territory in the corners of the board first, as the presence of two edges makes it easier for them to surround territory and establish their stones. From a secure position in a corner, it is possible to lay claim to more territory by extending along the side of the board. The opening is the most theoretically difficult part of the game and takes a large proportion of professional players' thinking time. The first stone played at a corner of the board is generally placed on the third or fourth line from the edge. Players tend to play on or near the 4-4 star point during the opening. Playing nearer to the edge does not produce enough territory to be efficient, and playing further from the edge does not safely secure the territory. In the opening, players often play established sequences called joseki, which are locally balanced exchanges; however, the joseki chosen should also produce a satisfactory result on a global scale. It is generally advisable to keep a balance between territory and influence. Which of these gets precedence is often a matter of individual taste.

Middlegame and endgame
The middle phase of the game is the most combative, and usually lasts for more than 100 moves. During the middlegame, the players invade each other's territories, and attack formations that lack the necessary two eyes for viability. Such groups may be saved or sacrificed for something more significant on the board. It is possible that one player may succeed in capturing a large weak group of the opponent's, which often proves decisive and ends the game by a resignation. However, matters may be more complex yet, with major trade-offs, apparently dead groups reviving, and skillful play to attack in such a way as to construct territories rather than kill.

The end of the middlegame and transition to the endgame is marked by a few features. Near the end of a game, play becomes divided into localized fights that do not affect each other (with the exception of ko fights), whereas earlier the central area of the board related to all parts of it.
No large weak groups are still in serious danger. Moves can reasonably be attributed some definite value, such as 20 points or fewer, rather than simply being necessary to compete. Both players set limited objectives in their plans, in making or destroying territory, capturing or saving stones. These changing aspects of the game usually occur at much the same time, for strong players. In brief, the middlegame switches into the endgame when the concepts of strategy and influence need reassessment in terms of concrete final results on the board.

History
Origin in China
The earliest written reference to the game is generally recognized as the historical annal Zuo Zhuan (c. 4th century BCE), referring to a historical event of 548 BCE. It is also mentioned in Book XVII of the Analects of Confucius and in two books written by Mencius (c. 3rd century BCE). In all of these works, the game is referred to as yi. Today, in China, it is known as weiqi. Go was originally played on a 17×17 line grid, but a 19×19 grid became standard by the time of the Tang Dynasty (618–907).

Legends trace the origin of the game to the mythical Chinese emperor Yao (2337–2258 BCE), who was said to have had his counselor Shun design it for his unruly son, Danzhu, to favorably influence him. Other theories suggest that the game was derived from Chinese tribal warlords and generals, who used pieces of stone to map out attacking positions. In China, Go was considered one of the four cultivated arts of the Chinese scholar gentleman, along with calligraphy, painting and playing the musical instrument guqin. In ancient times the rules of Go were passed on verbally, rather than being written down.

Spread to Korea and Japan
Go was introduced to Korea sometime between the 5th and 7th centuries CE, and was popular among the higher classes. In Korea, the game is called baduk, and a variant of the game called Sunjang baduk was developed by the 16th century. Sunjang baduk became the main variant played in Korea until the end of the 19th century, when the current version was reintroduced from Japan.

The game reached Japan in the 7th century CE, where it is called go or igo. It became popular at the Japanese imperial court in the 8th century, and among the general public by the 13th century. The game was further formalized in the 15th century. In 1603, Tokugawa Ieyasu re-established Japan's unified national government. In the same year, he assigned the then-best player in Japan, a Buddhist monk named Nikkai (born Kanō Yosaburo in 1559), to the post of Godokoro (Minister of Go). Nikkai took the name Hon'inbō Sansa and founded the Hon'inbō Go school. Several competing schools were founded soon after. These officially recognized and subsidized Go schools greatly developed the level of play and introduced the dan/kyu style system of ranking players. Players from the four schools (Hon'inbō, Yasui, Inoue and Hayashi) competed in the annual castle games, played in the presence of the shōgun.

Internationalization
Despite its widespread popularity in East Asia, Go has been slow to spread to the rest of the world. Although there are some mentions of the game in western literature from the 16th century forward, Go did not start to become popular in the West until the end of the 19th century, when German scientist Oskar Korschelt wrote a treatise on the ancient Chinese game. By the early 20th century, Go had spread throughout the German and Austro-Hungarian empires. In 1905, Edward Lasker learned the game while in Berlin.
When he moved to New York, Lasker founded the New York Go Club together with (amongst others) Arthur Smith, who had learned of the game in Japan while touring the East and had published the book The Game of Go in 1908. Lasker's book Go and Go-moku (1934) helped spread the game throughout the U.S., and in 1935, the American Go Association was formed. Two years later, in 1937, the German Go Association was founded.

World War II put a stop to most Go activity outside East Asia, since it was a popular game in Japan, but after the war, Go continued to spread. For most of the 20th century, the Japan Go Association (Nihon Ki-in) played a leading role in spreading Go outside East Asia by publishing the English-language magazine Go Review in the 1960s, establishing Go centers in the U.S., Europe and South America, and often sending professional teachers on tour to Western nations. Internationally, the game had been commonly known since the start of the twentieth century by its shortened Japanese name, and terms for common Go concepts are derived from their Japanese pronunciation.

In 1996, NASA astronaut Daniel Barry and Japanese astronaut Koichi Wakata became the first people to play Go in space. They used a special Go set, named Go Space, designed by Wai-Cheung Willson Chow. Both astronauts were awarded honorary dan ranks by the Nihon Ki-in. The International Go Federation now has 75 member countries, with 67 member countries outside East Asia. Chinese cultural centres across the world are promoting Go and cooperating with local Go associations, for example in the seminars held by the Chinese cultural centre in Tel Aviv, Israel, together with the Israeli Go association.

Competitive play
Ranks and ratings
In Go, rank indicates a player's skill in the game. Traditionally, ranks are measured using kyu and dan grades, a system also adopted by many martial arts. More recently, mathematical rating systems similar to the Elo rating system have been introduced. Such rating systems often provide a mechanism for converting a rating to a kyu or dan grade. Kyu grades (abbreviated k) are considered student grades and decrease as playing level increases, meaning 1st kyu is the strongest available kyu grade. Dan grades (abbreviated d) are considered master grades, and increase from 1st dan to 7th dan. First dan equals a black belt in eastern martial arts using this system. The difference between adjacent amateur ranks is one handicap stone. For example, if a 5k plays a game with a 1k, the 5k would need a handicap of four stones to even the odds. Top-level amateur players sometimes defeat professionals in tournament play. Professional players have professional dan ranks (abbreviated p), running from 1 dan pro to 9 dan pro; these ranks are separate from amateur ranks.

Tournament and match rules
Tournament and match rules deal with factors that may influence the game but are not part of the actual rules of play. Such rules may differ between events. Rules that influence the game include: the setting of compensation points (komi), handicap, and time control parameters. Rules that do not generally influence the game are: the tournament system, pairing strategies, and placement criteria. Common tournament systems used in Go include the McMahon system, Swiss system, league systems and the knockout system. Tournaments may combine multiple systems; many professional Go tournaments use a combination of the league and knockout systems.
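The rank arithmetic described above is simple enough to express in a few lines. In the sketch below, the rank-string format and both function names are invented for illustration; only the rule that adjacent amateur ranks differ by one handicap stone comes from the text.

```python
# A small sketch of amateur rank arithmetic. Rank strings like "5k" and "2d"
# are an invented convention for this example.

def rank_to_number(rank: str) -> int:
    """Map kyu/dan ranks onto one numeric scale: 30k..1k -> -30..-1, 1d..7d -> 1..7."""
    value, grade = int(rank[:-1]), rank[-1]
    return -value if grade == "k" else value

def handicap_stones(weaker: str, stronger: str) -> int:
    """Number of handicap stones the weaker player takes to even the odds."""
    gap = rank_to_number(stronger) - rank_to_number(weaker)
    if gap <= 0:
        return 0
    # There is no rank zero: stepping from 1k to 1d is a single grade.
    if rank_to_number(weaker) < 0 < rank_to_number(stronger):
        gap -= 1
    return gap

print(handicap_stones("5k", "1k"))  # 4, as in the example above
print(handicap_stones("2k", "3d"))  # 4: 2k -> 1k -> 1d -> 2d -> 3d
```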
Tournament rules may also set the following:
- Compensation points, called komi, which compensate the second player for the first-move advantage of their opponent; tournaments commonly use a compensation in the range of 5–8 points, generally including a half-point to prevent draws.
- Handicap stones, placed on the board before alternate play, allowing players of different strengths to play competitively (see Go handicap for more information).
- Superko: although the basic ko rule described above covers more than 95% of all cycles occurring in games, there are some complex situations—triple ko, eternal life, etc.—that are not covered by it but would allow the game to cycle indefinitely. To prevent this, the ko rule is sometimes extended to forbid the repetition of any previous position. This extension is called superko.

Time control
A game of Go may be timed using a game clock. Formal time controls were introduced into the professional game during the 1920s and were controversial. Adjournments and sealed moves began to be regulated in the 1930s. Go tournaments use a number of different time control systems. All common systems envisage a single main period of time for each player for the game, but they vary on the protocols for continuation (in overtime) after a player has finished that time allowance. The most widely used time control system is the so-called byoyomi system. The top professional Go matches have timekeepers so that the players do not have to press their own clocks. Two widely used variants of the byoyomi system are:
- Standard byoyomi: After the main time is depleted, a player has a certain number of time periods (typically around thirty seconds). After each move, the number of full time periods that the player took (often zero) is subtracted. For example, if a player has three thirty-second time periods and takes thirty or more (but less than sixty) seconds to make a move, they lose one time period. With 60–89 seconds, they lose two time periods, and so on. If, however, they take less than thirty seconds, the timer simply resets without subtracting any periods. Using up the last period means that the player has lost on time.
- Canadian byoyomi: After using all of their main time, a player must make a certain number of moves within a certain period of time, such as twenty moves within five minutes. If the time period expires without the required number of stones having been played, then the player has lost on time.

Notation and recording games
Go games are recorded with a simple coordinate system. This is comparable to algebraic chess notation, except that Go stones do not move and thus require only one coordinate per turn. Coordinate systems include purely numerical (4-4 point), hybrid (K3), and purely alphabetical. The Smart Game Format uses alphabetical coordinates internally, but most editors represent the board with hybrid coordinates, as this reduces confusion. The Japanese word kifu is sometimes used to refer to a game record. In Unicode, Go stones can be represented with the black and white circles from the Geometric Shapes block, ● (U+25CF) and ○ (U+25CB). The Miscellaneous Symbols block includes "Go markers" that were likely meant for mathematical research of Go: ⚆ (U+2686), ⚇ (U+2687), ⚈ (U+2688) and ⚉ (U+2689).

Top players and professional Go
A Go professional is a professional player of the game of Go.
There are six areas with professional Go associations: China (Chinese Weiqi Association), Japan (Nihon Ki-in, Kansai Ki-in), South Korea (Korea Baduk Association), Taiwan (Taiwan Chi Yuan Culture Foundation), the United States (AGA Professional System) and Europe (European Professional System).

Although the game was developed in China, the establishment of the Four Go houses by Tokugawa Ieyasu at the start of the 17th century shifted the focus of the Go world to Japan. State sponsorship, allowing players to dedicate themselves full-time to study of the game, and fierce competition between individual houses resulted in a significant increase in the level of play. During this period, the best player of his generation was given the prestigious title Meijin (master) and the post of Godokoro (minister of Go). Of special note are the players who were dubbed Kisei (Go Sage). The only three players to receive this honor were Dōsaku, Jōwa and Shūsaku, all of the house Hon'inbō.

After the end of the Tokugawa shogunate and the Meiji Restoration period, the Go houses slowly disappeared, and in 1924, the Nihon Ki-in (Japanese Go Association) was formed. Top players from this period often played newspaper-sponsored matches of 2–10 games. Of special note are the Chinese-born player Go Seigen (Chinese: Wu Qingyuan), who scored 80% in these matches and beat most of his opponents down to inferior handicaps, and Minoru Kitani, who dominated matches in the early 1930s. These two players are also recognized for their groundbreaking work on new opening theory (Shinfuseki).

For much of the 20th century, Go continued to be dominated by players trained in Japan. Notable names included Eio Sakata, Rin Kaiho (born in Taiwan), Masao Kato, Koichi Kobayashi and Cho Chikun (born Cho Ch'i-hun, from South Korea). Top Chinese and Korean talents often moved to Japan, because the level of play there was high and funding was more lavish. One of the first Korean players to do so was Cho Namchul, who studied in the Kitani Dojo from 1937 to 1944. After his return to Korea, the Hanguk Kiwon (Korea Baduk Association) was formed and caused the level of play in South Korea to rise significantly in the second half of the 20th century. In China, the game declined during the Cultural Revolution (1966–1976) but quickly recovered in the last quarter of the 20th century, bringing Chinese players, such as Nie Weiping and Ma Xiaochun, on par with their Japanese and South Korean counterparts. The Chinese Weiqi Association (today part of the China Qiyuan) was established in 1962, and professional dan grades started being issued in 1982.

Western professional Go began in 2012 with the American Go Association's Professional System. In 2014, the European Go Federation followed suit and started its professional system.

With the advent of major international titles from 1989 onward, it became possible to compare the level of players from different countries more accurately. Cho Hunhyun of South Korea won the first edition of the quadrennial Ing Cup in 1989. His disciple Lee Chang-ho was the dominant player in international Go competitions for more than a decade, spanning much of the 1990s and early 2000s; he is also credited with groundbreaking work on the endgame. Cho, Lee and other South Korean players such as Seo Bong-soo, Yoo Changhyuk and Lee Sedol between them won the majority of international titles in this period.
Several Chinese players also rose to the top in international Go from the 2000s, most notably Ma Xiaochun, Chang Hao, Gu Li and Ke Jie. As of the 2010s, Japan lags behind in the international Go scene.

Historically, more men than women have played Go. Special tournaments for women exist, but until recently, men and women did not compete together at the highest levels; however, the creation of new, open tournaments and the rise of strong female players, most notably Rui Naiwei, have in recent years highlighted the strength and competitiveness of emerging female players.

The level in other countries has traditionally been much lower, except for some players who had preparatory professional training in East Asia. Knowledge of the game was scant elsewhere until the 20th century. A famous player of the 1920s was Edward Lasker. It was not until the 1950s that more than a few Western players took up the game as other than a passing interest. In 1978, Manfred Wimmer became the first Westerner to receive a professional player's certificate from an East Asian professional Go association. In 2000, American Michael Redmond became the first Western player to achieve a 9 dan rank.

Equipment
It is possible to play Go with a simple paper board and coins, plastic tokens, or white beans and coffee beans for the stones; or even by drawing the stones on the board and erasing them when captured. More popular midrange equipment includes cardstock, a laminated particle board, or wood boards with stones of plastic or glass. More expensive traditional materials are still used by many players. The most expensive Go sets have black stones carved from slate and white stones carved from translucent white shells, played on boards carved in a single piece from the trunk of a tree.

Traditional equipment
Boards
The Go board (generally referred to by its Japanese name, goban) typically measures about 45 cm in length (from one player's side to the other) and 42 cm in width; Chinese boards are slightly larger, as a traditional Chinese Go stone is slightly larger to match. The board is not square; there is a 15:14 ratio in length to width, because with a perfectly square board, from the player's viewing angle the perspective creates a foreshortening of the board. The added length compensates for this. There are two main types of boards: a table board similar in most respects to other gameboards such as those used for chess, and a floor board, which is its own free-standing table and at which the players sit. The traditional Japanese goban is a thick block with legs that sits on the floor. It is preferably made from the rare golden-tinged Kaya tree (Torreya nucifera), with the very best made from Kaya trees up to 700 years old. More recently, the related California Torreya (Torreya californica) has been prized for its light color and pale rings as well as its reduced expense and more readily available stock. The natural resources of Japan have been unable to keep up with the enormous demand for the slow-growing Kaya trees; both T. nucifera and T. californica take many hundreds of years to grow to the necessary size, and they are now extremely rare, raising the price of such equipment tremendously. As Kaya trees are a protected species in Japan, they cannot be harvested until they have died. Thus, an old-growth, floor-standing Kaya goban can easily cost in excess of $10,000, with the highest-quality examples costing more than $60,000.
Other, less expensive woods often used to make quality table boards in both Chinese and Japanese dimensions include Hiba (Thujopsis dolabrata), Katsura (Cercidiphyllum japonicum), Kauri (Agathis), and Shin Kaya (various varieties of spruce, commonly from Alaska, Siberia and China's Yunnan Province). So-called Shin Kaya is a potentially confusing merchant's term: shin means 'new', and thus shin kaya is best translated 'faux kaya', because the woods so described are biologically unrelated to Kaya.

Stones

A full set of Go stones (goishi) usually contains 181 black stones and 180 white ones; a 19×19 grid has 361 points, so there are enough stones to cover the board, and Black gets the extra odd stone because that player goes first. However, it may happen, especially in beginners' games, that many back-and-forth captures empty the bowls before the end of the game; in that case an exchange of prisoners allows the game to continue.

Traditional Japanese stones are double-convex, and made of clamshell (white) and slate (black). The classic slate is nachiguro stone mined in Wakayama Prefecture, and the clamshell comes from the Hamaguri clam; however, due to a scarcity in the Japanese supply of this clam, the stones are most often made of shells harvested from Mexico. Historically, the most prized stones were made of jade, often given to the reigning emperor as a gift.

In China, the game is traditionally played with single-convex stones made of a composite called Yunzi. The material comes from Yunnan Province and is made by sintering a proprietary and trade-secret mixture of mineral compounds derived from the local stone. This process dates to the Tang Dynasty and, after the knowledge was lost in the 1920s during the Chinese Civil War, was rediscovered in the 1960s by the now state-run Yunzi company. The material is praised for its colors, its pleasing sound as compared to glass or to synthetics such as melamine, and its lower cost as opposed to other materials such as slate/shell. The term yunzi can also refer to a single-convex stone made of any material; however, most English-language Go suppliers specify Yunzi as a material and single-convex as a shape to avoid confusion, as stones made of Yunzi are also available in double-convex while synthetic stones can be either shape.

Traditional stones are made so that black stones are slightly larger in diameter than white; this is to compensate for the optical illusion created by contrasting colors that would make equal-sized white stones appear larger on the board than black stones.

Bowls

The bowls for the stones are shaped like a flattened sphere with a level underside. The lid is loose-fitting and upturned before play to receive stones captured during the game. Chinese bowls are slightly larger, and a little more rounded, a style known generally as Go Seigen; Japanese Kitani bowls tend to have a shape closer to that of the bowl of a snifter glass, such as for brandy. The bowls are usually made of turned wood. Mulberry is the traditional material for Japanese bowls, but is very expensive; wood from the Chinese jujube date tree, which has a lighter color (it is often stained) and a slightly more visible grain pattern, is a common substitute for rosewood, and traditional for Go Seigen-style bowls. Other traditional materials used for making Chinese bowls include lacquered wood, ceramics, stone and woven straw or rattan.
The names of the bowl shapes, Go Seigen and Kitani, were introduced in the last quarter of the 20th century by the professional player Janice Kim as homage to two 20th-century professional Go players of the same names, of Chinese and Japanese nationality respectively, who are referred to as the "Fathers of modern Go".

Playing technique and etiquette

The traditional way to place a Go stone is to first take one from the bowl, gripping it between the index and middle fingers, with the middle finger on top, and then placing it directly on the desired intersection. One can also place a stone on the board and then slide it into position under appropriate circumstances (where it does not move any other stones). It is considered respectful towards White for Black to place the first stone of the game in the upper right-hand corner. (Because of symmetry, this has no effect on the game's outcome.)

It is considered poor manners to run one's fingers through one's bowl of unplayed stones, as the sound, however soothing to the player doing this, can be disturbing to one's opponent. Similarly, clacking a stone against another stone, the board, or the table or floor is also discouraged. However, it is permissible to emphasize select moves by striking the board more firmly than normal, thus producing a sharp clack. Additionally, hovering one's arm over the board (usually when deciding where to play) is also considered rude, as it obstructs the opponent's view of the board.

Manners and etiquette are extensively discussed in The Classic of Weiqi in Thirteen Chapters, a Song dynasty manual to the game. Apart from the points above, it also stresses the need to remain calm and honorable, to maintain posture, and to know the key specialised terms, such as the titles of common formations. Generally speaking, as much attention is paid to the etiquette of playing as to winning or actual game technique.

Computers and Go

Nature of the game

In combinatorial game theory terms, Go is a zero-sum, perfect-information, partisan, deterministic strategy game, putting it in the same class as chess, draughts (checkers), and Reversi (Othello); however, it differs from these in its gameplay. Although the rules are simple, the practical strategy is complex.

The game emphasizes the importance of balance on multiple levels and has internal tensions. To secure an area of the board, it is good to play moves close together; however, to cover the largest area, one needs to spread out, perhaps leaving weaknesses that can be exploited. Playing too low (close to the edge) secures insufficient territory and influence, yet playing too high (far from the edge) allows the opponent to invade.

It has been claimed that Go is the most complex game in the world due to its vast number of variations in individual games. Its large board and lack of restrictions allow great scope in strategy and expression of players' individuality. Decisions in one part of the board may be influenced by an apparently unrelated situation in a distant part of the board. Plays made early in the game can shape the nature of conflict a hundred moves later. The game complexity of Go is such that describing even elementary strategy fills many introductory books. In fact, numerical estimates show that the number of possible games of Go far exceeds the number of atoms in the observable universe. Research on Go endgames by John H. Conway led to the invention of the surreal numbers.
Go also contributed to the development of combinatorial game theory (with Go infinitesimals being a specific example of its use in Go).

Software players

Go long posed a daunting challenge to computer programmers, putting forward "difficult decision-making tasks, an intractable search space, and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function". Prior to 2015, the best Go programs only managed to reach amateur dan level. On smaller 9×9 and 13×13 boards, computer programs fared better, and were able to compare to professional players. Many in the field of artificial intelligence consider Go to require more elements that mimic human thought than chess. The reasons why computer programs had not played Go at the professional dan level prior to 2016 include:

The number of spaces on the board is much larger (over five times the number of spaces on a chess board: 361 vs. 64), and on most turns there are many more possible moves in Go than in chess. Throughout most of the game, the number of legal moves stays at around 150–250 per turn, and rarely falls below 100 (in chess, the average number of moves is 37). Because an exhaustive computer program for Go must calculate and compare every possible legal move in each ply (player turn), its ability to calculate the best plays is sharply reduced when there are a large number of possible moves. Most computer game algorithms, such as those for chess, compute several moves in advance. Given an average of 200 available moves through most of the game, for a computer to calculate its next move by exhaustively anticipating the next four moves of each possible play (two of its own and two of its opponent's), it would have to consider more than 320 billion (3.2×10^11) possible combinations; to exhaustively calculate the next eight moves would require computing 512 quintillion (5.12×10^20) possible combinations (see the sketch below). At its 2013 debut the most powerful supercomputer in the world, NUDT's Tianhe-2, could sustain 33.86 petaflops; at that rate, even given an exceedingly low estimate of 10 operations required to assess the value of one play of a stone, Tianhe-2 would need on the order of 40 hours to assess all possible combinations of the next eight moves in order to make a single play.

The placement of a single stone in the initial phase can affect the play of the game a hundred or more moves later. A computer would have to predict this influence, and it would be unworkable to attempt to exhaustively analyze the next hundred moves.

In capture-based games (such as chess), a position can often be evaluated relatively easily, such as by calculating who has a material advantage or more active pieces. In Go, there is often no easy way to evaluate a position, yet a 6-kyu human can evaluate a position at a glance, seeing which player has more territory, and even beginners can estimate the score within 10 points, given time to count it. The number of stones on the board (material advantage) is only a weak indicator of the strength of a position, and a territorial advantage (more empty points surrounded) for one player might be compensated by the opponent's strong positions and influence all over the board. Normally a 3-dan can easily judge most of these positions.

As an illustration, the greatest handicap normally given to a weaker opponent is 9 stones. It was not until August 2008 that a computer won a game against a professional-level player at this handicap. It was the Mogo program, which scored this first victory in an exhibition game played during the US Go Congress. By 2013, a win at the professional level of play was accomplished with a four-stone advantage.
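The arithmetic behind these estimates is easy to reproduce; a minimal sketch using the figures quoted above (200 legal moves per turn, 10 operations per position, 33.86 petaflops) as stated assumptions:

```python
# Back-of-the-envelope lookahead cost for Go, using the figures quoted above.
MOVES_PER_TURN = 200             # typical number of legal moves through midgame
OPS_PER_POSITION = 10            # "exceedingly low" work estimate per position
TIANHE2_OPS_PER_SEC = 33.86e15   # 33.86 petaflops

# 200 candidate plays, each followed by four (resp. eight) plies of lookahead.
four_ply_lookahead = MOVES_PER_TURN ** 5
eight_ply_lookahead = MOVES_PER_TURN ** 9
print(f"{four_ply_lookahead:.2e}")   # 3.20e+11 -> "more than 320 billion"
print(f"{eight_ply_lookahead:.2e}")  # 5.12e+20 -> "512 quintillion"

seconds = eight_ply_lookahead * OPS_PER_POSITION / TIANHE2_OPS_PER_SEC
print(f"{seconds / 3600:.1f} hours")  # ~42 hours, i.e. on the order of 40 hours
```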
In October 2015, Google DeepMind's program AlphaGo beat Fan Hui, the European Go champion and a 2 dan (out of 9 dan possible) professional, five times out of five with no handicap on a full-size 19×19 board. AlphaGo used a fundamentally different paradigm than earlier Go programs; it included very little direct instruction, and mostly used deep learning, in which AlphaGo played itself in hundreds of millions of games so that it could measure positions more intuitively. In March 2016, Google next challenged Lee Sedol, a 9 dan considered the top player in the world in the early 21st century, to a five-game match. Leading up to the game, Lee Sedol and other top professionals were confident that he would win; however, AlphaGo defeated Lee in four of the five games. After having already lost the series by the third game, Lee won the fourth game, describing his win as "invaluable". In May 2017, AlphaGo beat Ke Jie, who at the time had continuously held the world No. 1 ranking for two years, winning each game in a three-game match during the Future of Go Summit. In October 2017, DeepMind announced a significantly stronger version called AlphaGo Zero, which beat the previous version by 100 games to 0.

Software assistance

An abundance of software is available to support players of the game. This includes programs that can be used to view or edit game records and diagrams, programs that allow the user to search for patterns in the games of strong players, and programs that allow users to play against each other over the Internet.

Some web servers provide graphical aids like maps to aid learning during play. These graphical aids may suggest possible next moves, indicate areas of influence, highlight vital stones under attack and mark stones in atari or about to be captured. There are several file formats used to store game records, the most popular of which is SGF, short for Smart Game Format (a minimal sample appears at the end of this section). Programs used for editing game records allow the user to record not only the moves, but also variations, commentary and further information on the game.

Electronic databases can be used to study life-and-death situations, joseki, fuseki and games by a particular player. Programs are available that give players pattern-searching options, which allow players to research positions by searching for high-level games in which similar situations occur. Such software generally lists common follow-up moves that have been played by professionals and gives statistics on the win/loss ratio in opening situations.

Internet-based Go servers allow access to competition with players all over the world, for real-time and turn-based games. Such servers also allow easy access to professional teaching, with both teaching games and interactive game review being possible.
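As an illustration of the SGF format mentioned above, a minimal hand-written record might look like the following sketch (GM[1] marks the game as Go, FF[4] the format version, SZ the board size, KM the komi; the player names and moves are arbitrary):

```
(;GM[1]FF[4]SZ[19]PB[Black]PW[White]KM[6.5]
 ;B[pd];W[dp];B[pq];W[dd])
```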
In popular culture and science

Apart from technical literature and study material, Go and its strategies have been the subject of several works of fiction, such as The Master of Go by Nobel Prize-winning author Yasunari Kawabata and The Girl Who Played Go by Shan Sa. Other books have used Go as a theme or minor plot device. For example, the novel Shibumi by Trevanian centers on the game and uses Go metaphors, and The Way of Go: 8 Ancient Strategy Secrets for Success in Business and Life by Troy Anderson applies Go strategy to business. GO: An Asian Paradigm for Business Strategy by Miura Yasuyuki, a manager with Japan Airlines, uses Go to describe the thinking and behavior of businessmen. Go also features prominently in the Chung Kuo series of novels by David Wingrove, being the favourite game of the main villain.

The manga (Japanese comic book) and anime series Hikaru no Go, released in Japan in 1998, had a large impact in popularizing Go among young players, both in Japan and, as translations were released, abroad. Go Player is a similar animated series about young Go players that aired in China. In the anime PriPara, one of the main characters, Sion Tōdō, is a world-renowned Go player who decides to retire, as nobody has been able to beat her, and becomes an idol instead; despite this, Go still features heavily in her character's personality.

Similarly, Go has been used as a subject or plot device in film, such as π, A Beautiful Mind, Tron: Legacy, and The Go Master, a biopic of Go professional Go Seigen. The 2013 film Tôkyô ni kita bakari (Tokyo Newcomer) portrays a Chinese Go player moving to Tokyo. In King Hu's wuxia film The Valiant Ones, the characters are color-coded as Go stones (black or other dark shades for the Chinese, white for the Japanese invaders), Go boards and stones are used by the characters to keep track of soldiers prior to battle, and the battles themselves are structured like a game of Go.

Go has also been featured in a number of television series. Starz's science-fiction thriller Counterpart, for instance, is rich in references (the opening itself features developments on a Go board), including applications of the game's metaphors, a book about life and death being displayed, and Go matches, accurately played, that are relevant to the plot. Another example is Syfy's 12 Monkeys: in the first season's episode "Atari", one of the characters explains the Go concept that gives the episode its name, using it as an analogy for the situation he faces, and his son is briefly seen playing Go later on. The corporation and brand Atari was itself named after the Go term. Hedge fund manager Mark Spitznagel used Go as his main investing metaphor in his book The Dao of Capital.

In the endgame, it can often happen that the state of the board consists of several subpositions that do not interact with the others. The whole-board position can then be considered as a mathematical sum, or composition, of the individual subpositions. It is this property of Go endgames that led John Horton Conway to the discovery of surreal numbers.

Psychology

A 2004 review of the literature by Fernand Gobet, de Voogt and Jean Retschitzki shows that relatively little scientific research has been carried out on the psychology of Go, compared with other traditional board games such as chess. Computer Go research has shown that, given the large search tree, knowledge and pattern recognition are more important in Go than in other strategy games, such as chess. A study of the effects of age on Go-playing has shown that mental decline is milder with strong players than with weaker players. According to the review by Gobet and colleagues, the pattern of brain activity observed with techniques such as PET and fMRI does not show large differences between Go and chess. On the other hand, a study by Xiangchuan Chen et al. showed greater activation in the right hemisphere among Go players than among chess players. There is some evidence to suggest a correlation between playing board games and a reduced risk of Alzheimer's disease and dementia.
Game theory

In formal game theory terms, Go is a non-chance, combinatorial game with perfect information. Informally, that means there are no dice used (decisions or moves create discrete outcome vectors rather than probability distributions), the underlying math is combinatorial, and all moves (via single vertex analysis) are visible to both players, unlike in some card games where some information is hidden. Perfect information also implies sequence: players can theoretically know about all past moves.

Other game-theoretical taxonomy elements include the facts that Go is bounded (every game must end with a victor, or a tie, within a finite number of moves); the strategy is associative (every strategy is a function of board position); the format is non-cooperative (not a team sport); positions are extensible (they can be represented by board-position trees); the game is zero-sum (player choices do not increase the resources available; colloquially, the rewards in the game are fixed, and if one player wins, the other loses); and the utility function is restricted (in the sense of win/lose; ratings, monetary rewards, national and personal pride and other factors can extend utility functions, but generally not to the extent of removing the win/lose restriction). Affine transformations can theoretically add non-zero and complex utility aspects even to two-player games.

Comparisons

Go begins with an empty board and is focused on building from the ground up (nothing to something), with multiple, simultaneous battles leading to a point-based win. Chess, by contrast, is more tactical than strategic, as the predetermined goal is to trap one individual piece (the king). This comparison has also been applied to military and political history, with Scott Boorman's book The Protracted Game (1969) and, more recently, Robert Greene's book The 48 Laws of Power (1998) exploring the strategy of the Communist Party of China in the Chinese Civil War through the lens of Go.

A similar comparison has been drawn among Go, chess and backgammon, perhaps the three oldest games that enjoy worldwide popularity. Backgammon is a "man vs. fate" contest, with chance playing a strong role in determining the outcome. Chess, with rows of soldiers marching forward to capture each other, embodies the conflict of "man vs. man". Because the handicap system tells Go players where they stand relative to other players, an honestly ranked player can expect to lose about half of their games; therefore, Go can be seen as embodying the quest for self-improvement, "man vs. self".

See also

Benson's algorithm – a method for determining the chains that are unconditionally alive
Xiangqi
Mahjong
Pente
Oracle Text
Oracle Text is a search engine and text-analysis software package developed and sold by Oracle Corporation. It is proprietary software, sold as part of Oracle Database, a proprietary relational database management system. When integrated with a text storage system, it can analyze text and provide text-filtering and text-reduction for speed-reading and summary-viewing. It can return grammatical assessments of the text it processes, checking for grammatical errors and rating the quality and style of the writing.

History

Oracle Corporation introduced Oracle ConText first as a software option, then as an Oracle data cartridge (a server-based software module) for text retrieval when it released version 8 of the Oracle database in 1997. It used the default schema CTXSYS and the default tablespace DRSYS. With the appearance of version 8i of the Oracle database in 1999, a redesigned ConText became Oracle interMedia Text, part of the separately priced Oracle interMedia bundle of products. With the release of version 9i of the database in 2001, Oracle Corporation renamed the software Oracle Text and again marketed it as a standalone subsystem, integrated with and included in the cost of the database software. Oracle Corporation continues to support Oracle Text as of Oracle Database release 12 (2013).

Implementation

Oracle Text uses the ctx library.
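In practice, querying through Oracle Text revolves around a domain index of type CTXSYS.CONTEXT and the SQL CONTAINS operator. A minimal sketch follows; the table and column names are illustrative, not part of the product:

```sql
-- Hypothetical document table.
CREATE TABLE docs (
  id   NUMBER PRIMARY KEY,
  body VARCHAR2(4000)
);

-- A CONTEXT index makes the column full-text searchable.
CREATE INDEX docs_body_idx ON docs (body)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- CONTAINS selects matching rows; SCORE(1) exposes the relevance
-- of the query labeled 1, so results can be ranked.
SELECT id, SCORE(1) AS relevance
  FROM docs
 WHERE CONTAINS(body, 'text NEAR retrieval', 1) > 0
 ORDER BY relevance DESC;
```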
1954 USC Trojans football team
The 1954 USC Trojans football team represented the University of Southern California (USC) in the 1954 college football season. In their fourth year under head coach Jess Hill, the Trojans compiled an 8–4 record (6–1 against conference opponents), finished in second place in the Pacific Coast Conference, lost to Ohio State in the 1955 Rose Bowl, and outscored their opponents by a combined total of 258 to 159.

Jim Contratto led the team in passing with 32 of 79 passes completed for 702 yards, five touchdowns and five interceptions. Jon Arnett led the team in rushing with 96 carries for 601 yards and seven touchdowns. Lindon Crow was the leading receiver with seven catches for 274 yards and three touchdowns. Three Trojans received first-team honors from the Associated Press on the 1954 All-Pacific Coast Conference football team: back Lindon Crow; tackle Ed Fouch; guard Jim Salsbury.

Players

Jon Arnett, sophomore tailback (#26), earned second-team All-Coast honors from the UP
Al Barry, senior right guard
George Belotti, tackle
Bing Bordier, right end
Ron Brown
Ron Calabria, wingback
Leon Clarke, left end, second-team All-Coast honors from coaches
Frank Clayton, left halfback
Jim Contratto, quarterback
Lindon Crow, second-team All-Coast (co-captain)
Aramis Dandoy, tailback, won All-Coast honors from INS
Mario DaRe, tackle
Jim Decker, fullback
Gordon Duvall, fullback
Dirk Eldredge, center
Dick Enright, right guard
Orlando Ferrante, guard, first-team All-Coast honors from coaches, second-string All-Coast honors from INS
Ed Fouch, right tackle, first-team All-Coast (co-captain)
George Galli, guard
Marv Goux, linebacker, led the team in defensive statistics
Chuck Greenwood, right end
Chuck Griffith, right end
Frank Hall, back
Roger Hooks, quarterback
Bob Isaacson, guard
Chuck Leimbach, end
Don McFarland, end
Ernie Merk, back
John Miller, guard
Frank Pavich, guard and tackle
Vern Sampson, center
Irwin Spector, guard, Brooklyn, New York
Joe Tisdale, fullback
Sam Tsagalakis, placekicker

Coaching staff and other personnel

Head coach: Jess Hill
Assistant coaches: Mel Hein (line coach - centers and tackles), Don Clark (line coach - guards and defensive patterns), Bill Fisk (ends), George Ceithaml (backfield coach), Nick Pappas (defensive backs), Jess Mortensen (freshman coach)
Yell kings: Don Ward, Jerry Stolp, Phil Reilly, Shep Aparicio, Bob Mandel
Manager: Peter Couden
Dot Chinese Website
The domain name Dot Chinese Website (.中文网) is a new generic top-level domain (gTLD) in the Domain Name System (DNS) of the Internet, one of many such new top-level domains. It was created, along with the partner domain Dot Chinese Online (.在线), by TLD Registry through the Internet Corporation for Assigned Names and Numbers (ICANN)'s new gTLD program, and launched on April 28, 2014. TLD Registry was founded in June 2008 in Finland with the mission of creating essential new Chinese TLDs, aimed mainly at a Chinese-speaking audience.

Because it is displayed in simplified Chinese characters, a language-specific script, Dot Chinese Website is an internationalized domain name (IDN). The Chinese used for the domain, 中文网, directly translates to "Chinese-language website". For Chinese-speaking internet users, this is the standard phrase they use to find an international website localized into Chinese.

Dot Chinese Website is deemed a premium domain name, and TLD Registry has sold over $900,000 in premium domain names. It holds the record for the most successful premium domain name auction in the ICANN new gTLD program; the auction was held at CHINA ROUGE, a private members' club in the Galaxy Macau. Hundreds of popular brands and organizations, such as Nokia, MSN, Reuters, Jay Chou, the NBA, Eminem, CNBC, IMDb, Real Madrid, Samsung, the United Nations and the BBC, use the term 中文网 in their names. TLD Registry has 52 accredited partner registrars currently serving the domain name Dot Chinese Website.

Dot Chinese Website (.中文网) and Dot Chinese Online (.在线) together hold the highest volume of new-IDN gTLD registrations in the world, and a feature case study of the two Chinese IDNs was carried out in Section 8.2.5 (page 86) of an industry report on IDN deployment. In the Domain Name Association's (DNA) first official newsletter, the "State of the Domains", Dot Chinese Online (.在线) and Dot Chinese Website (.中文网) were featured in the "IDN Spotlight" section. Released in October 2014 at ICANN 51 in Los Angeles, the "State of the Domains" is a comprehensive quarterly report that provides analysis, trends and case studies of current Internet domains. TLD Registry was responsible for the design of the newsletter and completed a full-Mandarin edition of it.

By way of comparison, .CN, China's country-code TLD, is the second most registered TLD after .COM, with 21.4 million registrations as of Q4 2017.

Registration history

The application for the Dot Chinese Website domain was submitted to ICANN on April 12, 2013. Finnish Prime Minister Jyrki Katainen attended the ICANN signing ceremony for the formal Registry Agreement (RA) contract in Beijing on September 10, 2013. On December 3, 2013, in Helsinki, Finland, TLD Registry Ltd, a domain name registry dedicated to Chinese IDNs, announced the launch schedule for the two domains, and in accordance with ICANN regulations and rules Dot Chinese Online (.在线) and Dot Chinese Website (.中文网) were introduced to the global internet. On November 28, both domains passed ICANN pre-delegation testing, which checks technical integrity. For 60 days, from January 17 to March 17, Dot Chinese Online and Dot Chinese Website went through the trademark-owners-only Sunrise period. The second stage, the Landrush period, began on March 20.
Premium Dot Chinese Online and Dot Chinese Website names were auctioned off at the Galaxy Macau on Friday, March 21. The Landrush period continued through the weeklong ICANN conference in Singapore (March 23–27) and China's Qing Ming festival (April 4–6), and concluded on Thursday, April 24. A three-day quiet period immediately followed, and general availability of Dot Chinese Online and Dot Chinese Website domain names began on Monday, April 28, 2014.

The Landrush event drew in the Chinese government, which purchased close to 20,000 IDNs. The Chinese government also announced that the Ministry of Industry and Information Technology of the People's Republic of China (MIIT) had made it mandatory for all Chinese government-run websites to change to exclusively Chinese-specific domain names.

Security

It is difficult for Chinese netizens to identify phishing attacks in a completely English URL because of language barriers. Chinese IDNs therefore provide a measure of protection for Chinese consumers against phishing attacks: Chinese netizens (and the Chinese government) tend to favor fully Chinese web addresses because they are more easily able to spot phishing URLs in their own language.
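Under the hood, an IDN label such as 中文网 travels through the DNS in an ASCII-compatible "punycode" form prefixed with xn--. A minimal sketch in Python; the label is the one discussed above, and the exact encoded output is abbreviated in the comment rather than guessed:

```python
# Encode an internationalized label to its ASCII-compatible form (IDNA),
# then decode it back. The xn-- prefix marks an encoded label.
label = "中文网"
ascii_form = label.encode("idna")   # b'xn--...'
print(ascii_form)
print(ascii_form.decode("idna"))    # round-trips to the original label
```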
GNU Screen
GNU Screen is a terminal multiplexer, a software application that can be used to multiplex several virtual consoles, allowing a user to access multiple separate login sessions inside a single terminal window, or detach and reattach sessions from a terminal. It is useful for dealing with multiple programs from a command-line interface, and for separating programs from the session of the Unix shell that started them, particularly so that a remote process continues running even when the user is disconnected. Released under the terms of version 3 or later of the GNU General Public License, GNU Screen is free software.

Features

GNU Screen can be thought of as a text version of graphical window managers, or as a way of putting virtual terminals into any login session. It is a wrapper that allows multiple text programs to run at the same time, and provides features that allow the user to use the programs within a single interface productively. This enables three main capabilities: persistence, multiple windows, and session sharing.

Screen is often used when a network connection to the terminal is unreliable, as a dropped network connection typically terminates all programs the user was running (child processes of the login session), due to the session ending and sending a "hangup" signal (SIGHUP) to all the child processes. Running the applications under Screen means that the session does not terminate; only the now-defunct terminal gets detached, so the applications do not even know the terminal has gone, and the user can reattach the session later and continue working from where they left off (see the example below).

History

Screen was originally designed by Oliver Laumann and Carsten Bormann at the Technical University of Berlin and published in 1987. Design criteria included VT100 emulation (including ANSI X3.64 (ISO 6429) and ISO 2022) and reasonable performance for heavy daily use when character-based terminals were still common. Later, the at-the-time novel feature of disconnection/reattachment was added.

Around 1990, Laumann handed over maintenance of the code to Jürgen Weigert and Michael Schroeder at the University of Erlangen–Nuremberg, who later moved the project to the GNU Project and added features such as scrollback, split-screen, copy-and-paste, and screen sharing.

By 2014, development had slowed to a crawl. Wanting to change this, Amadeusz Sławiński volunteered to help, and Laumann granted him maintainership. Sławiński proceeded to put out the first new Screen release in half a decade; because there were some unofficial "Screen 4.1" releases floating around the Internet, he called this new release "Screen 4.2.0". In May 2015, at the openSUSE Conference, Jürgen Weigert invited Alexander Naumov to help develop and maintain GNU Screen, and two months later, with Naumov's help, GNU Screen 4.3.0 was released.
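The detach/reattach workflow described above looks like this in practice (a typical session; the session name "build" is illustrative):

```sh
screen -S build      # start a new session named "build" and run a shell in it
# ... start a long-running job, then press Ctrl-a d to detach ...
screen -ls           # list sessions; "build" shows up as Detached
screen -r build      # reattach to "build" and pick up where you left off
screen -d -r build   # or: detach it from another terminal, then reattach here
```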
See also

xpra, a tool to run X Window System applications on one machine, disconnect them from that machine's display, then reconnect them to another machine's display
Byobu, a frontend for GNU Screen or tmux
tmux, an ISC-licensed terminal multiplexer with a feature set similar to GNU Screen
Wordfilter
A wordfilter (sometimes referred to as just "filter" or "censor") is a script, typically used on Internet forums or chat rooms, that automatically scans users' posts or comments as they are submitted and changes or censors particular words or phrases. The most basic wordfilters search only for specific strings of letters, and remove or overwrite them regardless of their context. More advanced wordfilters make some exceptions for context (such as filtering "butt" but not "butter"), and the most advanced wordfilters may use regular expressions; the sketch below illustrates the difference.
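A minimal sketch of the two approaches (the word list and masks here are illustrative):

```python
import re

BANNED = {"butt": "****"}

def naive_filter(text: str) -> str:
    # Plain substring replacement: ignores context, so it also
    # mangles innocent words ("butter" -> "****er").
    for word, mask in BANNED.items():
        text = text.replace(word, mask)
    return text

def boundary_filter(text: str) -> str:
    # Regular expression with word boundaries: "butt" is filtered,
    # but "butter" is left alone.
    for word, mask in BANNED.items():
        text = re.sub(rf"\b{re.escape(word)}\b", mask, text,
                      flags=re.IGNORECASE)
    return text

print(naive_filter("pass the butter"))     # pass the ****er
print(boundary_filter("pass the butter"))  # pass the butter
```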
Functions

Wordfilters can serve any of a number of functions.

Removal of vulgar language

A swear filter, also known as a profanity filter or language filter, is a software subsystem which modifies text to remove words deemed offensive by the administrator or community of an online forum. Swear filters are common in custom-programmed chat rooms and online video games, primarily MMORPGs. This is not to be confused with content filtering, which is usually built into internet browsing programs by third-party developers to filter or block specific websites or types of websites. Swear filters are usually created or implemented by the developers of the Internet service.

Most commonly, wordfilters are used to censor language considered inappropriate by the operators of the forum or chat room. Expletives are typically partially replaced, completely replaced, or replaced by nonsense words. This relieves the administrators or moderators of the task of constantly patrolling the board to watch for such language. This may also help the message board avoid content-control software installed on users' computers or networks, since such software often blocks access to web pages that contain vulgar language. Filtered phrases may be permanently replaced as the post is saved (for example, phpBB 1.x), or the original phrase may be saved but displayed as the censored text. In some software, users can view the text behind the wordfilter by quoting the post.

Swear filters typically take advantage of string-replacement functions built into the programming language used to create the program, to swap out a list of inappropriate words and phrases with a variety of alternatives. Alternatives can include:

grawlix nonsense characters, such as !@#$%^&*
replacing a certain letter with a shift-number character or a similar-looking one
asterisks (* or #) of either a set length, or the length of the original word being filtered; alternatively, posters often replace certain letters with an asterisk themselves
minced oaths such as "heck" or "darn", or invented words such as "flum"
family-friendly words or phrases, or euphemisms, like "LOVE" or "I LOVE YOU", or completely different words which have nothing to do with the original word
deletion of the post; in this case, the entire post is blocked and there is usually no way to fix it
nothing at all; in this case, the offending word is simply deleted

Some swear filters do a simple search for a string. Others have measures that ignore whitespace, and still others go as far as ignoring all non-alphanumeric characters and then filtering the plain text. This means that if the word "you" was set to be filtered, "y o u" or "y.o!u" would also be filtered.

Cliché control

Clichés (particular words or phrases constantly reused in posts, also known as "memes") often develop on forums. Some users find that these clichés add to the fun, but other users find them tedious, especially when overused. Administrators may configure the wordfilter to replace an annoying cliché with a more embarrassing phrase, or remove it altogether.

Vandalism control

Internet forums are sometimes attacked by vandals who try to fill the forum with repeated nonsense messages, or by spammers who try to insert links to their commercial web sites. The site's wordfilter may be configured to remove the nonsense text used by the vandals, or to remove all links to particular websites from posts.

Lameness filter

Lameness filters are text-based wordfilters used by Slash-based websites (i.e. textboards and imageboards) to stop junk comments from being posted in response to stories. Some of the things they are designed to filter include:

too many capital letters
too much repetition
ASCII art
comments which are too short or too long
use of HTML tags that try to break web pages
comment titles consisting solely of "first post"
any occurrence of a word or term deemed (by the programmers) to be offensive or vulgar

Circumventing filters

Since wordfilters are automated and look only for particular sequences of characters, users aware of the filters will sometimes try to circumvent them by changing their lettering just enough to avoid the filters. A user trying to avoid a vulgarity filter might change one of the characters in the offending word into an asterisk, dash, or something similar. Some administrators respond by revising the wordfilters to catch common substitutions; others may make filter evasion a punishable offense of its own.

A simple example of evading a wordfilter would be entering symbols between letters or using leet. More advanced techniques of wordfilter evasion include the use of images, hidden tags, or Cyrillic characters (i.e. a homograph spoofing attack). Another method is to use a soft hyphen, which is only used to indicate where a word can be split when breaking text lines and is not displayed; by placing one halfway through a word, the word gets broken up and will in some cases not be recognised by the wordfilter. Some more advanced filters, such as those in the online game RuneScape, can detect bypassing. However, the downside of sensitive wordfilters is that legitimate phrases get filtered out as well.

Censorship aspects

Wordfilters are coded into the Internet forums or chat rooms, and operate only on material submitted to the forum or chat room in question. This distinguishes wordfilters from content-control software, which is typically installed on an end user's PC or computer network, and which can filter all Internet content sent to or from the PC or network in question. Since wordfilters alter users' words without their consent, some users consider them to be censorship, while others consider them an acceptable part of a forum operator's right to control the contents of the forum.

False positives

A common quirk with wordfilters, often considered either comical or aggravating by users, is that they often affect words that are not intended to be filtered. This is a typical problem when short words are filtered. For example, one may see, "Do you need istance for playing clical music?" Multiple words may be filtered if whitespace is ignored, resulting in "as suspected" becoming " uspected". Prohibiting a phrase such as "hard on" will result in filtering innocuous statements such as "That was a hard one!" and "Sorry I was hard on you," into "That was a e!" and "Sorry I was you." Some words that have been filtered accidentally can become replacements for profane words.
One example of this is found on the Myst forum Mystcommunity, where the word "manuscript" was accidentally censored for containing the word "anus", resulting in "m****cript". The word was adopted as a replacement swear and carried over when the forum moved, and many substitutes, such as " 'scripting ", are used (though mostly by the older community members). Place names may also be filtered out unintentionally because they contain portions of swear words; in the early years of the internet, the British place name Penistone was often caught by spam and swear filters.

Implementation

Many games, such as World of Warcraft and, more recently, Habbo Hotel and RuneScape, allow users to turn the filters off. Other games, especially free massively multiplayer online games such as Knight Online, do not offer such an option. Games such as Medal of Honor and Call of Duty (except Call of Duty: World at War, Call of Duty: Black Ops, Call of Duty: Black Ops 2, and Call of Duty: Black Ops 3) do not give users the option to turn off scripted foul language, while Gears of War does.

In addition to games, profanity filters can be used to moderate user-generated content in forums, blogs, social media apps, children's websites, and product reviews. There are profanity-filter APIs, such as WebPurify, that help in replacing swear words with other characters (e.g. "@#$!"); these APIs work by profanity search-and-replace.

See also

Content-control software
Internet censorship
Scunthorpe problem
iPhone 5S
The iPhone 5S (stylized and marketed as iPhone 5s) is a smartphone that was designed and marketed by Apple Inc. It is the seventh generation of the iPhone, succeeding the iPhone 5, and was unveiled in September 2013 alongside the iPhone 5C.

The iPhone 5S maintains almost the same external design as its predecessor, the iPhone 5, although the 5S received a new white/gold color scheme in addition to white/silver and space gray/black. The 5S has vastly upgraded internal hardware, however. It introduced the A7 64-bit dual-core system-on-chip, the first 64-bit processor to be used on a smartphone, accompanied by the M7 "motion co-processor". A redesigned home button with Touch ID, a fingerprint recognition system which can be used to unlock the phone and authenticate App Store and iTunes Store purchases, was also introduced. The camera was also updated with a larger aperture and a dual-LED flash optimized for different color temperatures. Earphones known as EarPods were included with the 5S, and Apple released accessories including a case and a dock. Like the iPhone 5 and iPhone 5C, it had a 4-inch display.

The iPhone 5S originally shipped with iOS 7, which introduced a revamped visual appearance among other new features. Designed by Jony Ive, iOS 7 departed from the skeuomorphic elements used in previous versions of iOS in favor of a flat, colorful design. Among the new software features introduced with the iPhone 5S were AirDrop, an ad-hoc Wi-Fi sharing platform; Control Center, a control panel containing a number of commonly used functions; and iTunes Radio, an internet radio service. The 5S was the first iPhone to be supported through six major versions of iOS, from iOS 7 to iOS 12 (a span later matched by the iPhone 6S and the first-generation iPhone SE, which were supported from iOS 9 to iOS 15), and the second iOS device to support six major updates, the first being the iPad 2, which supported iOS versions 4 to 9.

Reception towards the device was positive, with some outlets considering it to be the best smartphone available on the market due to its upgraded hardware, Touch ID, and other changes introduced by iOS 7. Some criticized the iPhone 5S for keeping the design and small display of the iPhone 5, and others expressed security concerns about the Touch ID system. Nine million units of the iPhone 5S and iPhone 5C were sold on the weekend of their release, breaking Apple's sales record for iPhones, and the iPhone 5S was the best-selling phone on all major U.S. carriers in September 2013.

The iPhone 5S was succeeded as Apple's flagship smartphone by the larger iPhone 6 in September 2014. On March 21, 2016, the iPhone 5S was discontinued following the release of the iPhone SE, which incorporated internal hardware similar to the iPhone 6S while retaining the smaller form factor and design of the 5S.

History

Before its official unveiling, media speculation primarily centered on reports that the next iPhone would include a fingerprint scanner, citing Apple's 2012 acquisition of AuthenTec, a developer of mobile security products; references to a fingerprint sensor on the home button in the beta release of iOS 7; and leaked packaging for an iPhone 5S showing that the traditional home button now had a metallic "ring" around it. Similar ring-based imagery was seen on the official invitation to Apple's iPhone press event in September 2013, where the new device was unveiled. Shortly before the official unveiling, The Wall Street Journal also reported the rumor.
Apple announced the iPhone 5C and the iPhone 5S during a media event at its Cupertino headquarters on September 10, 2013. While the iPhone 5C became available for preorder on September 13, 2013, the iPhone 5S was not available for preorder. Both devices were released on September 20, 2013. While most of the promotion focused on Touch ID, the 64-bit Apple A7 processor was also a highlight of the event: Apple marketing chief Phil Schiller showed demos of Infinity Blade III to demonstrate the A7's processing power, and untouched photographs to demonstrate the device's camera. The release of iOS 7 on September 18, 2013 was also announced during the keynote.

The iPhone 5S was released on September 20, 2013, in the United States, United Kingdom, Canada, China, France, Germany, Australia, Japan, Hong Kong, and Singapore. It was released in 25 additional countries on October 25, 2013, and in 12 further countries on November 1, 2013. Indonesia was the last country to receive the iPhone 5S, on January 26, 2014.

The iPhone 5S was succeeded as Apple's flagship smartphone by the iPhone 6 and iPhone 6 Plus on September 19, 2014, but the older model remained available for purchase at a reduced price, while the 64 GB version was discontinued. The gold edition of the iPhone 5S was discontinued on September 9, 2015, when Apple revealed the iPhone 6S and iPhone 6S Plus. The iPhone 5S was discontinued on March 21, 2016, and succeeded by the first-generation iPhone SE, which continues the same form factor but features vastly upgraded internals similar to those of the flagship iPhone 6S.

This was a break with Apple's product-positioning trend (in North America and Western Europe), starting with the iPhone 4S released in October 2011, which gave each newly released model one year as the flagship phone, then moved it to midrange for its second year of production, with the third and final year as the entry-level offering before discontinuation. While the iPhone 5S had been expected to continue on sale until September 2016, replacing it and its A7 processor early meant that Apple "just reduced its long-term chip support window by a year" for iOS. In addition, a new iPhone launch was meant to stimulate demand, as sales of the iPhone 6S and 6S Plus had not met expectations since their September 2015 release and the iPhone family was expected to suffer its first-ever negative growth quarter in 2016.

Specifications

Design

The iPhone 5S maintains a similar design to the iPhone 5, with an LCD multi-touch Retina display and a screen resolution of 640×1136 at 326 ppi. Its home button was updated with a new flat design using a laser-cut sapphire cover surrounded by a metallic ring; the button is no longer concave, nor does it contain the familiar squircle icon seen on previous models. The phone itself is 7.6 mm thick and weighs 112 grams, and uses an aluminum composite frame. The device is available in three color finishes: "space gray" (replacing the black with slate trim of the iPhone 5), white with silver trim, and white with gold trim. The iPhone 5S was the first iPhone to be available in a gold color; this decision was influenced by the fact that gold is seen as a popular sign of a luxury product among Chinese customers.

Hardware

The iPhone 5S is powered by the Apple A7 system-on-chip, the first 64-bit processor ever used on a smartphone. The device's operating system and pre-loaded software were optimized to run in 64-bit mode, promising increased performance, although third-party app developers would need to optimize their apps to take advantage of these enhanced capabilities.
The A7 processor was designed by Apple and manufactured by Samsung. It is accompanied by the M7 "motion co-processor", a dedicated processor for handling motion data from the iPhone's accelerometer and gyroscopes without requiring the attention of the main processor, and which integrates with iOS 7's new CoreMotion APIs. The same A7 SoC and M7 motion co-processor are also found in the iPad Air and iPad Mini 2, both of which were released in the same quarter as the iPhone 5S. The phone includes a 1560 mAh battery, which provides 10 hours of talk time and 250 hours of standby time.

The home button on the iPhone 5S incorporates a fingerprint recognition system known as Touch ID, based on technology from AuthenTec, a company which Apple had acquired in 2012. The sensor is a capacitive CMOS-based device which can detect the "sub-epidermal layers" of fingers at 500 pixels per inch, and uses a 360-degree design that can read the print at any angle. The sensor itself is activated by a touch-sensitive metallic ring surrounding the button. Touch ID can be used for various authentication activities within the operating system, such as unlocking the device or authenticating App Store and iTunes purchases instead of an Apple ID password. The sensor can be trained to recognize the fingerprints of multiple fingers and multiple users. Fingerprint data is stored in an encrypted format within a "secure enclave" of the A7 chip itself, and is not accessible to any other apps or servers (including iCloud).

Camera

Camera hardware

While the camera is still 8 megapixels in resolution, with an image capture size of 3264×2448 (4:3), the lens has a larger aperture (f/2.2, compared to f/2.4 on the predecessor) and larger pixels in its image sensor than previous iPhone models. The flash has dual "True Tone" LEDs, consisting of an amber LED and a white LED, which are variably used based on the color temperature of the photo to improve color balancing.

Camera software

The camera software includes automatic digital image stabilization, dynamic tone mapping, a 10 fps burst mode and slow-motion video at 120 fps. Photos captured during 1080p video recording have a resolution of 720p.

Slow-motion video

The iPhone 5S's camera was paired with a dual-LED flash, allowing for higher-quality nighttime photos. iOS 7 introduced a new camera app, allowing the iPhone 5S to capture fast continuous shots and record slow-motion video at 720p and 120 frames per second with an audio track, making it the first iPhone able to record at any frame rate beyond 30 frames per second. An analysis by GSM Arena suggests that the image quality of the supposedly 720p slow-motion footage resembles approximately 480p.

Accessories

Earphones known as Apple EarPods were included with the iPhone 5S. According to technology commentators, the design of the earphones is aimed at improving sound quality by allowing air to travel in and out more freely. Apple states that the design of its earphones allows them to "rival high-end headphones that cost hundreds of dollars more". Reviews by Gizmodo and TechRadar reported that although the earphones sounded better than their predecessors, the quality of the sound produced was still poor; TechRadar further opined that the EarPods were inferior to other earphones of a similar price.

Operating system and software

The iPhone 5S was initially supplied with iOS 7, which had been released on September 18, 2013.
Jonathan Ive, the designer of iOS 7's new elements, described the update as "bringing order to complexity", highlighting features such as refined typography, new icons, translucency, layering, physics, and gyroscope-driven parallaxing as some of the major changes to the design. The design of both iOS 7 and OS X Yosemite (version 10.10) noticeably departs from skeuomorphic elements such as the green felt in Game Center, wood in Newsstand, and leather in Calendar, in favor of flat, colorful design.

iOS 7 added AirDrop, an ad-hoc Wi-Fi sharing platform; users can share files with the iPhone 5 onwards, the iPod Touch (5th generation), the iPad (4th generation) onwards, or the iPad Mini (1st generation) onwards. The operating system also added Control Center, a control panel accessed by swiping up from the bottom of the screen. Control Center contains a number of commonly used functions, such as volume and brightness controls, along with toggles for enabling Wi-Fi, Bluetooth and Airplane mode, and for using the rear camera's flash LED as a flashlight.

iTunes Radio, an Internet radio service, was also included on the iPhone 5S. It was a free, ad-supported service available to all iTunes users, featuring Siri integration on iOS. Users were able to skip tracks, customize stations, and purchase the station's songs from the iTunes Store. Users could also search through their history of previous songs.

Apple announced in June 2018 that the iPhone 5S would support the iOS 12 update. This made it one of the longest-supported iOS devices at the time, having received six major versions of the iOS operating system, on par with the iPad 2, which supported iOS 4 through iOS 9. According to Apple, iOS 12 also brought the iPhone 5S speed boosts of up to 70%, including for the camera, keyboard and other functions. The iPhone 5S did not receive iOS 13, released in September 2019.

Apple accessories

During the keynote, Apple announced a case for the iPhone 5S made of soft microfiber on the inside and leather on the outside. This case was announced along with the iPhone 5C's case; they were the first cases Apple had announced since the iPhone 4 Bumpers. Docks for both the iPhone 5S and 5C appeared on the Apple online store after the announcement. Because of the casing difference between the iPhone 5S and 5C, the phones have separate docks, each made specifically for the respective model.

Reception

Critical reception

The iPhone 5S received a positive reception from reviewers and commentators. Walt Mossberg of All Things Digital gave the phone a favorable review, saying that Touch ID "sounds like a gimmick, but it's a real advance, the biggest step ever in biometric authentication for everyday devices," and labeling the phone "the best smartphone on the market." David Pogue of The New York Times praised Touch ID, but said that the innovation of the smartphone market had become saturated, and "maybe the age of annual mega-leaps is over." He focused much of his review on iOS 7, which he believed was the biggest change of the device over previous generations, praising the new Siri features, Control Center, and AirDrop. In an editorial, Pogue stated that iOS 7 was the biggest change in the iPhone series, citing utilitarian interface changes as the main contributor. Scott Stein of CNET criticized the lack of design change over the iPhone 5 and wrote that the iPhone 5S "is not a required upgrade, but it's easily the fastest and most advanced Apple smartphone to date."
Although the iPhone 5S was praised for its camera, 64-bit A7 chip, M7 motion chip, and fingerprint-scanning capabilities, some investors thought that it, although a notable improvement over the iPhone 5, was still relatively unchanged from its predecessor, and worried that the iPhone line had become a stagnant, dull product. Apple's share price fell 5.4% after the launch to close at a month low of $467.71 on the NASDAQ. Darrell Etherington of TechCrunch, who praised the iPhone 5S as the best smartphone available, said that its "looks may not be different from the iPhone 5, but the internal components have a dramatic impact on day-to-day activities normal for a smartphone user," and went into detail explaining the impact of the improved camera and specifications on the phone. Etherington suggested that the 64-bit A7 processor would not reach its full potential until developers created applications supporting it. Myriam Joire of Engadget likewise found that the iPhone 5S could benefit significantly from the A7 if developers created applications optimized for the 64-bit processor. Anand Lal Shimpi of AnandTech praised the phone's A7 processor, describing it as "seriously impressive", and stated that the phone was the most "futureproof of any iPhone ever launched. As much as it pains me to use the word futureproof, if you are one of those people who likes to hold onto their device for a while, the iPhone 5S is as good a starting point as any." Scott Lowe of IGN also spoke highly of the 64-bit processor, "which has a substantial lead in processing power over the HTC One and Samsung Galaxy S4, accounting for a graphics boost of up to 32% and 38% in CPU benchmarks." The debut of Apple's 64-bit A7 processor took rival Android smartphone makers by surprise, particularly Qualcomm, whose own 64-bit system-on-chip was not released until 2015.

Most reviewers recommended the iPhone 5S over the iPhone 5C, which was released at the same time. The 5C retained almost the same hardware as the discontinued iPhone 5, while the iPhone 5S featured substantially improved performance and features thanks to its new 64-bit A7 processor, as well as extra storage space, all for a relatively small additional upfront cost over the iPhone 5C (US$650 versus US$550 in March 2014). This was especially the case when iOS 8 was released and the iPhone 5S and iPhone 5C were moved to the mid and low end of the iPhone range, respectively: the iPhone 5S still had 16 or 32 GB (14.9 or 29.8 GiB) of storage available, while the iPhone 5C had to make do with 8 GB of storage, of which only 4.9 GB was available to the user after installing iOS 8. Furthermore, the 5C's polycarbonate exterior received a mixed reception and was seen as a cost-cutting downgrade compared to the iPhone 5's aluminum/glass case; the 5S retained the latter design and looked even more premium due to its additional gold finish.

As of 2015–16, there were still a significant number of customers who preferred the 4-inch screen size of the iPhone 5S, which remained the second most popular iPhone after the iPhone 6 and ahead of the iPhone 6S. Apple stated at its March 2016 event that it had sold 30 million 4-inch iPhones in 2015, even though that form factor had been succeeded as the flagship iPhone by the redesigned, larger-display 4.7/5.5-inch iPhone 6 and 6 Plus back in September 2014.
Furthermore, the 5/5S design was regarded as having "long been the golden child of Apple phone design and a benchmark for phones in general" (with the 5S's gold finish adding a premium touch to the 5's already well-regarded look), while the succeeding 6 and 6S design was less critically acclaimed, as it "felt a little bit wrong, as though you were holding a slick $650 bar of soap". The iPhone 5 was described as "elegance rooted in the way the aluminum and glass work together. It felt streamlined, yet substantial, which is different from iPhone 6, which feels substantial in size alone. Plus, unlike the ubiquitous rounded corners of the 6, iPhone 5 didn't really look like anything else on the market at the time". However, the iPhone 5/5S design was not suited to scaling up, in contrast to the iPhone 6/6S design, which could better accommodate the growing consumer trend towards larger screen sizes and indeed spawned the 6/6S Plus phablet models. When Apple discontinued the iPhone 5S, it was replaced by the first-generation iPhone SE, which outwardly appears almost identical to the 5S even though the SE's internal hardware was upgraded significantly. Commercial reception The iPhone 5S and 5C sold over nine million units in the first three days, setting a record for first-weekend smartphone sales, with the 5S selling three times more units than the 5C. After the first day of release, 1% of all iPhones in the US were iPhone 5Ss, while 0.3% were iPhone 5Cs. Gene Munster of Piper Jaffray reported that the line at the Fifth Avenue Apple Store contained 1,417 people on release day, compared to 1,300 for the iPhone 4 in 2010 and 549 for the iPhone 3G in 2008 on their respective release days. This was the first time that Apple had launched two models simultaneously. The first-day release in China also contributed to the record sales result. On launch day, major in-stock shortages were reported in most stores across all countries where the iPhone 5S initially went on sale. Many customers in line outside Apple Stores worldwide were left disappointed due to severe shortages across all 5S models, with the gold model in particular in highly limited supply. While this situation eased in the US in the days following the launch, other countries reported receiving few restocks. Some commentators questioned how Apple handled the initial release, as online pre-orders were not offered for the iPhone 5S, meaning large numbers of people queued outside physical stores, and most of those in line did not receive a unit. In the US, Apple offered an online reservation system so customers could keep checking for units available at their local Apple Stores and order for pickup. Online orders were also in short supply on launch day, with the shipping date across all model sizes and colors changing from "7-10 working days" to "October" in all countries within hours of online orders being taken. The iPhone 5S was the best-selling phone on AT&T, Sprint, Verizon, and T-Mobile in September 2013 in the United States, outselling the iPhone 5C and Samsung Galaxy S4. According to Consumer Intelligence Research Partners, the iPhone 5S outsold the 5C by a two-to-one margin during its September release, confirming Apple CEO Tim Cook's view that the high-end smartphone market was not reaching a point of saturation. While commentators viewed the 5C as a flop because supply chain cuts signified a decline in demand, the 5S was viewed as a massive success. 
Apple admitted that it had failed to anticipate the sales ratio, leading to an overstocking of the 5C and shortages of the 5S. Six months after the release of the iPhone 5S, on March 25, 2014, Apple announced that sales of the iPhone brand had exceeded 500 million units. By May 2014, despite having been on the market for eight months, the iPhone 5S reportedly outsold the newly released Samsung Galaxy S5 by 40%, with 7 million iPhone 5S units sold versus 5 million Galaxy S5 units. The Galaxy S5's failure to oust the iPhone 5S from the top-selling spot was a major setback for Samsung Mobile, as the preceding Samsung Galaxy S III and Samsung Galaxy S4 had, in the first quarter of their releases, outsold the iPhone 4S and iPhone 5 respectively. Impact of Touch ID A number of technology writers, including Adrian Kingsley-Hughes of ZDNet and Kevin Roose of New York, believed that the fingerprint-scanning functionality of the iPhone 5S could help spur the adoption of the technology as an alternative to passwords by mainstream users (especially in "bring your own device" scenarios), as fingerprint-based authentication systems had previously enjoyed wide usage only in enterprise environments. However, citing research by biometrics engineer Geppy Parziale, Roose suggested that the CMOS-based sensor could become inaccurate and wear out over time unless Apple had designed the sensor to prevent this from occurring. Brent Kennedy, a researcher at the United States Computer Emergency Readiness Team, recommended that users not immediately rely on the technology, citing uncertainty over whether the system could properly reject a spoofed fingerprint. Following the release of the iPhone 5S, the German Chaos Computer Club announced on September 21, 2013, that it had bypassed Apple's new Touch ID fingerprint sensor by using "easy everyday means". The group explained that the security system had been defeated by photographing a fingerprint from a glass surface and using that captured image to make a latex model thumb, which was then pressed against the sensor to gain access. A spokesman for the group stated, "We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can't change and that you leave everywhere every day as a security token." However, in 2013, 39% of American smartphone users used no security measures at all to protect their smartphones. Others who tried the Chaos Computer Club's method concluded that it is not an easy process in either time or effort, given that the user has to obtain a high-resolution photocopy of a complete fingerprint, special chemicals, and expensive equipment, and that the spoofing process takes some time to achieve. Problems Several problems were experienced with the iPhone 5S's hardware after its release. The most widely reported issue was that the angle reported by the phone's level sensor drifted by several degrees, causing the gyroscope, compass, and accelerometer to become inaccurate. Reports suggested that this was a hardware-induced problem. Some users encountered other problems, such as the phone crashing with a blue screen and then restarting, the power button rattling when the phone was shaken, overheating, the microphone not working, and Touch ID not working for iTunes purchases. Some of these issues have since been fixed by software updates. 
See also List of iOS devices History of iPhone Comparison of smartphones Timeline of iPhone models References External links – official site Products and services discontinued in 2019 Mobile phones introduced in 2013 IOS Discontinued iPhones
47011126
https://en.wikipedia.org/wiki/Dejal
Dejal
Dejal is a company that develops software for Mac OS X. Established by developer David Sinclair in 1991 in Auckland, New Zealand, and since relocated to Portland, Oregon, the company develops and distributes a variety of shareware and freeware applications. Dejal has also released a number of open-source projects to be used by other Mac developers in their software. Dejal's first products were for Apple's System 7; today, the company's products are developed exclusively for Mac OS X. Older software for Mac OS 9 and earlier is still available as freeware, but is no longer supported. In 2002, Dejal released version 1.0 of Simon, a server monitoring tool. Simon can perform a variety of tests, such as pinging a server or checking the content of a web page for changes, at user-specified intervals, and report on the results of the tests. Version 2.1 of the software was rated 3.5 out of 5 by Macworld, which praised the software's extensive notification and report options. The current version is 4.0.3. In 2006, Dejal released version 1.0 of Caboodle, which is designed to collect and organize small pieces of information, ranging from simple text (such as a shopping list or a snippet of code) to more complex items, such as images and internet links. Macworld rated version 1.3.1 of the software 2.5 out of 5, noting in its review that the software was built on a "superb concept" while being somewhat difficult to use and prone to bugs. The current version is 1.5. Current products Simon BlogAssist Caboodle Time Out (freeware) Discontinued products Macfilink FinderFront QuickEncrypt SndConverter SndCataloguer References External links Dejal Developer page on MacUpdate Software companies of New Zealand Software companies based in Oregon Companies based in Portland, Oregon Software companies of the United States Companies established in 1991
53923603
https://en.wikipedia.org/wiki/Synack
Synack
Synack is an American technology company based in Redwood City, California. The company combines AI- and machine-learning-enabled security software with a crowdsourced network of white-hat hackers to help keep its customers secure. The software provides security testing through a SaaS platform, performing reconnaissance to find exploitable vulnerabilities. The company offers its services to government agencies and to businesses in the retail, healthcare, and manufacturing industries. According to Bloomberg, Synack is "the most trusted crowdsourced penetration testing platform." It was valued at US$500 million as of May 2020, according to Fortune magazine. Overview Synack was founded in 2013 by former NSA agents Jay Kaplan and Mark Kuhr. Synack uses a network of freelance security analysts, or hackers, in over 80 countries to find vulnerabilities and security problems. In 2018, Synack worked with the US Department of Defense to strengthen the Hack the Pentagon initiative by vetting ethical hackers for continual assessment of defense websites, hardware, and physical systems. In June 2020, the company partnered with DARPA to check for data leakage and buffer errors in its new security prototype developed through the System Security Integration Through Hardware (SSITH) program. In July 2020, the Colorado secretary of state's office partnered with Synack to conduct penetration tests of its election systems ahead of the presidential vote. Funding Synack is funded by 16 investors. In April 2014, the company announced it had secured Series A funding from Kleiner Perkins Caufield & Byers, Google Ventures, Allegis Capital, and Derek Smith of Shape Security. In February 2015, the company raised $25 million in Series B funding. In April 2017, it raised $21 million from Microsoft Ventures, Hewlett Packard Enterprise, Singtel, and prior investors. Achievements By April 11, 2017, Synack had 100 employees as well as a growing network of freelance hackers. CNBC named Synack a "CNBC Disruptor" company four times in a row, from 2015 to 2019. In 2019, the company was again named among the CNBC Disruptor 50 for its innovative crowdsourced security platform. In 2020, the company was featured in the America's Most Promising Artificial Intelligence Companies list by Forbes magazine and was also named in Gartner's Top 25 Enterprise Software Startups. See also Security hacker References External links synack.com 2013 establishments in California American companies established in 2013 Security companies of the United States Computer security companies Companies based in Menlo Park, California Technology companies based in the San Francisco Bay Area
35636420
https://en.wikipedia.org/wiki/E-commerce%20identification%20and%20identification%20types
E-commerce identification and identification types
A whole new range of techniques has been developed since the 1960s to identify people, from the measurement and analysis of parts of their bodies to DNA profiles. Forms of identification are used to ensure that citizens are eligible for rights to benefits and to vote without fear of impersonation, while private individuals have used seals and signatures for centuries to lay claim to real and personal estate. Generally, the amount of proof of identity that is required to gain access to something is proportionate to the value of what is being sought. It is estimated that only 4% of online transactions use methods other than simple passwords. Security of systems resources generally follows a three-step process of identification, authentication and authorization. Today, a high level of trust is as critical to eCommerce transactions as it is to traditional face-to-face transactions. Identification, authentication and authorization Identification Identification is a scheme established and maintained whereby users are properly, consistently, effectively and efficiently identified before systems are accessed. An identity verification service is often employed to ensure that users or customers provide information that is associated with the identity of a real person. Authentication Authentication is verification of the identity of the entity requesting access to a system. It is the process of determining whether someone or something is, in fact, who or what it is declared to be. In private and public computer networks (including the Internet), authentication is commonly done through the use of logon passwords. Knowledge of the password is assumed to guarantee that the user is authentic. Each user registers initially (or is registered by someone else), using an assigned or self-declared password. On each subsequent use, the user must know and use the previously declared password. The weakness in this system for transactions that are significant (such as the exchange of money) is that passwords can often be stolen, accidentally revealed, or forgotten. For this reason, Internet business and many other transactions require a more stringent authentication process. The use of digital certificates issued and verified by a Certificate Authority (CA) as part of a public key infrastructure is considered likely to become the standard way to perform authentication on the Internet. Logically, authentication precedes authorization (although they may often seem to be combined). Authorization Authorization is the process of giving someone permission to do or have something. In multi-user computer systems, a system administrator defines which users are allowed access to the system and what privileges of use they receive (such as access to which file directories, hours of access, amount of allocated storage space, and so forth). Assuming that someone has logged into a computer operating system or application, the system or application may want to identify what resources the user can be given during this session. Thus, authorization is sometimes seen as both the preliminary setting up of permissions by a system administrator and the actual checking of the permission values that have been set up when a user is getting access. Logically, authorization is preceded by authentication. Types of ecommerce authentication One-time password/Single sign-on - A process in which a user's password and information are used for logon and then become invalid after a set time, as illustrated in the sketch below. 
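As a hedged illustration of the one-time password idea just described, the following sketch derives a time-based one-time password in the style of RFC 6238 using only the Python standard library; the shared secret, six-digit length, and 30-second time step are illustrative assumptions rather than details taken from this article:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style)."""
    counter = int(time.time()) // interval           # current time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both parties hold the same secret; a submitted code only matches within
# the current time window, so it becomes invalid after the set time.
shared_secret = b"illustrative-shared-secret"        # hypothetical secret
print(totp(shared_secret))
```

The server stores the same shared secret, recomputes the code for the current window, and grants access only on a match, which is what makes the password "one-time".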
Two-factor authentication - This requires two forms of authentication before access can be granted to a user. Multi-factor authentication - Multi-factor authentication requires a user ID and password combined with another form of authentication, such as a smart card or biometric. Using this method decreases the likelihood that an unauthorized person can compromise an electronic security system, but it also increases the cost of maintaining that system. Electronic access card/Smart card - Smart cards are credit-card-sized plastic cards that house an embedded integrated circuit. They can be used in electronic commerce for providing personal security, stored value and mobility. At the functional level, smart cards can be categorised as either memory cards or microprocessor cards. Memory cards, such as disposable pre-paid payphone cards or loyalty cards, are the cheapest form of smart card. They contain a small amount of memory in the form of ROM (read-only memory) and EEPROM (electrically erasable programmable read-only memory). Microprocessor cards are more advanced than simple memory cards in that they contain a microprocessor CPU (central processing unit) and RAM (random access memory) in addition to ROM and EEPROM. The ROM contains the card's operating system and factory-loaded applications. Security token - A security token is "an authentication device that has been assigned to a specific user by an appropriate administrator". It relies on something the user has, such as a passport or driver's license, to identify them. "Most security tokens also incorporate two-factor authentication methods to work effectively". Keystroke dynamics - This is an automated form of authentication based on something the user does. It authenticates the user based on their keyboard typing pattern. Biometric - Biometric-based systems enable the automatic identification and/or authentication of individuals. Authentication answers the question: "Am I who I claim to be?". The system verifies the identity of the person by processing biometric data that refers to the person asking, and takes a yes/no decision (1:1 comparison). On the other hand, identification answers the question: "Who am I?". The system recognizes the individual who asks by distinguishing him from other persons whose biometric data is also stored in the database. In this case the system takes a 1-of-n decision, and answers that the person who asks is X if his/her biometric data is stored in the database, or that there is no match at all. Although the identification function should be regarded as distinct from authentication from an application perspective, systems using biometrics often integrate both identification and authentication functions, since the former is a repetitive execution of the latter. Types of biometric authentication Fingerprint recognition - Fingerprint recognition is the most widely used form of biometric authentication, in which the pattern of a user's fingertip is used. It can be deployed in a broad range of environments and provides flexibility and increased system accuracy by allowing users to enrol multiple fingers in the template system. Facial recognition - It uses data related to the unique facial features of a user. It involves analyzing facial characteristics. It is a unique biometric in that it does not require the cooperation of the scanned individual; it can utilize almost any high-resolution image acquisition device such as a still or motion camera. Voice pattern - This form of authentication uses the unique pattern of a user's voice; 
it relies on voice-to-print technologies, not voice recognition. In this process, a person's voice is transformed into text and compared to an original template. Although this is a fairly easy technology to implement, because many computers already have built-in microphones, the enrollment procedure is more complicated than for other biometrics, and background noise can interfere with the scanning, which can be frustrating to the user. Handwritten signature - Signature verification analyses the way a person signs their name, such as speed and pressure, as well as the final static shape of the signature itself. Retina recognition - This is a method of biometric authentication that uses data related to unique characteristics associated with the pattern of blood vessels located at the back of an individual's eyes. The technology is personally invasive and requires skilled operators. It results in retina codes of 96 bytes when used for authentication, and of up to some kilobytes in the case of identification. Facial recognition techniques exploit characteristics such as the relative positioning of the eyes, nose and mouth, and the distances between them. Iris recognition - A form of authentication that uses data linked to features associated with the colored part of the eye of a user. It involves analyzing the patterns of the colored part of the eye surrounding the pupil. It uses a fairly normal camera and does not require close contact between the eye and the scanner. Glasses can be worn during an iris scan, unlike a retinal scan. Other forms of authentication Mutual authentication - This is the process by which each party in an electronic communication verifies the identity of the other. For instance, a bank clearly has an interest in positively identifying an account holder prior to allowing a transfer of funds; however, the bank customer also has a financial interest in knowing he is communicating with the bank's server prior to providing any personal information. Digital certificate - A digital certificate is an electronic "credit card" that establishes your credentials when doing business or other transactions on the Web. It is issued by a certification authority (CA). It contains your name, a serial number, an expiration date, a copy of the certificate holder's public key (used for encrypting messages and digital signatures), and the digital signature of the certificate-issuing authority so that a recipient can verify that the certificate is real. Some digital certificates conform to a standard, X.509. Digital certificates can be kept in registries so that authenticating users can look up other users' public keys. Digital certificates are used in a variety of transactions including e-mail, electronic commerce, and the electronic transfer of funds. When combined with encryption and digital signatures, digital certificates provide individuals and organizations with a means of privately sharing information so that each party is confident that the individual or organization with which they are communicating is in fact who it claims to be (a sketch of this verification step appears below). Hand geometry authentication - Hand geometry techniques exploit hand shape characteristics, such as finger length and width. This leads to quite a small amount of data (about 9 bytes), thus restricting their application to simple authentication purposes only. Also, their behaviour with respect to the desirable properties of a biometric is moderate. 
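The following is a minimal, hedged sketch of the certificate-signature check described in the digital certificate entry above, using the widely available Python cryptography package; the PEM file names are hypothetical placeholders, the certificate is assumed to carry an RSA/PKCS#1 v1.5 signature, and a real validator would also check validity dates, revocation status, and the full chain of trust:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load the certificate to verify and the issuing CA's certificate.
# The file names are illustrative assumptions, not from this article.
with open("server_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("ca_cert.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())

# Verify that the CA's public key produced the certificate's signature
# over its to-be-signed bytes; raises InvalidSignature on failure.
ca_cert.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    cert.signature_hash_algorithm,
)
print("Signature verified for:", cert.subject.rfc4514_string())
```

This is the mechanical core of "verifying that the certificate is real": the recipient checks the issuing authority's digital signature against the certificate's contents.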
The iris, the circular coloured membrane surrounding the pupil of the eye, is a unique structure consisting of specific characteristics such as striations, furrows, rings, crypts, filaments, and corona. Iris patterns are characterised by very high distinctiveness; even twins have different ones. The probability that two individuals have the same iris pattern is about 10^-52. The probability that two distinct iris patterns result in the same iris code (about 256 bytes) used by a biometric system is negligible (about 10^-78), thus allowing almost perfect matching accuracy. Kerberos authentication - This is a form of authentication that provides a mechanism for authenticating a client and a server, or one server to another. CHAP authentication - This is a Point-to-Point Protocol (PPP) mechanism used by an authenticator to authenticate a peer. Quantitative authentication - Quantitative authentication is an authentication approach where someone requesting access is required to attain a certain "authentication level" before being granted access. Detailed discussions on quantitative authentication have been undertaken. See also Access control Authorization Biometrics Smart card References External links NIST Publication website Office of the privacy commissioner of Canada information guide Authentication methods
1100187
https://en.wikipedia.org/wiki/Rafael%20Moreu
Rafael Moreu
Rafael Moreu is an American screenwriter, best known for his work in horror films and thrillers. Career Moreu wrote the movies Hackers (1995) and The Rage: Carrie 2 (1999). He saw Hackers as being about more than just computer hacking, but something much larger: "In fact, to call hackers a counterculture makes it sound like they're a transitory thing; I think they're the next step in human evolution." He had been interested in hacking since the early 1980s. After the crackdown on hackers in the United States during 1989 and 1990, he decided to write a script about the subculture. For research, Moreu went to a meeting organized by the New York-based hacker magazine 2600: The Hacker Quarterly. There, he met Phiber Optik, a.k.a. Mark Abene, a 22-year-old hacker who spent most of 1994 in prison on hacking charges. Moreu also spent time with other young hackers being harassed by the government and began to figure out how their world would translate into a film. He remembered, "One guy was talking about how he'd done some really interesting stuff with a laptop and payphones and that cracked it for me, because it made it cinematic". The Rage: Carrie 2, which was originally titled The Curse, was initially scheduled to start production in 1996 with Emily Bergl in the lead, but production stalled for two years. The film eventually went into production in 1998 under the title Carrie 2: Say You're Sorry. References External links American male screenwriters Living people Year of birth missing (living people)
18562
https://en.wikipedia.org/wiki/Leet
Leet
Leet (or "1337"), also known as eleet or leetspeak, is a system of modified spellings used primarily on the Internet. It often uses character replacements in ways that play on the similarity of their glyphs via reflection or other resemblance. Additionally, it modifies certain words based on a system of suffixes and alternate meanings. There are many dialects or linguistic varieties in different online communities. The term "leet" is derived from the word elite, used as an adjective to describe skill or accomplishment, especially in the fields of online gaming and computer hacking. The leet lexicon includes spellings of the word as 1337 or leet. History Leet originated within bulletin board systems (BBS) in the 1980s, where having "elite" status on a BBS allowed a user access to file folders, games, and special chat rooms. The Cult of the Dead Cow hacker collective has been credited with the original coining of the term, in their text-files of that era. One theory is that it was developed to defeat text filters created by BBS or Internet Relay Chat system operators for message boards to discourage the discussion of forbidden topics, like cracking and hacking. Creative misspellings and ASCII-art-derived words were also a way to attempt to indicate one was knowledgeable about the culture of computer users. Once reserved for hackers, crackers, and script kiddies, leet has since entered the mainstream. It is now also used to mock newbies, also known colloquially as n00bs, or newcomers, on websites, or in gaming communities. Some consider emoticons and ASCII art, like smiley faces, to be leet, while others maintain that leet consists of only symbolic word encryption. More obscure forms of leet, involving the use of symbol combinations and almost no letters or numbers, continue to be used for its original purpose of encrypted communication. It is also sometimes used as a scripting language. Variants of leet have been used for censorship purposes for many years; for instance "@$$" (ass) and "$#!+" (shit) are frequently seen to make a word appear censored to the untrained eye but obvious to a person familiar with leet. This enables coders and programmers especially to circumvent filters and speak about topics that would usually get banned. "Hacker" would end up as "H4x0r", for example. Leet symbols, especially the number 1337, are Internet memes that have spilled over into popular culture. Signs that show the numbers "1337" are popular motifs for pictures and are shared widely across the Internet. One of the earliest public examples of this substitution would be the album cover of Journey's Escape album which is stylized on the cover as "E5C4P3". Orthography One of the hallmarks of leet is its unique approach to orthography, using substitutions of other letters, or indeed of characters other than letters, to represent letters in a word. For more casual use of leet, the primary strategy is to use homoglyphs, symbols that closely resemble (to varying degrees) the letters for which they stand. The choice of symbol is not fixed: anything the reader can make sense of is valid. However, this practice is not extensively used in regular leet; more often it is seen in situations where the argot (i.e., secret language) characteristics of the system are required, either to exclude newbies or outsiders in general, i.e., anything that the average reader cannot make sense of is valid; a valid reader should themselves try to make sense, if deserving of the underlying message. 
Another use for leet orthographic substitutions is the creation of paraphrased passwords. Limitations imposed by websites on password length (usually no more than 36 characters) and on the characters permitted (e.g. alphanumeric and symbols) require less extensive forms when leet is used in this application. Some examples of leet include B1ff and n00b, a term for the stereotypical newbie; the l33t programming language; and the web-comics Megatokyo and Homestuck, which contain characters who speak variations of leet. Morphology Text rendered in leet is often characterized by distinctive, recurring forms. -xor suffix The meaning of this suffix is parallel with the English -er and -or suffixes (seen in hacker and lesser) in that it derives agent nouns from a verb stem. It is realized in two different forms: -xor and -zor. For example, the first may be seen in the word hax(x)or (H4x0r in leet) and the second in pwnzor. Additionally, this nominalization may also be inflected with all of the suffixes of regular English verbs. The letter 'o' is often replaced with the numeral 0. -age suffix Derivation of a noun from a verb stem is possible by attaching -age to the base form of any verb. Attested derivations are pwnage, skillage, and speakage. However, leet provides exceptions; the word leetage is acceptable, referring to actively being leet. These nouns are often used with a form of "to be" rather than "to have," e.g., "that was pwnage" rather than "he has pwnage". Either is a more emphatic way of expressing the simpler "he pwns," but the former implies that the person is embodying the trait rather than merely possessing it. -ness suffix Derivation of a noun from an adjective stem is done by attaching -ness to any adjective. This is entirely the same as the English form, except it is used much more often in Leet. Nouns such as lulzness and leetness are derivations using this suffix. Words ending in -ed When forming a past participle ending in -ed, the Leet user may replace the -e with an apostrophe, as was common in poetry of previous centuries (e.g. "pwned" becomes "pwn'd"). Sometimes, the apostrophe is removed as well (e.g. "pwned" becomes "pwnd"). The word ending may also be substituted by -t (e.g. pwned becomes pwnt). Use of the -& suffix Words ending in -and, -anned, -ant, or a similar sound can sometimes be spelled with an ampersand (&) to express the ending sound (e.g. "This is the s&box", "I'm sorry, you've been b&", "&hill/&farm"). It is most commonly used with the word banned. An alternate form of "B&" is "B7", as the ampersand shares the "7" key on the standard US keyboard. It is often seen in the phrase "IBB7" (in before banned), which indicates that the poster believes that a previous poster will soon be banned from the site, channel, or board on which they are posting. Grammar Leet can be pronounced as a single syllable, rhyming with eat, by way of apheresis of the initial vowel of "elite". It may also be pronounced as two syllables. Like hacker slang, leet enjoys a looser grammar than standard English. The loose grammar, just like loose spelling, encodes some level of emphasis, ironic or otherwise. A reader must rely more on intuitive parsing of leet to determine the meaning of a sentence rather than the actual sentence structure. In particular, speakers of leet are fond of verbing nouns, turning verbs into nouns (and back again) as forms of emphasis, e.g. 
"Austin rocks" is weaker than "Austin roxxorz" (note spelling), which is weaker than "Au5t1N is t3h r0xx0rz" (note grammar), which is weaker than something like "0MFG D00D /\Ü571N 15 T3H l_l83Я 1337 Я0XX0ЯZ" (OMG, dude, Austin is the über-elite rocks-er!). In essence, all of these mean "Austin rocks," not necessarily the other options. Added words and misspellings add to the speaker's enjoyment. Leet, like hacker slang, employs analogy in construction of new words. For example, if haxored is the past tense of the verb "to hack" (hack → haxor → haxored), then winzored would be easily understood to be the past tense conjugation of "to win," even if the reader had not seen that particular word before. Leet has its own colloquialisms, many of which originated as jokes based on common typing errors, habits of new computer users, or knowledge of cyberculture and history. Leet is not solely based upon one language or character set. Greek, Russian, and other languages have leet forms, and leet in one language may use characters from another where they are available. As such, while it may be referred to as a "cipher", a "dialect", or a "language", leet does not fit squarely into any of these categories. The term leet itself is often written 31337, or 1337, and many other variations. After the meaning of these became widely familiar, 10100111001 came to be used in its place, because it is the binary form of 1337 decimal, making it more of a puzzle to interpret. An increasingly common characteristic of leet is the changing of grammatical usage so as to be deliberately incorrect. The widespread popularity of deliberate misspelling is similar to the cult following of the "All your base are belong to us" phrase. Indeed, the online and computer communities have been international from their inception, so spellings and phrases typical of non-native speakers are quite common. Vocabulary Many words originally derived from leet have now become part of modern Internet slang, such as "pwned". The original driving forces of new vocabulary in leet were common misspellings and typing errors such as "teh" (generally considered lolspeak), and intentional misspellings, especially the "z" at the end of words ("skillz"). Another prominent example of a surviving leet expression is w00t, an exclamation of joy. w00t is sometimes used as a backronym for "We owned the other team." New words (or corruptions thereof) may arise from a need to make one's username unique. As any given Internet service reaches more people, the number of names available to a given user is drastically reduced. While many users may wish to have the username "CatLover," for example, in many cases it is only possible for one user to have the moniker. As such, degradations of the name may evolve, such as "C@7L0vr." As the leet cipher is highly dynamic, there is a wider possibility for multiple users to share the "same" name, through combinations of spelling and transliterations. Additionally, leet—the word itself—can be found in the screen-names and gamertags of many Internet and video games. Use of the term in such a manner announces a high level of skill, though such an announcement may be seen as baseless hubris. Terminology and common misspellings Warez (nominally ) is a plural shortening of "software", typically referring to cracked and redistributed software. Phreaking refers to the hacking of telephone systems and other non-Internet equipment. Teh originated as a typographical error of "the", and is sometimes spelled t3h. 
j00 takes the place of "you", originating from the affricate sound that occurs in place of the palatal approximant when "you" follows a word ending in an alveolar plosive consonant. Also borrowed from German is über, which means "over" or "above"; it usually appears as a prefix attached to adjectives, and is frequently written without the umlaut over the u. Haxor and suxxor (suxorz) Haxor, and derivations thereof, is leet for "hacker", and it is one of the most commonplace examples of the use of the -xor suffix. Suxxor (pronounced suck-zor) is a derogatory term which originated in warez culture and is currently used in multi-user environments such as multiplayer video games and instant messaging; like haxor, it is one of the early leet words to use the -xor suffix. Suxxor is a modified version of "sucks" (the phrase "to suck"), and the meaning is the same as the English slang. Suxxor can be mistaken for Succer/Succker if used in the wrong context. Its negative definition essentially makes it the opposite of roxxor, and both can be used as a verb or a noun. The letters ck are often replaced with the Greek Χ (chi) in other words as well. n00b Within leet, the term n00b, and derivations thereof, is used extensively. The word means and derives from newbie (as in new and inexperienced or uninformed), and is used as a means of segregating them as less than the "elite," or even "normal," members of a group. Owned and pwned Owned and pwned (generally pronounced "poned") both refer to the domination of a player in a video game or argument (rather than just a win), or the successful hacking of a website or computer. It is a slang term derived from the verb own, meaning to appropriate or to conquer to gain ownership. As is a common characteristic of leet, the terms have also been adapted into noun and adjective forms, ownage and pwnage, which can refer to the situation of pwning or to the superiority of its subject (e.g., "He is a very good player. He is pwnage."). The term was created accidentally by the misspelling of "own" in video game design due to the keyboard proximity of the "O" and "P" keys. It implies domination or humiliation of a rival, used primarily in the Internet-based video game culture to taunt an opponent who has just been soundly defeated (e.g., "You just got pwned!"). In 2015 Scrabble added pwn to their Official Scrabble Words list. Pr0n Pr0n is slang for pornography. This is a deliberately inaccurate spelling/pronunciation for porn, where a zero is often used to replace the letter O. It is sometimes used in legitimate communications (such as email discussion groups, Usenet, chat rooms, and Internet web pages) to circumvent language and content filters, which may reject messages as offensive or spam. The word also helps prevent search engines from associating commercial sites with pornography, which might result in unwelcome traffic. Pr0n is also sometimes spelled backwards (n0rp) to further obscure the meaning to potentially uninformed readers. It can also refer to ASCII art depicting pornographic images, or to photos of the internals of consumer and industrial hardware. Prawn, a spoof of the misspelling, has started to come into use, as well; in Grand Theft Auto: Vice City, a pornographer films his movies on "Prawn Island". Conversely, in the RPG Kingdom of Loathing, prawn, referring to a kind of crustacean, is spelled pr0n, leading to the creation of food items such as "pr0n chow mein". 
See also Calculator spelling Faux Cyrillic Geek Code Jargon File, a glossary and usage dictionary of computer programmer slang Padonkaffsky jargon Notes Footnotes References Further reading External links Leet Translator Alphabets Encodings In-jokes Internet culture Internet memes Internet slang Latin-script representations Nerd culture Nonstandard spelling Obfuscation Social networking services 1990s slang
24755801
https://en.wikipedia.org/wiki/Barnes%20%26%20Noble%20Nook%201st%20Edition
Barnes & Noble Nook 1st Edition
The Nook 1st Edition (styled "nook") is the first generation of the Nook e-book reader developed by American book retailer Barnes & Noble, based on the Android platform. The device was announced in the United States in October 2009 and was released the next month. The Nook includes Wi-Fi and AT&T 3G wireless connectivity, a six-inch E Ink display, and a separate, smaller color touchscreen that serves as the primary input device. In June 2010 Barnes & Noble announced a Wi-Fi-only model of the Nook. On June 5, 2018, Barnes & Noble announced that support for logging in to BN.com and adding new content to the device would end on June 29, 2018. The second-generation Nook, the Nook Simple Touch, was announced May 25, 2011 with a June 10 release date. History 3G + Wi-Fi version This version made its debut on November 22, 2009, at a retail price of $259, and came with built-in 3G + Wi-Fi connectivity for free access to the Barnes & Noble online store. The price was reduced to $199 on June 21, 2010, upon the release of the new Nook Wi-Fi. The final price drop was made on May 25, 2011, to a closeout price of $169, at the same time as the announcement of the new Nook, named the Nook Simple Touch. Wi-Fi version This version made its debut on June 21, 2010, at a retail price of $149. It is a version of the Nook 1st Edition that supports Wi-Fi only and not 3G wireless, and it was launched with firmware version 1.4 preinstalled. It is easily distinguishable physically from the gray-backed 3G + Wi-Fi version due to its white back color. A price reduction was made on May 25, 2011, dropping to a closeout price of $119, coinciding with the announcement of the new Nook, named the Nook Simple Touch Reader. Features The original Nook provides a black-and-white electronic ink (E Ink) display for viewing digital content, with most navigation and additional content provided through a color touchscreen. Pages are turned using arrow buttons on each side of the Nook or by making a swipe gesture on the touchscreen. The original Nook connects to Barnes & Noble's digital store through a free connection to AT&T's 3G network or through available Wi-Fi connections. Users can read books without a wireless connection; disconnecting the wireless connection can extend the battery's charge to up to ten days. The device has a microSD expansion slot for extra storage and a user-replaceable rechargeable battery. The battery can be charged through either an AC adapter or a micro-USB 2.0 cable, both included with new Nooks. The device also includes a web browser, a built-in dictionary, Chess and Sudoku, an audio player, speakers, and a 3.5 mm headphone jack. Supported ebook file formats with DRM include: eReader PDB with Barnes & Noble's eReader DRM, sometimes called Secure eReader format (original Nook only) EPUB with Barnes & Noble's eReader DRM, used for ebooks downloaded wirelessly to the Nook EPUB with Adobe ADEPT DRM, sometimes called Adobe EPUB or Adobe Digital Editions format PDF with Adobe ADEPT DRM (however, figures and equations will not appear) The EPUB with eReader DRM combination is a new format created for the Nook. Adobe has undertaken to include support for that combination in future releases of Adobe Acrobat mobile software, to allow other reader devices to support that format. Supported ebook file formats without DRM include: EPUB eReader PDB (original Nook only) PDF, including password-protected PDF but not Vitrium-protected PDF Supported sound file formats for music and audiobooks include MP3 and Ogg Vorbis, but not WMA. 
Only the original Nook and the Nook Color support sound files. The Nook supports the image file formats JPG, GIF, PNG, and BMP, used for book cover thumbnails, wallpapers, and screen savers. The Nook provides a "LendMe" feature allowing users to share some books with other people, depending on licensing by the book's publisher. The buyer is permitted to share a book once with one other user for up to two weeks. Users can share purchased books with others who use Barnes & Noble's reader application software for Android, BlackBerry, iPad, iPhone, iPod Touch, Mac OS X, Windows, and other platforms. The Nook system recognizes physical Barnes & Noble stores. Customers using the Nook in Barnes & Noble stores receive access to special content and offers while the device is connected to the store's Wi-Fi. Further, with the 1.3 software update, most e-books in the catalog can be read for up to an hour while connected to the store Wi-Fi network. Because Barnes & Noble does not make the Nook available outside the United States, a device taken overseas can neither access a 3G connection nor buy books from the Barnes & Noble Nook Book Store. The Nook is still capable of accessing the same Book Store through Wi-Fi and downloading free books from it outside of the U.S. Software versions Barnes & Noble distributes software updates automatically "over the air" or through a manual download. Version 1.0 Launch version on the Nook, which made its debut on November 22, 2009. Version 1.1 Released in December 2009, consists mostly of minor bug fixes. Version 1.2 Released in February 2010, improved the device's responsiveness, bookmarking, in-store connectivity, and battery optimization. The update also included interface changes intended to improve navigation of daily subscriptions, clarify LendMe features, and allow sorting of personal files on the device. Version 1.3 Released in April 2010, added a web browser (in beta), the games Chess and Sudoku, and more options for Wi-Fi connectivity. Other new features included the ability to read complete ebooks for free in Barnes & Noble stores for an hour at a time, the option to pre-order ebooks that are yet unreleased, minor modifications to the user interface, and improved performance when opening ebooks and turning pages. Version 1.4 Released on June 21, 2010, added extended AT&T Wi-Fi hotspot support, a new extra-extra-large font size, and a Go-to-Page feature. Version 1.5 Released on November 22, 2010, added optional password protection for the device and for making purchases, a "My Shelves" feature for organizing the user's e-book library, and automatic syncing of the last page read across multiple devices. Other improvements include faster page turning and improved search options. Version 1.6 Released on June 6, 2011, included "minor system updates". Version 1.7 Released on June 20, 2011, included "minor system updates". Nook apps Free Nook eReader applications are available that allow eBook purchases to be read on iPhone, iPad, Android, and BlackBerry devices without the need for a Nook eReader. Originally, there were also desktop versions for Mac and PC; these were quietly withdrawn in mid-2013, and users were pointed to a web-based version instead. A virtual bookmark can be synced across the devices a reader uses. Hacking Some Nook users have loaded Android applications on the Nook, such as Pandora, a web browser, a Twitter client called Tweet, Google Reader, and a Facebook application. 
Many general Android applications running on the Nook present interactive areas of their interface on the E Ink display, making such applications difficult to manipulate on the device. However, Android applications optimized for the Nook screen are also available, including app launchers, browsers, library managers, and an online book catalog browser and feed reader. Although gaining superuser (root) access to install software on the Nook initially required physical disassembly of the device, users can now gain root access using software alone. A new hardware revision introduced in August 2010, identifiable by a serial number starting with 1003 and running firmware 1.4.1, requires different software than the older models; attempting to gain root access using software designed for older models renders the unit unusable. As of October 2010, a new method involving spoofing a DNS entry has been found to root 1.4.1 Nooks. Availability Barnes & Noble made the Nook available for pre-order in the United States for $259 following its announcement on October 20, 2009, and began shipping on November 30, 2009. The device was available for demonstration and display in Barnes & Noble retail stores in early December. Barnes & Noble began selling the Nook in-store in February 2010. Due to the large number of pre-orders, the initial launch of the product involved multiple shipment dates depending on when customers ordered the Nook. The first shipment occurred as planned on November 30, but delays occurred with subsequent shipments as demand for the product exceeded production. Further shipments occurred between December and February. Barnes & Noble sent a $100 gift certificate via email to customers who had been promised delivery by December 24, 2009, but whose shipment was delayed past December 25. Reception The Nook initially received mixed reviews, ranging from favorable reviews from Time, Money, and PC Magazine to more critical reviews in Engadget and The New York Times. PC Magazine noted the color touchscreen, Wi-Fi and 3G connectivity, and large ebook library as advantages over the Nook's competitors, with a lack of support for HTML and Microsoft's .doc file format seen as negatives. Money compared the Nook favorably to the Amazon Kindle and the Sony Reader Touch Edition. ZDNet blogger Matthew Miller called the Nook "the king of connectivity and content" and wrote favorably about the lending feature and support for PDF and ePub files. Time listed the Nook as one of its "Top 10 Gadgets of 2009". Critics pointed to the Nook's "sluggish" performance and user interface design, with The New York Times reviewer David Pogue writing that the Nook suffered from "half-baked software." Pogue later demonstrated, using a postal scale, that the Nook's weight differed from the product specifications advertised by Barnes & Noble (12.1 ounces rather than the 11.2 ounces the company had advertised). Engadget reviewer Joshua Topolsky argued that menu responsiveness and organization were not optimal but commented that "many of the problems seem like they could be fixed with firmware tweaks." PC Magazine wrote that the 1.3 firmware update, released after most reviews of the Nook, improved the device's responsiveness: "On the original Nook, page turning took twice as long as page turning on the Kindle – two seconds compared to one second. With the 1.3 firmware update, it's about a tenth of a second slower than the Kindle, but the difference is negligible." 
In early January 2010, the Nook was presented with the TechCrunch Best New Gadget Crunchie award for 2009. See also List of Android devices Comparison of e-book readers Amazon Kindle Nook Simple Touch Nook Color References External links Barnes & Noble Android (operating system) devices Dedicated e-book devices Products introduced in 2009